All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications through 2023 (with some early 2024 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2008 |
M. Wittenberg; Frank Bremmer; T. Wachtler Perceptual evidence for saccadic updating of color stimuli Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–9, 2008. @article{Wittenberg2008, In retinotopically organized areas of the macaque visual cortex, neurons have been found that shift their receptive fields before a saccade to their postsaccadic position. This saccadic remapping has been interpreted as a mechanism contributing to perceptual stability of space across eye movements. So far, there is only limited evidence for similar mechanisms that support perceptual stability of visual objects by remapping the representation of object features across saccades. In our present study, we investigated whether color stimuli presented before a saccade affected the perception of color stimuli at the same spatial position after the saccade. We found that the perceived hue of a postsaccadically flashed stimulus was systematically shifted toward the color of a presaccadically presented stimulus. This finding would be in accordance with a saccadic remapping process that preactivates, prior to a saccade, the neurons that represent a stimulus after the saccade at this very location. Such a remapping of visual object features could contribute to the stable perception of the visual world across saccades. |
Ronald Berg; Frans W. Cornelissen; Jos B. T. M. Roerdink Perceptual dependencies in information visualization assessed by complex visual search Journal Article In: ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1–21, 2008. @article{Berg2008, A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors that were used in our study are concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks. |
Menno Van Der Schoot; Alain L. Vasbinder; Tako M. Horsley; Ernest C. D. M. Van Lieshout The role of two reading strategies in text comprehension: An eye fixation study in primary school children Journal Article In: Journal of Research in Reading, vol. 31, no. 2, pp. 203–223, 2008. @article{VanDerSchoot2008, This study examined whether 10–12-year-old children use two reading strategies to aid their text comprehension: (1) distinguishing between important and unimportant words; and (2) resolving anaphoric references. Of interest was the question to what extent use of these reading strategies was predictive of reading comprehension skill over and above decoding skill and vocabulary. Reading strategy use was examined by the recording of eye fixations on specific target words. In contrast to less successful comprehenders, more successful comprehenders invested more processing time in important than in unimportant words. On the other hand, they needed less time to determine the antecedent of an anaphor. The results suggest that more successful comprehenders build a more effective mental model of the text than less successful comprehenders in at least two ways. First, they allocate more attention to the incorporation of goal-relevant than goal-irrelevant information into the model. Second, they ascertain that the text model is coherent and richly connected. |
Stefan Van der Stigchel; Jan Theeuwes Differences in distractor-induced deviation between horizontal and vertical saccade trajectories Journal Article In: NeuroReport, vol. 19, no. 2, pp. 251–254, 2008. @article{VanderStigchel2008, The present study systematically investigated the influence of a distractor on horizontal and vertical eye movements. Results showed that both horizontal and vertical eye movements deviated away from the distractor but these deviations were stronger for vertical than for horizontal movements. As trajectory deviations away from a distractor are generally attributed to inhibition applied to the distractor, this suggests that this deviation is not only due to differences in activity between the two collicular motor maps, but can also be evoked by local application of inhibitory processes in the same map as the target. Nonetheless, deviations were more dominant for vertical movements which suggests that for these movements more inhibition is applied than for horizontal movements. |
Stefan Van der Stigchel; Wieske Zoest; Jan Theeuwes; Jason J. S. Barton The influence of "blind" distractors on eye movement trajectories in visual hemifield defects Journal Article In: Journal of Cognitive Neuroscience, vol. 20, no. 11, pp. 2025–2036, 2008. @article{VanderStigchel2008a, There is evidence that some visual information in blind regions may still be processed in patients with hemifield defects after cerebral lesions ("blindsight"). We tested the hypothesis that, in the absence of retinogeniculostriate processing, residual retinotectal processing may still be detected as modifications of saccades to seen targets by irrelevant distractors in the blind hemifield. Six patients were presented with distractors in the blind and intact portions of their visual field and participants were instructed to make eye movements to targets in the intact field. Eye movements were recorded to determine if blind-field distractors caused deviation in saccadic trajectories. No deviation was found in one patient with an optic chiasm lesion, which affects both retinotectal and retinogeniculostriate pathways. In five patients with lesions of the optic radiations or the striate cortex, the results were mixed, with two of the five patients showing significant deviations of saccadic trajectory away from the "blind" distractor. In a second experiment, two of the five patients were tested with the target and the distractor more closely aligned. Both patients showed a "global effect," in that saccades deviated toward the distractor, but the effect was stronger in the patient who also showed significant trajectory deviation in the first experiment. Although our study confirms that distractor effects on saccadic trajectory can occur in patients with damage to the retinogeniculostriate visual pathway but preserved retinotectal projections, there remain questions regarding what additional factors are required for these effects to manifest themselves in a given patient. |
Stan Van Pelt; W. Pieter Medendorp Updating target distance across eye movements in depth Journal Article In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2281–2290, 2008. @article{VanPelt2008, We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain was to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain was to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching. |
Wieske Zoest; Mieke Donk Goal-driven modulation as a function of time in saccadic target selection Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 10, pp. 1553–1572, 2008. @article{Zoest2008, Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature. |
Wieske Zoest; Stefan Van der Stigchel; Jason J. S. Barton Distractor effects on saccade trajectories: A comparison of prosaccades, antisaccades, and memory-guided saccades Journal Article In: Experimental Brain Research, vol. 186, no. 3, pp. 431–442, 2008. @article{Zoest2008a, The present study investigated the contribution of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display. |
André Vandierendonck; Maud Deschuyteneer; Ann Depoorter; Denis Drieghe Input monitoring and response selection as components of executive control in pro-saccades and anti-saccades Journal Article In: Psychological Research, vol. 72, no. 1, pp. 1–11, 2008. @article{Vandierendonck2008, Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control. |
Chie Nakatani; Cees Van Leeuwen A pragmatic approach to multi-modality and non-normality in fixation duration studies of cognitive processes Journal Article In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–12, 2008. @article{Nakatani2008, Interpreting eye-fixation durations in terms of cognitive processing load is complicated by the multimodality of their distribution. An important source of multimodality is the distinction between single and multiple fixations to the same object. Based on the distinction, we separated the log-transformed distribution of fixation durations made to an object in a non-reading task. We could reasonably conclude that the separated distributions belong to the same, general logistic distribution, which has a finite population mean and variance. This allowed us to use the sample means as dependent variables in a parametric analysis. Six tasks were compared, which required different levels of post-perceptual processing. A no-task control condition was added to test for perceptual processing. Fixation durations differentiated task-specific perceptual, but not post-perceptual processing demands. |
Harold T. Nefs; J. M. Harris Induced motion in depth and the effects of vergence eye movements Journal Article In: Journal of Vision, vol. 8, no. 3, pp. 1–16, 2008. @article{Nefs2008, Induced motion is the false impression that physically stationary objects move when in the presence of other objects that really move. In this study, we investigated this motion illusion in the depth dimension. We raised three related questions, as follows: (1) What cues in the stimulus are responsible for this motion illusion in depth? (2) Is the size of this illusion affected by vergence eye movements? And (3) are the effects of eye movements different for motion in depth and for motion in the frontoparallel plane? To answer these questions, we measured the point of subjective stationarity. Observers viewed an inducer target that oscillated in depth and a test target that was located directly above it. The test target moved in phase or out of phase with the inducer, but with a smaller amplitude. Observers had to indicate whether the test target and the inducer target moved in phase or out of phase with one another. They were asked to keep their eyes either on the test target or on the inducer. For motion in depth, created by binocular disparity and retinal size change or by binocular disparity alone, we found that when the eyes followed the inducer, subjective stationarity occurred at approximately 40-45% of the inducer's amplitude. When the eyes were kept fixated on the test target, the bias decreased tenfold to around 4%. When size change was the only cue to motion in depth, there was no illusory motion. When the eyes were kept on an inducer moving in the frontoparallel plane, induced motion was of the same order as for induced motion in depth, namely, approximately 44%. When the induced motion was in the frontoparallel plane, we found that perceived stationarity occurred at approximately 23% of inducer's amplitude when the eyes were kept on the test target. |
Mark B. Neider; Gregory J. Zelinsky Exploring set size effects in scenes: Identifying the objects of search Journal Article In: Visual Cognition, vol. 16, no. 1, pp. 1–10, 2008. @article{Neider2008, Traditional search paradigms utilize simple displays, allowing a precise determination of set size. However, objects in realistic scenes are largely uncountable, and typically visually and semantically complex. Can traditional conceptions of set size be applied to search in realistic scenes? Observers searched quasirealistic scenes for a tank target hidden among tree distractors varying in number and density. Search efficiency improved as trees were added to the display, a reverse set size effect. Eye movement analyses revealed that observers fixated individual trees when the set size was small, and the open regions between trees when the set size was large. Rather than a set size consisting of objectively countable objects, we interpret these data as evidence for a restricted functional set size consisting of idiosyncratically defined objects of search. Observers exploit low-level perceptual grouping processes and high-level semantic scene constraints to dynamically create objects that are appropriate to a given search task. |
Larry Allen Abel; Zhong I. Wang; Louis F. Dell'Osso Wavelet analysis in infantile nystagmus syndrome: Limitations and abilities Journal Article In: Investigative Ophthalmology & Visual Science, vol. 49, no. 8, pp. 3413–3423, 2008. @article{Abel2008, PURPOSE: To investigate the proper usage of wavelet analysis in infantile nystagmus syndrome (INS) and determine its limitations and abilities. METHODS: Data were analyzed from accurate eye-movement recordings of INS patients. Wavelet analysis was performed to examine the foveation characteristics, morphologic characteristics and time variation in different INS waveforms. Also compared were the wavelet analysis and the expanded nystagmus acuity function (NAFX) analysis on sections of pre- and post-tenotomy data. RESULTS: Wavelet spectra showed some sensitivity to different features of INS waveforms and reflected their variations across time. However, wavelet analysis was not effective in detecting foveation periods, especially in a complicated INS waveform. NAFX, on the other hand, was a much more direct way of evaluating waveform changes after nystagmus treatments. CONCLUSIONS: Wavelet analysis is a tool that performs, with difficulty, some things that can be done faster and better by directly operating on the nystagmus waveform itself. It appears, however, to be insensitive to the subtle but visually important improvements brought about by INS therapies. Wavelet analysis may have a role in developing automated waveform classifiers where its time-dependent characterization of the waveform can be used. The limitations of wavelet analysis outweighed its abilities in INS waveform-characteristic examination. |
Joana Acha; Manuel Perea The effect of neighborhood frequency in reading: Evidence with transposed-letter neighbors Journal Article In: Cognition, vol. 108, pp. 290–300, 2008. @article{Acha2008, Transposed-letter effects (e.g., jugde activates judge) pose serious problems for models of visual-word recognition that use position-specific coding schemes. However, even though the evidence of transposed-letter effects with nonword stimuli is strong, the evidence for word stimuli is scarce and inconclusive. The present experiment examined the effect of neighborhood frequency during normal silent reading using transposed-letter neighbors (e.g., silver, sliver). Two sets of low-frequency words were created (equated in the number of substitution neighbors, word frequency, and number of letters), which were embedded in sentences. In one set, the target word had a higher frequency transposed-letter neighbor, and in the other set, the target word had no transposed-letter neighbors. An inhibitory effect of neighborhood frequency was observed in measures that reflect late processing in words (number of regressions back to the target word, and total time). We examine the implications of these findings for models of visual-word recognition and reading. |
N. Alahyane; V. Fonteille; C. Urquizar; Roméo Salemme; Norbert Nighoghossian; Denis Pelisson; C. Tilikete Separate neural substrates in the human cerebellum for sensory-motor adaptation of reactive and of scanning voluntary saccades Journal Article In: Cerebellum, vol. 7, no. 4, pp. 595–601, 2008. @article{Alahyane2008, Sensory-motor adaptation processes are critically involved in maintaining accurate motor behavior throughout life. Yet their underlying neural substrates and task-dependency bases are still poorly understood. We address these issues here by studying adaptation of saccadic eye movements, a well-established model of sensory-motor plasticity. The cerebellum plays a major role in saccadic adaptation but it has not yet been investigated whether this role can account for the known specificity of adaptation to the saccade type (e.g., reactive versus voluntary). Two patients with focal lesions in different parts of the cerebellum were tested using the double-step target paradigm. Each patient was submitted to two separate sessions: one for reactive saccades (RS) triggered by the sudden appearance of a visual target and the second for scanning voluntary saccades (SVS) performed when exploring a more complex scene. We found that a medial cerebellar lesion impaired adaptation of reactive – but not of voluntary – saccades, whereas a lateral lesion affected adaptation of scanning voluntary saccades, but not of reactive saccades. These findings provide the first evidence of an involvement of the lateral cerebellum in saccadic adaptation, and extend the demonstrated role of the cerebellum in RS adaptation to adaptation of SVS. The double dissociation of adaptive abilities is also consistent with our previous hypothesis of the involvement in saccadic adaptation of partially separated cerebellar areas specific to the reactive or voluntary task (Alahyane et al. Brain Res 1135:107-121 (2007)). |
Nadia Alahyane; Anne-Dominique Devauchelle; Roméo Salemme; Denis Pélisson Spatial transfer of adaptation of scanning voluntary saccades in humans Journal Article In: Neuroreport, vol. 19, no. 1, pp. 37–41, 2008. @article{Alahyane2008a, The properties and neural substrates of the adaptive mechanisms that maintain over time the accuracy of voluntary, internally triggered saccades are still poorly understood. Here, we used transfer tests to evaluate the spatial properties of adaptation of scanning voluntary saccades. We found that an adaptive reduction of the size of a horizontal rightward 7 degrees saccade transferred to other saccades of a wide range of amplitudes and directions. This transfer decreased as tested saccades increasingly differed in amplitude or direction from the trained saccade, being null for vertical and leftward saccades. Voluntary saccade adaptation thus presents bounded, but large adaptation fields, suggesting that at least part of the underlying neural substrate encodes saccades as vectors. |
Naseem Al-aidroos; Jos J. Adam; Martin H. Fischer; Jay Pratt Structured perceptual arrays and the modulation of Fitts's Law: Examining saccadic eye movements Journal Article In: Journal of Motor Behavior, vol. 40, no. 2, pp. 155–164, 2008. @article{Alaidroos2008, On the basis of recent observations of a modulation of Fitts's law for manual pointing movements in structured visual arrays (J. J. Adam, R. Mol, J. Pratt, & M. H. Fischer, 2006; J. Pratt, J. J. Adam, & M. H. Fischer, 2007), the authors examined whether a similar modulation occurs for saccadic eye movements. Healthy participants (N = 19) made horizontal saccades to targets that appeared randomly in 1 of 4 positions, either on an empty background or within 1 of 4 placeholder boxes. Whereas in previous studies, placeholders caused a decrease in movement time (MT) without the normal decrease in movement accuracy predicted by Fitts's law, placeholders in the present experiment increased saccadic accuracy (decreased endpoint variability) without an increase in MT. The present results extend the findings of J. J. Adam et al. of a modulation of Fitts's law from the temporal domain to the spatial domain and from manual movements to eye movements. |
Antje S. Meyer; Marc Ouellet; Christine Häcker Parallel processing of objects in a naming task Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 982–987, 2008. @article{Meyer2008, The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel. |
Areh Mikulić; Michael C. Dorris Temporal and spatial allocation of motor preparation during a mixed-strategy game Journal Article In: Journal of Neurophysiology, vol. 100, no. 4, pp. 2101–2108, 2008. @article{Mikulic2008, Adopting a mixed response strategy in competitive situations can prevent opponents from exploiting predictable play. What drives stochastic action selection is unclear given that choice patterns suggest that, on average, players are indifferent to available options during mixed-strategy equilibria. To gain insight into this stochastic selection process, we examined how motor preparation was allocated during a mixed-strategy game. If selection processes on each trial reflect a global indifference between options, then there should be no bias in motor preparation (unbiased preparation hypothesis). If, however, differences exist in the desirability of options on each trial then motor preparation should be biased toward the preferred option (biased preparation hypothesis). We tested between these alternatives by examining how saccade preparation was allocated as human subjects competed against an adaptive computer opponent in an oculomotor version of the game "matching pennies." Subjects were free to choose between two visual targets using a saccadic eye movement. Saccade preparation was probed by occasionally flashing a visual distractor at a range of times preceding target presentation. The probability that a distractor would evoke a saccade error, and when it failed to do so, the probability of choosing each of the subsequent targets quantified the temporal and spatial evolution of saccade preparation, respectively. Our results show that saccade preparation became increasingly biased as the time of target presentation approached. Specifically, the spatial locus to which saccade preparation was directed varied from trial to trial, and its time course depended on task timing. |
William L. Miller; Vincenzo Maffei; Gianfranco Bosco; Marco Iosa; Myrka Zago; Emiliano Macaluso; Francesco Lacquaniti Vestibular nuclei and cerebellum put visual gravitational motion in context Journal Article In: Journal of Neurophysiology, vol. 99, no. 4, pp. 1969–1982, 2008. @article{Miller2008, Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion. |
D. A. Mills; Teresa C. Frohman; Scott L. Davis; A. R. Salter; Samuel M. McClure; I. Beatty; A. Shah; S. Galetta; E. Eggenberger; D. S. Zee; Elliot M. Frohman Break in binocular fusion during head turning in MS patients with INO Journal Article In: Neurology, vol. 71, pp. 457–460, 2008. @article{Mills2008, Internuclear ophthalmoparesis (INO) is the most common eye movement abnormality observed in patients with multiple sclerosis (MS).1 While most MS patients with INO have no or little misalignment in the straight ahead position, significant disconjugacy occurs during horizontal saccades or with horizontal (yaw axis) head turning.2 A break in binocular fusion can produce a loss of stereopsis and depth perception, transient diplopia (perceived as a double image or visual blur), oscillopsia, and disorientation.2 The purpose of this investigation was to confirm the hypothesis that a break in binocular fusion occurs in MS patients with INO during head or body turning, and that the magnitude of disconjugacy will be directly correlated with the severity of this eye movement syndrome. |
Hans P. Op De Beeck; Jennifer A. Deutsch; Wim Vanduffel; Nancy Kanwisher; James J. DiCarlo A stable topography of selectivity for unfamiliar shape classes in monkey inferior temporal cortex Journal Article In: Cerebral Cortex, vol. 18, no. 7, pp. 1676–1694, 2008. @article{OpDeBeeck2008, The inferior temporal (IT) cortex in monkeys plays a central role in visual object recognition and learning. Previous studies have observed patches in IT cortex with strong selectivity for highly familiar object classes (e.g., faces), but the principles behind this functional organization are largely unknown due to the many properties that distinguish different object classes. To unconfound shape from meaning and memory, we scanned monkeys with functional magnetic resonance imaging while they viewed classes of initially novel objects. Our data revealed a topography of selectivity for these novel object classes across IT cortex. We found that this selectivity topography was highly reproducible and remarkably stable across a 3-month interval during which monkeys were extensively trained to discriminate among exemplars within one of the object classes. Furthermore, this selectivity topography was largely unaffected by changes in behavioral task and object retinal position, both of which preserve shape. In contrast, it was strongly influenced by changes in object shape. The topography was partially related to, but not explained by, the previously described pattern of face selectivity. Together, these results suggest that IT cortex contains a large-scale map of shape that is largely independent of meaning, familiarity, and behavioral task. |
Jorge Otero-Millan; Xoana G. Troncoso; Stephen L. Macknik; Ignacio Serrano-Pedraza; Susana Martinez-Conde Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008. @article{OteroMillan2008, Microsaccades are known to occur during prolonged visual fixation, but it is a matter of controversy whether they also happen during free-viewing. Here we set out to determine: 1) whether microsaccades occur during free visual exploration and visual search, 2) whether microsaccade dynamics vary as a function of visual stimulation and viewing task, and 3) whether saccades and microsaccades share characteristics that might argue in favor of a common saccade-microsaccade oculomotor generator. Human subjects viewed naturalistic stimuli while performing various viewing tasks, including visual exploration, visual search, and prolonged visual fixation. Their eye movements were simultaneously recorded with high precision. Our results show that microsaccades are produced during the fixation periods that occur during visual exploration and visual search. Microsaccade dynamics during free-viewing moreover varied as a function of visual stimulation and viewing task, with increasingly demanding tasks resulting in increased microsaccade production. Moreover, saccades and microsaccades had comparable spatiotemporal characteristics, including the presence of equivalent refractory periods between all pair-wise combinations of saccades and microsaccades. Thus our results indicate a microsaccade-saccade continuum and support the hypothesis of a common oculomotor generator for saccades and microsaccades. |
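Work like this depends on separating microsaccades from slow drift in high-precision position traces. As a generic illustration (not necessarily the authors' exact procedure), the sketch below implements the widely used adaptive velocity-threshold approach; the sampling rate, threshold multiplier, and minimum-duration values are assumed examples.

```python
import numpy as np

def _velocity(p, fs):
    """5-point smoothed velocity estimate (deg/s) of a 1-D position trace."""
    v = np.zeros_like(p, dtype=float)
    v[2:-2] = (p[4:] + p[3:-1] - p[1:-3] - p[:-4]) * fs / 6.0
    return v

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_dur=3):
    """Adaptive velocity-threshold microsaccade detection for one eye.

    Thresholds are a multiple `lam` of a median-based estimate of the
    velocity spread in each component, so they adapt to the noise level of
    the trial. Returns a list of (onset, offset) sample indices.
    """
    vx = _velocity(np.asarray(x, float), fs)
    vy = _velocity(np.asarray(y, float), fs)
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    # elliptic criterion: sample is supra-threshold if it leaves the unit ellipse
    crit = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    events, start = [], None
    for i, above in enumerate(crit):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_dur:
                events.append((start, i - 1))
            start = None
    return events
```

Applied to each fixation epoch of a free-viewing, search, or prolonged-fixation recording, the returned onset/offset pairs can then be used to compute microsaccade rates, amplitudes, and inter-event intervals of the kind compared in the paper.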
Don C. Mitchell; Xingjia Shen; Matthew J. Green; Timothy L. Hodgson Accounting for regressive eye-movements in models of sentence processing: A reappraisal of the Selective Reanalysis hypothesis Journal Article In: Journal of Memory and Language, vol. 59, no. 3, pp. 266–293, 2008. @article{Mitchell2008, When people read temporarily ambiguous sentences, there is often an increased prevalence of regressive eye-movements launched from the word that resolves the ambiguity. Traditionally, such regressions have been interpreted at least in part as reflecting readers' efforts to re-read and reconfigure earlier material, as exemplified by the Selective Reanalysis hypothesis [Frazier, L., & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178-210]. Within such frameworks it is assumed that the selection of saccadic landing-sites is linguistically supervised. As an alternative to this proposal, we consider the possibility (dubbed the Time Out hypothesis) that regression control is partly decoupled from linguistic operations and that landing-sites are instead selected on the basis of low-level spatial properties such as their proximity to the point from which the regressive saccade was launched. Two eye-tracking experiments were conducted to compare the explanatory potential of these two accounts. Experiment 1 manipulated the formatting of linguistically identical sentences and showed, contrary to purely linguistic supervision, that the landing site of the first regression from a critical word was reliably influenced by the physical layout of the text. Experiment 2 used a fixed physical format but manipulated the position in the display at which reanalysis-relevant material was located. Here the results showed a highly reliable linguistic influence on the overall distribution of regression landing sites (though with few effects being apparent on the very first regression). These results are interpreted as reflecting mutually exclusive forms of regression control with fixation sequences being influenced both by spatially constrained, partially decoupled supervision systems as well as by some kind of linguistic guidance. The findings are discussed in relation to existing computational models of eye-movements in reading. |
Inger Montfoort; Josef N. Geest; Harm P. Slijper; Chris I. Zeeuw; Maarten A. Frens Adaptation of the cervico- and vestibulo-ocular reflex in whiplash injury patients Journal Article In: Journal of Neurotrauma, vol. 25, pp. 687–693, 2008. @article{Montfoort2008, The aim of this study was to investigate the underlying mechanisms of the increased gains of the cervico-ocular reflex (COR) and the lack of synergy between the COR and the vestibulo-ocular reflex (VOR) that have been previously observed in patients with whiplash-associated disorders (WAD). Eye movements during COR or VOR stimulation were recorded in four different experiments. The effect of restricted neck motion and the relationship between muscle activity and COR gain was examined in healthy controls. The adaptive ability of the COR and the VOR was tested in WAD patients and healthy controls. Reduced neck mobility yielded an increase in COR gain. No correlation between COR gain and muscle activity was observed. Adaptation of both the COR and VOR was observed in healthy controls, but not in WAD patients. The increased COR gain of WAD patients may stem from a reduced neck mobility. The lack of adaptation of the two stabilization reflexes may result in a lack of synergy between them. These abnormalities may underlie several of the symptoms frequently observed in WAD, such as vertigo and dizziness. |
Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven Cue validity effects in response preparation: A pupillometric study Journal Article In: Brain Research, vol. 1196, pp. 94–102, 2008. @article{Moresi2008, This study examined the effects of cue validity and cue difficulty on response preparation to provide a test of the Grouping Model [Adam, J.J., Hommel, B. and Umiltà, C., 2003. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognit. Psychol. 46(3), 302-58, Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II) automatic and effortful processes in response cuing. Vis. Cogn. 12(8), 1444-1473.]. We used the pupillary response to index the cognitive processing load during and after the preparatory interval (2 s). Twenty-two participants performed the finger-cuing tasks with valid (75%) and invalid (25%) cues. Results showed longer reaction times, more errors, and larger pupil dilations for invalid than valid cues. During the preparation interval, pupil dilation varied systematically with cue difficulty, with easy cues (specifying 2 fingers on 1 hand) showing less pupil dilation than difficult cues (specifying 2 fingers on 2 hands). After the preparation interval, this pattern of differential pupil dilation as a function of cue difficulty reversed for invalid cues, suggesting that cues which incorrectly specified fingers on one hand required more effortful reprogramming operations than cues which incorrectly specified fingers on two hands. These outcomes were consistent with predictions derived from the Grouping Model. Finally, all participants exhibited two distinct pupil dilation strategies: an "early" strategy in which the onset of the main pupil dilation was tied to onset of the cue, and a "late" strategy in which the onset of the main pupil dilation was tied to the onset of the target. Thus, whereas the early pupil dilation strategy showed a strong dilation during the preparation interval, the late pupil strategy showed a strong constriction. Interestingly, only the late onset pupil dilation strategy revealed the above reported sensitivity to cue difficulty, showing for the first time that the well-known pupil's sensitivity to task difficulty can also emerge when the pupil is constricting instead of dilating. |
Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven; Harm Kuipers; Jelle Jolles Pupil dilation in response preparation Journal Article In: International Journal of Psychophysiology, vol. 67, no. 2, pp. 124–130, 2008. @article{Moresi2008a, This study examined changes in pupil size during response preparation in a finger-cuing task. Based on the Grouping Model of finger preparation [Adam, J.J., Hommel, B. and Umiltà, C., 2003b. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognitive Psychology. 46, (3), 302-358.; Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II): Automatic and effortful processes in response cuing. Visual Cognition. 12, (8), 1444-1473.], it was hypothesized that the selection and preparation of more difficult response sets would be accompanied by larger pupillary dilations. The results supported this prediction, thereby extending the validity of pupil size as a measure of cognitive load to the domain of response preparation. |
Jane L. Morgan; Gus Elswijk; Antje S. Meyer Extrafoveal processing of objects in a naming task: Evidence from word probe experiments Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 3, pp. 561–565, 2008. @article{Morgan2008, In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation. |
Linda Mortensen; Antje S. Meyer; Glyn W. Humphreys Speech planning during multiple-object naming: Effects of ageing Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 8, pp. 1217–1238, 2008. @article{Mortensen2008, Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently. |
S. Moshel; Ari Z. Zivotofsky; L. Jin-Rong; Ralf Engbert; Jürgen Kurths; Reinhold Kliegl; Shlomo Havlin Persistence and phase synchronisation properties of fixational eye movements Journal Article In: The European Physical Journal Special Topics, vol. 161, pp. 207–223, 2008. @article{Moshel2008, When we fixate our gaze on a stable object, our eyes move continuously with extremely small involuntary and autonomic movements, that even we are unaware of during their occurrence. One of the roles of these fixational eye movements is to prevent the adaptation of the visual system to continuous illumination and inhibit fading of the image. These random, small movements are restricted at long time scales so as to keep the target at the centre of the field of view. In addition, the synchronisation properties between both eyes are related to binocular coordination in order to provide stereopsis. We investigated the roles of different time scale behaviours, especially how they are expressed in the different spatial directions (vertical versus horizontal). We also tested the synchronisation between both eyes. Results show different scaling behaviour between horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling behaviour in both axes becomes similar. Our findings suggest that microsaccades enhance the persistence at short time scales mostly in the horizontal component and much less in the vertical component. We also applied the phase synchronisation decay method to study the synchronisation between six combinations of binocular fixational eye movement components. We found that the vertical-vertical components of right and left eyes are significantly more synchronised than the horizontal-horizontal components. These differences may be due to the need for continuously moving the eyes in the horizontal plane in order to match the stereoscopic image for different viewing distances. |
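The binocular synchronisation comparison above can be illustrated with a generic phase-synchronisation index. The sketch below is not the specific phase-synchronisation decay method used in the paper; it simply shows how a Hilbert-transform phase-locking value separates a shared (synchronised) drift component from independent ones, using simulated stand-in traces.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Generic phase-synchronisation index between two 1-D signals.

    Instantaneous phases come from the analytic (Hilbert) signal; the index
    is |<exp(i * phase difference)>|, near 1 for phase-locked signals and
    near 0 for unrelated ones.
    """
    phase_x = np.angle(hilbert(x - np.mean(x)))
    phase_y = np.angle(hilbert(y - np.mean(y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    # stand-ins for vertical position traces of the left and right eye:
    # a shared drift plus independent measurement noise
    common_drift = np.cumsum(rng.normal(0, 0.02, n))
    left_v = common_drift + rng.normal(0, 0.05, n)
    right_v = common_drift + rng.normal(0, 0.05, n)
    # stand-ins for horizontal traces with fully independent drifts
    left_h = np.cumsum(rng.normal(0, 0.02, n))
    right_h = np.cumsum(rng.normal(0, 0.02, n))
    print("vertical-vertical PLV   :", phase_locking_value(left_v, right_v))
    print("horizontal-horizontal PLV:", phase_locking_value(left_h, right_h))
```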
Brad C. Motter; Diglio A. Simoni Changes in the functional visual field during search with and without eye movements Journal Article In: Vision Research, vol. 48, pp. 2382–2393, 2008. @article{Motter2008, The size of the functional visual field (FVF) is dynamic, changing with the context and attentive demand that each fixation brings as we move our eyes and head to explore the visual scene. Using performance measures of the FVF we show that during search conditions with eye movements, the FVF is small compared to the size of the FVF measured during search without eye movements. In all cases the size of the FVF is constrained by the density of distracting items. During search without eye movements the FVF expands with time; subjects have idiosyncratic spatial biases suggesting covert shifts of attention. For search within the constraints imposed by item density, the rate of item inspection is the same across all search conditions. Array set size effects are not apparent once stimulus density is taken into account, a result that is consistent with a spatial constraint for the FVF based on the cortical separation hypothesis. |
Manon Mulckhuyse; Wieske Zoest; Jan Theeuwes Capture of the eyes by relevant and irrelevant onsets Journal Article In: Experimental Brain Research, vol. 186, no. 2, pp. 225–235, 2008. @article{Mulckhuyse2008, During early visual processing the eyes can be captured by salient visual information in the environment. Whether a salient stimulus captures the eyes in a purely automatic, bottom-up fashion or whether capture is contingent on task demands is still under debate. In the first experiment, we manipulated the relevance of a salient onset distractor. The onset distractor could either be similar or dissimilar to the target. Error saccade latency distributions showed that early in time, oculomotor capture was driven purely bottom-up irrespective of distractor similarity. Later in time, top-down information became available resulting in contingent capture. In the second experiment, we manipulated the saliency information at the target location. A salient onset stimulus could be presented either at the target or at a non-target location. The latency distributions of error and correct saccades had a similar time-course as those observed in the first experiment. Initially, the distributions overlapped but later in time task-relevant information decelerated the oculomotor system. The present findings reveal the interaction between bottom-up and top-down processes in oculomotor behavior. We conclude that the task relevance of a salient event is not crucial for capture of the eyes to occur. Moreover, task-relevant information may integrate with saliency information to initiate saccades, but only later in time. |
Ikuya Murakami; Rumi Hisakata The effects of eccentricity and retinal illuminance on the illusory motion seen in a stationary luminance gradient Journal Article In: Vision Research, vol. 48, no. 19, pp. 1940–1948, 2008. @article{Murakami2008, Kitaoka recently reported a novel illusion named the Rotating Snakes [Kitaoka, A., & Ashida, H. (2003). Phenomenal characteristics of the peripheral drift illusion. Vision, 15, 261-262], in which a stationary pattern appears to rotate constantly. In the first experiment, we attempted to quantify the anecdote that this illusion is better perceived in the periphery. The stimulus was a ring composed of stepwise luminance patterns and was presented in the left visual field. With increasing eccentricity up to 10-14 deg, the cancellation velocity required to establish perceptual stationarity increased. In the next experiment, we examined the effect of retinal illuminance. Interestingly, the cancellation velocity decreased as retinal illuminance was decreased. We also estimated the human temporal impulse response at some retinal illuminances by using the double-pulse method to confirm that the shape of the impulse response actually changes from biphasic to monophasic, which indicates that the transient processing system has weaker activities at lower illuminances. We conclude that some transient temporal processing system is necessary for the illusion. |
M. Niwa; J. Ditterich Perceptual decisions between multiple directions of visual motion Journal Article In: Journal of Neuroscience, vol. 28, no. 17, pp. 4435–4445, 2008. @article{Niwa2008, Previous studies and models of perceptual decision making have largely focused on binary choices. However, we often have to choose from multiple alternatives. To study the neural mechanisms underlying multialternative decision making, we have asked human subjects to make perceptual decisions between multiple possible directions of visual motion. Using a multicomponent version of the random-dot stimulus, we were able to control experimentally how much sensory evidence we wanted to provide for each of the possible alternatives. We demonstrate that this task provides a rich quantitative dataset for multialternative decision making, spanning a wide range of accuracy levels and mean response times. We further present a computational model that can explain the structure of our behavioral dataset. It is based on the idea of a race between multiple integrators to a decision threshold. Each of these integrators accumulates net sensory evidence for a particular choice, provided by linear combinations of the activities of decision-relevant pools of sensory neurons. |
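The decision model described here is a race between accumulators, each integrating net sensory evidence toward a common threshold. The following is a minimal illustrative sketch with assumed parameter values, not the authors' fitted model.

```python
import numpy as np

def race_trial(evidence, threshold=1.0, noise_sd=0.1, dt=0.001,
               non_decision=0.3, max_t=5.0, rng=None):
    """One trial of a race between noisy accumulators.

    evidence : mean net-evidence rate for each choice alternative
               (e.g., motion strength assigned to each direction).
    Returns (choice index, response time in seconds).
    """
    rng = rng or np.random.default_rng()
    rates = np.asarray(evidence, dtype=float)
    x = np.zeros(rates.size)
    t = 0.0
    while t < max_t:
        x += rates * dt + rng.normal(0.0, noise_sd * np.sqrt(dt), rates.size)
        t += dt
        winners = np.flatnonzero(x >= threshold)
        if winners.size:
            # first accumulator(s) to cross; break ties by largest value
            return int(winners[np.argmax(x[winners])]), t + non_decision
    return int(np.argmax(x)), max_t + non_decision  # no crossing within max_t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # three alternatives; most of the sensory evidence favours the first one
    rates = [0.8, 0.3, 0.3]
    trials = [race_trial(rates, rng=rng) for _ in range(2000)]
    choices, rts = zip(*trials)
    print("P(choose strongest):", np.mean(np.array(choices) == 0))
    print("mean RT (s):", np.mean(rts))
```

Varying the evidence rates across alternatives, as the multicomponent random-dot stimulus does, trades accuracy against mean response time in the way such race models predict.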
Lauri Nummenmaa; Jussi Hirvonen; Riitta Parkkola; Jari K. Hietanen Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy Journal Article In: NeuroImage, vol. 43, no. 3, pp. 571–580, 2008. @article{Nummenmaa2008, Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute of a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy. |
Thomas Nyffeler; Dario Cazzoli; Pascal Wurtz; Mathias Lüthi; Roman Von Wartburg; Silvia Chaves; Anouk Déruaz; Christian W. Hess; René M. Müri Neglect-like visual exploration behaviour after theta burst transcranial magnetic stimulation of the right posterior parietal cortex Journal Article In: European Journal of Neuroscience, vol. 27, no. 7, pp. 1809–1813, 2008. @article{Nyffeler2008, The right posterior parietal cortex (PPC) is critically involved in visual exploration behaviour, and damage to this area may lead to neglect of the left hemispace. We investigated whether neglect-like visual exploration behaviour could be induced in healthy subjects using theta burst repetitive transcranial magnetic stimulation (rTMS). To this end, one continuous train of theta burst rTMS was applied over the right PPC in 12 healthy subjects prior to a visual exploration task where colour photographs of real-life scenes were presented on a computer screen. In a control experiment, stimulation was also applied over the vertex. Eye movements were measured, and the distribution of visual fixations in the left and right halves of the screen was analysed. In comparison to the performance of 28 control subjects without stimulation, theta burst rTMS over the right PPC, but not the vertex, significantly decreased cumulative fixation duration in the left screen-half and significantly increased cumulative fixation duration in the right screen-half for a time period of 30 min. These results suggest that theta burst rTMS is a reliable method of inducing transient neglect-like visual exploration behaviour. |
2007 |
Yasuki Noguchi; Shinsuke Shimojo; Ryusuke Kakigi; Minoru Hoshiyama Spatial contexts can inhibit a mislocalization of visual stimuli during smooth pursuit Journal Article In: Journal of Vision, vol. 7, no. 13, pp. 1–15, 2007. @article{Noguchi2007, The position of a flash presented during pursuit is mislocalized in the direction of the pursuit. Although this has been explained by a temporal mismatch between the slow visual processing of flash and fast efferent signals on eye positions, here we show that spatial contexts also play an important role in determining the flash position. We put various continuously lit objects (walls) between veridical and to-be-mislocalized positions of flash. Consequently, these walls significantly reduced the mislocalization of flash, preventing the flash from being mislocalized beyond the wall (Experiment 1). When the wall was shortened or had a hole in its center, the shape of the mislocalized flash was vertically shortened as if cutoff or funneled by the wall (Experiment 2). The wall also induced color interactions; a red wall made a green flash appear yellowish if it was in the path of mislocalization (Experiment 3). Finally, those flash-wall interactions could be induced even when the walls were presented after the disappearance of flash (Experiment 4). These results indicate that various features (position, shape, and color) of flash during pursuit are determined with an integration window that is spatially and temporally broad, providing a new insight for generating mechanisms of eye-movement mislocalizations. |
Antje Nuthmann; Ralf Engbert; Reinhold Kliegl The IOVP effect in mindless reading: Experiment and modeling Journal Article In: Vision Research, vol. 47, no. 7, pp. 990–1002, 2007. @article{Nuthmann2007, Fixation durations in reading are longer for within-word fixation positions close to word center than for positions near word boundaries. This counterintuitive result was termed the Inverted-Optimal Viewing Position (IOVP) effect. We proposed an explanation of the effect based on error-correction of mislocated fixations [Nuthmann, A., Engbert, R., & Kliegl, R. (2005). Mislocated fixations during reading and the inverted optimal viewing position effect. Vision Research, 45, 2201-2217], that suggests that the IOVP effect is not related to word processing. Here we demonstrate the existence of an IOVP effect in "mindless reading", a z-string scanning task. We compare the results from experimental data with results obtained from computer simulations of a simple model of the IOVP effect and discuss alternative accounts. We conclude that oculomotor errors, which often induce mislocalized fixations, represent the most important source of the IOVP effect. |
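The error-correction account sketched in this abstract can be illustrated with a toy simulation. The snippet below is a minimal sketch under purely illustrative assumptions (Gaussian saccadic landing errors around an intended word center, and shortened durations for fixations that were actually intended for a neighbouring word); it is not the authors' model, but it reproduces the qualitative IOVP pattern of shorter fixation durations near word boundaries.

    import numpy as np

    rng = np.random.default_rng(1)
    word_len = 7                 # letters per word (assumed; all words equal length)
    sd_landing = 2.0             # SD of saccadic landing error, in letters (assumed)
    dur_ok, dur_mislocated = 230.0, 170.0   # ms; mislocated fixations are cut short (assumed)

    n = 300_000
    # Intended saccade targets: center of the left neighbour, the word itself, or the right neighbour
    intended_word = rng.integers(-1, 2, size=n)                  # -1, 0 or +1
    intended_center = intended_word * word_len + word_len / 2.0
    landing = intended_center + sd_landing * rng.standard_normal(n)

    # Keep only fixations that actually land on the word of interest (letter positions 0..word_len)
    on_word = (landing >= 0) & (landing < word_len)
    pos = landing[on_word]
    mislocated = intended_word[on_word] != 0
    duration = np.where(mislocated, dur_mislocated, dur_ok)

    # Mean fixation duration per within-word letter position: shortest near the word boundaries (IOVP)
    for letter in range(word_len):
        in_bin = (pos >= letter) & (pos < letter + 1)
        print(letter, round(duration[in_bin].mean(), 1))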
Konstantin Mergenthaler; Ralf Engbert Modeling the control of fixational eye movements with neurophysiological delays Journal Article In: Physical Review Letters, vol. 98, no. 13, pp. 1–4, 2007. @article{Mergenthaler2007, We propose a model for the control of fixational eye movements using time-delayed random walks. Fixational eye movements produce random displacements of the retinal image to prevent perceptual fading. First, we demonstrate that a transition from persistent to antipersistent correlations occurs in data recorded from a visual fixation task. Second, we propose and investigate a delayed random-walk model and get, by comparison of the transition points, an estimate of the neurophysiological delay. Differences between horizontal and vertical components of eye movements are found which can be explained neurophysiologically. Finally, we compare our numerical results with analytic approximations. |
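As a rough companion to this abstract, the sketch below simulates a generic time-delayed random walk (illustrative parameter values, not the published model) and computes its mean squared displacement across lags; the delayed negative feedback produces a crossover from roughly diffusive scaling at short lags to antipersistent scaling at long lags, the kind of transition the paper uses to estimate the neurophysiological delay.

    import numpy as np

    def delayed_random_walk(n_steps=20_000, delay=25, feedback=0.01, noise_sd=1.0, seed=0):
        """Random walk with delayed negative feedback: each step adds white noise plus
        a small corrective pull toward zero based on the position `delay` samples in
        the past. All parameter values here are illustrative, not taken from the paper."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n_steps)
        for t in range(1, n_steps):
            past = x[t - delay] if t >= delay else 0.0
            x[t] = x[t - 1] - feedback * past + noise_sd * rng.standard_normal()
        return x

    def mean_squared_displacement(x, lags):
        """Mean squared displacement as a function of lag; its log-log slope drops
        below 1 (antipersistence) once the delayed feedback starts to dominate."""
        return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

    x = delayed_random_walk()
    lags = np.array([2, 5, 10, 20, 50, 100, 200, 500])
    print(np.round(mean_squared_displacement(x, lags), 1))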
Antje S. Meyer; Eva Belke; Christine Häcker; Linda Mortensen Use of word length information in utterance planning Journal Article In: Journal of Memory and Language, vol. 57, no. 2, pp. 210–231, 2007. @article{Meyer2007, Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name. |
Antje S. Meyer; Eva Belke; Anna L. Telling; Glyn W. Humphreys Early activation of object names in visual search Journal Article In: Psychonomic Bulletin & Review, vol. 14, no. 4, pp. 710–716, 2007. @article{Meyer2007a, In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target-for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention. |
David M. Milstein; Michael C. Dorris The influence of expected value on saccadic preparation Journal Article In: Journal of Neuroscience, vol. 27, no. 18, pp. 4810–4818, 2007. @article{Milstein2007, Basing higher-order decisions on expected value (reward probability × reward magnitude) maximizes an agent's accrual of reward over time. The goal of this study was to determine whether the advanced preparation of simple actions reflected the expected value of the potential outcomes. Human subjects were required to direct a saccadic eye movement to a visual target that was presented either to the left or right of a central fixation point on each trial. Expected value was manipulated by adjusting the probability of presenting each target and its associated magnitude of monetary reward across 15 blocks of trials. We found that saccadic reaction times (SRTs) were negatively correlated with the relative expected value of the targets. Occasionally, an irrelevant visual distractor was presented before the target to probe the spatial allocation of saccadic preparation. Distractor-directed errors (oculomotor captures) varied as a function of the relative expected value of, and the distance of distractors from, the potential valued targets. SRTs and oculomotor captures were better correlated with the relative expected value of actions than with reward probability, reward magnitude, or overall motivation. Together, our results suggest that the level and spatial distribution of competitive dynamic neural fields representing saccadic preparation reflect the relative expected value of the potential actions. |
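The expected-value manipulation described here comes down to simple arithmetic. A small sketch (with hypothetical block values, not figures from the paper) of how the expected value and relative expected value of two competing targets might be computed:

    def expected_value(reward_probability, reward_magnitude):
        # Expected value as defined in the abstract: reward probability x reward magnitude
        return reward_probability * reward_magnitude

    def relative_expected_value(ev_this_target, ev_other_target):
        # Share of the total expected value carried by one of the two potential targets
        return ev_this_target / (ev_this_target + ev_other_target)

    # Hypothetical block: the left target appears on 80% of trials and pays 1 unit,
    # the right target appears on 20% of trials and pays 2 units.
    ev_left = expected_value(0.8, 1.0)
    ev_right = expected_value(0.2, 2.0)
    print(relative_expected_value(ev_left, ev_right))   # 0.8 / 1.2, i.e. about 0.67 for the left target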
Harold T. Nefs; Julie M. Harris Vergence effects on the perception of motion-in-depth Journal Article In: Experimental Brain Research, vol. 183, no. 3, pp. 313–322, 2007. @article{Nefs2007, When the eyes follow a target that is moving directly towards the head they make a vergence eye movement. Accurate perception of the target's motion requires adequate compensation for the movements of the eyes. The experiments in this paper address the issue of how well the visual system compensates for vergence eye movements when viewing moving targets. We show that there are small but consistent biases across observers: When the eyes follow a target that is moving in depth, it is typically perceived as slower than when the eyes are kept stationary. We also analysed the eye movements that were made by observers. We found that there are considerable differences between observers and between trials, but we did not find evidence that the gains and phase lags of the eye movements were related to psychophysical performance. |
Sebastiaan F. W. Neggers; W. Huijbers; C. M. Vrijlandt; Björn N. S. Vlaskamp; D. J. L. G. Schutter; J. Leon Kenemans TMS pulses on the frontal eye fields break coupling between visuospatial attention and eye movements Journal Article In: Journal of Neurophysiology, vol. 98, no. 5, pp. 2765–2778, 2007. @article{Neggers2007, While preparing a saccadic eye movement, visual processing of the saccade goal is prioritized. Here, we provide evidence that the frontal eye fields (FEFs) are responsible for this coupling between eye movements and shifts of visuospatial attention. Functional magnetic resonance imaging (fMRI)-guided transcranial magnetic stimulation (TMS) was applied to the FEFs 30 ms before a discrimination target was presented at or next to the target of a saccade in preparation. Results showed that the well-known enhancement of discrimination performance on locations to which eye movements are being prepared was diminished by TMS contralateral to eye movement direction. Based on the present and other reports, we propose that saccade preparatory processes in the FEF affect selective visual processing within the visual cortex through feedback projections, in that way coupling saccade preparation and visuospatial attention. |
D. Agrafiotis; S. J. C. Davies; N. Canagarajah; D. R. Bull Towards efficient context-specific video coding based on gaze-tracking analysis Journal Article In: ACM Transactions on Multimedia Computing, Communications and Applications, vol. 3, no. 4, pp. 1–15, 2007. @article{Agrafiotis2007, This article discusses a framework for model-based, context-dependent video coding based on exploitation of characteristics of the human visual system. The system utilizes variable-quality coding based on priority maps which are created using mostly context-dependent rules. The technique is demonstrated through two case studies of specific video context, namely open signed content and football sequences. Eye-tracking analysis is employed for identifying the characteristics of each context, which are subsequently exploited for coding purposes, either directly or through a gaze prediction model. The framework is shown to achieve a considerable improvement in coding efficiency. |
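One way to picture the variable-quality coding in such a framework is to let a priority map drive per-macroblock quantization. The sketch below is a generic illustration; the mapping and the QP range are assumptions, not values taken from the paper.

    import numpy as np

    def qp_from_priority(priority_map, qp_best=24, qp_worst=38):
        """Map a per-macroblock priority map (values in [0, 1], 1 = highest priority,
        e.g. a predicted gaze region) to H.264-style quantization parameters, so that
        high-priority blocks are coded at higher quality (lower QP)."""
        p = np.clip(priority_map, 0.0, 1.0)
        qp = qp_worst - p * (qp_worst - qp_best)
        return np.round(qp).astype(int)

    # Example: a 4 x 4 grid of macroblocks with one high-priority region in the middle
    priority = np.zeros((4, 4))
    priority[1:3, 1:3] = 1.0
    print(qp_from_priority(priority))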
Nadia Alahyane; Roméo Salemme; Christian Urquizar; Julien Cotti; Alain Guillaume; Jean-Louis Vercher; Denis Pélisson Oculomotor plasticity: Are mechanisms of adaptation for reactive and voluntary saccades separate? Journal Article In: Brain Research, vol. 1135, no. 1, pp. 107–121, 2007. @article{Alahyane2007, Saccadic eye movements are continuously controlled, and their accuracy is maintained by adaptive mechanisms that compensate for physiological or pathological perturbations. In contrast to the adaptation of reactive saccades (RS), which are automatically triggered by the sudden appearance of a single target, little is known about the adaptation of voluntary saccades, which allow us to intentionally scan our environment in nearly all our daily activities. In this study, we addressed this issue in human subjects by determining the properties of adaptation of scanning voluntary saccades (SVS) and comparing these features to those of RS. We also tested the reciprocal transfer of adaptation between the two saccade types. Our results revealed that SVS and RS adaptations showed similar adaptation fields, time courses and recovery levels, with only a slightly lower after-effect for SVS. Moreover, RS and SVS main sequences both remained unaffected after adaptation. Finally, and quite unexpectedly, the pattern of adaptation transfer was asymmetrical, with a much stronger transfer from SVS to RS (79%) than in the reverse direction (22%). These data demonstrate that adaptations of RS and SVS share several behavioural properties but at the same time rely on partially distinct processes. Based on these findings, it is proposed that adaptations of RS and SVS may involve a neural network including both a common site and two separate sites specifically recruited for each saccade type. |
Brad C. Motter; Diglio A. Simoni The roles of cortical image separation and size in active visual search performance Journal Article In: Journal of Vision, vol. 7, no. 2, pp. 1–15, 2007. @article{Motter2007, Our previous research examined the effects of target eccentricity and global stimulus density on target detection during active visual search in monkey. Here, eye movement data collected from three human subjects on a standard single-color Ts and Ls task with varying set sizes were used to analyze the probability of target detection as a function of local stimulus density. Search performance was found to exhibit a systematic dependence on local stimulus density around the target and as a function of target eccentricity when density is calculated with respect to cortical space, in accordance with a model of the retinocortical geometrical transformation of image data onto the surface of V1. Density as measured by nearest neighbor separation and target image size as calculated from target eccentricity were found to contribute independently to search performance when measured with respect to cortical space but not with standard visual space. Density relationships to performance did not differ when target and nearest neighbor were on opposite sides of the vertical meridian, underscoring the hypothesis that such interactions were occurring within higher visual areas. The cortical separation of items appears to be the major determinant of array set size effects in active visual search. |
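Cortical separation between search items can be approximated with a standard cortical magnification function. The sketch below assumes the common form M(E) = M0 / (E + E2); both the formula and the parameter values are illustrative assumptions rather than the exact retinocortical transformation used in the paper.

    import numpy as np

    M0 = 17.3   # cortical magnification at the fovea, mm of V1 per degree (assumed value)
    E2 = 0.75   # eccentricity at which magnification has halved, in degrees (assumed value)

    def cortical_distance_mm(ecc_near_deg, ecc_far_deg):
        """Approximate V1 distance between two points on a radial line, obtained by
        integrating M(E) = M0 / (E + E2) between the two eccentricities."""
        return M0 * np.log((ecc_far_deg + E2) / (ecc_near_deg + E2))

    # The same 1-degree separation covers far less cortex at 8 degrees than at 1 degree eccentricity
    print(cortical_distance_mm(1.0, 2.0), cortical_distance_mm(8.0, 9.0))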
Leigh A. Mrotek; John F. Soechting Target interception: Hand-eye coordination and strategies Journal Article In: Journal of Neuroscience, vol. 27, no. 27, pp. 7297–7309, 2007. @article{Mrotek2007, This study was designed to define the characteristics of eye-hand coordination in a task requiring the interception of a moving target. It also assessed the extent to which the motion of the target was predicted and the strategies subjects used to determine when to initiate target interception. Target trajectories were constructed from sums of sines in the horizontal and vertical dimensions. Subjects intercepted these trajectories by moving their index finger along the surface of a display monitor. They were free to initiate the interception at any time, and on successful interception, the target disappeared. Although they were not explicitly instructed to do so, subjects tracked target motion with normal, high-gain smooth-pursuit eye movements right up until the target was intercepted. However, the probability of catch-up saccades was substantially depressed shortly after the onset of manual interception. The initial direction of the finger movement anticipated the motion of the target by approximately 150 ms. For any given trajectory, subjects tended to initiate interception at predictable times that depended on the characteristics of the target trajectories [i.e., when the curvature (or angular velocity) of the target was small and when the target was moving toward the finger]. The relative weighting of various parameters that influenced the decision to initiate interception varied from subject to subject and was not accounted for by a model based on the short-range predictability of target motion. |
Leigh A. Mrotek; John F. Soechting Predicting curvilinear target motion through an occlusion Journal Article In: Experimental Brain Research, vol. 178, no. 1, pp. 99–114, 2007. @article{Mrotek2007a, When a tracked target is occluded transiently, extraretinal signals are known to maintain smooth pursuit, albeit with a reduced gain. The extent to which extraretinal signals incorporate predictions of time-varying behavior, such as gradual changes in target direction, is not known. Three experiments were conducted to examine this question. In the experiments, subjects tracked a target that initially moved along a straight path, then (briefly) followed the arc of a circle, before it disappeared behind a visible occlusion. In the first experiment, the target did not emerge from the occlusion and subjects were asked to point to the location where they thought the target would have emerged. Gaze and pointing behaviors demonstrated that most of the subjects predicted that the target would follow a linear path through the occlusion. The direction of this extrapolated path was the same as the final visible target direction. In the second set of experiments, the target did emerge after following a curvilinear path through the occlusion, and subjects were asked to track the target with their eyes. Gaze behaviors indicated that, in this experimental condition, the subjects predicted curvilinear target motion while the target was occluded. Saccades were directed to the unseen curvilinear path and pursuit continued to follow this same path at a reduced speed in the occlusion. Importantly, the direction of smooth pursuit continued to change throughout the occlusion. Smooth pursuit angular velocity was maintained for approximately 200 ms following target disappearance. The results of the experiments indicate that extraretinal signals indeed incorporate cognitive expectations about the time-varying behavior of target motion. |
Ryan E. B. Mruczek; David L. Sheinberg Context familiarity enhances target processing by inferior temporal cortex neurons Journal Article In: Journal of Neuroscience, vol. 27, no. 32, pp. 8533–8545, 2007. @article{Mruczek2007, Experience-dependent changes in the response properties of ventral visual stream neurons are thought to underlie our ability to rapidly and efficiently recognize visual objects. How these neural changes are related to efficient visual processing during natural vision remains unclear. Here, we demonstrate a neurophysiological correlate of efficient visual search through highly familiar object arrays. Humans and monkeys are faster at locating the same target when it is surrounded by familiar compared with unfamiliar distractors. We show that this behavioral enhancement is driven by an increased sensitivity of target-selective neurons in inferior temporal cortex. This results from an increased "signal" for target representations and decreased "noise" from neighboring familiar distractors. These data highlight the dynamic properties of the inferior temporal cortex neurons and add to a growing body of evidence demonstrating how experience shapes neural processing in the ventral visual stream. |
Ryan E. B. Mruczek; David L. Sheinberg Activity of inferior temporal cortical neurons predicts recognition choice behavior and recognition time during visual search Journal Article In: Journal of Neuroscience, vol. 27, no. 11, pp. 2825–2836, 2007. @article{Mruczek2007a, Although the selectivity for complex stimuli exhibited by neurons in inferior temporal cortex is often taken as evidence of their role in visual perception, few studies have directly tested this hypothesis. Here, we sought to create a relatively natural task with few behavioral constraints to test whether activity in inferior temporal cortex neurons predicts whether or not a monkey will recognize and respond to a complex visual object. Monkeys were trained to freely view an array of images and report the presence of one of many possible target images previously associated with a hand response. On certain trials, the identity of the target was swapped during the monkeys' targeting saccade. Furthermore, the response association of the preswap target and the postswap target differed (e.g., right-to-left target swap). Neural activity in cells selective for the preswap target was significantly higher when the monkeys' response matched the hand association of the preswap target. Furthermore, the monkeys' response time was predicted by the magnitude of the presaccadic firing rate on nonswap trials. Our results provide additional support for the role of inferior temporal cortex in object recognition during natural behavior. |
Selim Onat; Klaus Libertus; Peter König Integrating audiovisual information for the control of overt attention Journal Article In: Journal of Vision, vol. 7, no. 10, pp. 1–6, 2007. @article{Onat2007, In everyday life, our brains decide about the relevance of huge amounts of sensory input. Further complicating this situation, this input is distributed over different modalities. This raises the question of how different sources of information interact for the control of overt attention during free exploration of the environment under natural conditions. Different modalities may work independently or interact to determine the consequent overt behavior. To answer this question, we presented natural images and lateralized natural sounds in a variety of conditions and we measured the eye movements of human subjects. We show that, in multimodal conditions, fixation probabilities increase on the side of the image where the sound originates showing that, at a coarser scale, lateralized auditory stimulation topographically increases the salience of the visual field. However, this shift of attention is specific because the probability of fixation of a given location on the side of the sound scales with the saliency of the visual stimulus, meaning that the selection of fixation points during multimodal conditions is dependent on the saliencies of both auditory and visual stimuli. Further analysis shows that a linear combination of both unimodal saliencies provides a good model for this integration process, which is optimal according to information-theoretical criteria. Our results support a functional joint saliency map, which integrates different unimodal saliencies before any decision is taken about the subsequent fixation point. These results provide guidelines for the performance and architecture of any model of overt attention that deals with more than one modality. |
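The linear-combination account of audiovisual integration amounts to a weighted sum of unimodal saliency maps. The sketch below is schematic (array sizes and weights are illustrative, and it is not the authors' implementation):

    import numpy as np

    def joint_saliency(visual_map, auditory_map, w_visual=0.5, w_auditory=0.5):
        """Linear combination of unimodal saliency maps into a single joint map,
        normalized so that it can be read as a distribution of fixation probabilities."""
        joint = w_visual * visual_map + w_auditory * auditory_map
        return joint / joint.sum()

    # Example: a 2-D visual saliency map plus a left-lateralized auditory map
    visual = np.random.default_rng(0).random((60, 80))
    auditory = np.zeros((60, 80))
    auditory[:, :40] = 1.0                      # sound source on the left half of the field
    fixation_probability = joint_saliency(visual, auditory)
    print(fixation_probability[:, :40].sum())   # well over half of the mass falls on the left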
Jacinta O'Shea; Neil G. Muggleton; Alan Cowey; Vincent Walsh Human frontal eye fields and spatial priming of pop-out Journal Article In: Journal of Cognitive Neuroscience, vol. 19, no. 7, pp. 1140–1151, 2007. @article{OShea2007, "Priming of pop-out" is a form of implicit memory that facilitates detection of a recently inspected search target. Repeated presentation of a target's features or its spatial position improves detection speed (feature/spatial priming). This study investigated a role for the human frontal eye fields (FEFs) in the priming of color pop-out. To test the hypothesis that the FEFs play a role in short-term memory storage, transcranial magnetic stimulation (TMS) was applied during the intertrial interval. There was no effect of TMS on either spatial or feature priming. To test whether the FEFs are important when a saccade is being programmed to a repeated target color or location, TMS was applied during the search array. TMS over the left but not the right FEFs abolished spatial priming, but had no effect on feature priming. These findings demonstrate functional specialization of the left FEFs for spatial priming, and distinguish this role from target discrimination and saccade-related processes. The results suggest that the left FEFs integrate a spatial memory signal with an evolving saccade program, which facilitates saccades to a recently inspected location. |
Anna Montagnini; Eric Castet Spatiotemporal dynamics of visual attention during saccade preparation: Independence and coupling between attention and movement planning Journal Article In: Journal of Vision, vol. 7, no. 14, pp. 1–16, 2007. @article{Montagnini2007, During the preparation of a saccadic eye movement, a visual stimulus is more efficiently processed when it is spatially coincident with the saccadic target as compared to when the visual and the saccadic targets are displayed at different locations. We studied the coupling between visual selective attention and saccadic preparation by measuring orientation acuity of human subjects at different locations relative to the saccadic target and at different delays relative to the saccade cue onset. First, we generalized previous results (E. Castet, S. Jeanjean, A. Montagnini, D. Laugier, & G. S. Masson, 2006) revealing that a dramatic perceptual advantage at the saccadic target emerges dynamically within the first 150-200 ms from saccade cue onset. Second, by varying the validity of the spatial cue for the discrimination task, we encouraged subjects to modulate the spatial distribution of attentional resources independently from the automatic deployment to saccadic target. We found that an independent component of attention can be voluntarily deployed away from the saccadic target. The relative weight of the automatic versus the independent component of attention increases across time during saccadic preparation. |
Inger Montfoort; Maarten A. Frens; Ignace T. C. Hooge; Gerardina C. Lagers-van Haselen; Josef N. Geest Visual search deficits in Williams-Beuren syndrome Journal Article In: Neuropsychologia, vol. 45, no. 5, pp. 931–938, 2007. @article{Montfoort2007, Williams-Beuren syndrome (WBS) is a rare genetic condition characterized by several physical and mental traits, such as poor visuo-spatial processing and a relative strength in language. In this study we investigated how WBS subjects search and scan their visual environment. We presented 10 search displays on a computer screen to WBS subjects as well as control subjects, with the instruction to find a target among several stimulus elements. We analyzed the eye movement patterns for fixation characteristics and systematicity of search. Fixations generally lasted longer in WBS subjects than in control subjects. WBS subjects made more fixations on stimulus elements they had already looked at and more fixations that were not aimed at a stimulus element at all, decreasing the efficiency of search. These outcomes lead to the conclusion that visual search in individuals with Williams-Beuren syndrome is less efficient than in control subjects. This finding may be related to their motor deficits, an impaired processing of global visual information and/or deficits in working memory, and could reflect impairments within the dorsal stream. |
Stephen J. Kerrigan; John F. Soechting Anisotropies in the gain of smooth pursuit during two-dimensional tracking as probed by brief perturbations Journal Article In: Experimental Brain Research, vol. 180, no. 3, pp. 435–448, 2007. @article{Kerrigan2007, Previous investigations suggest the gain of smooth pursuit is directionally anisotropic and is regulated in a task-dependent manner. Smooth pursuit is also known to be influenced by expectations concerning the target's motion, but the role of such expectations in modulating feedback gain is not known. In the present work, the gain of smooth pursuit was probed by applying brief perturbations to quasi-predictable two-dimensional target motion at multiple time points. The target initially moved in a straight line, then followed the circumference of a circle for distances ranging between 180 degrees and 270 degrees. Finally, the path reverted to linear motion. Perturbations consisted of a pulse of velocity 50 or 100 ms in duration, applied in one of eight possible directions. They were applied at the onset of the curve or after the target had traversed an arc of 45 degrees or 90 degrees. Pursuit gain was measured by computing the average amplitude of the response in smooth pursuit velocity over a 100 ms interval. To do so we used a coordinate system defined by the motion of the target at the onset of the perturbation, with directions tangential and normal to the path. Responses to the perturbations had two components: one that was modulated with the direction of the perturbation and one that was directionally nonspecific. For the directional response, on average the gain in the normal direction was slightly larger than the gain in the tangential direction, with a ratio ranging from 1.0 to 1.3. The directionally nonspecific response, which was more prominent for perturbations at curve onset or at 90 degrees, consisted of a transient decrease in pursuit speed. Perturbations applied at curve onset also delayed the tracking of the curved target motion. |
Frank Joosten; Gert De Sutter; Denis Drieghe; Stefan Grondelaers; Robert J. Hartsuiker; Dirk Speelman Dutch collective nouns and conceptual profiling Journal Article In: Linguistics, vol. 45, no. 1, pp. 85–132, 2007. @article{Joosten2007, Collective nouns such as committee, family, or team are conceptually (and in English also syntactically) complex in the sense that they are both singular ("one") and plural ("more than one"): they refer to a multiplicity that is conceptualized as a unity. In this article, which focuses on Dutch collective nouns, it is argued that some collective nouns are rather "one", whereas others are rather "more than one". Collective nouns are shown to be different from one another in member level accessibility. Whereas all collective nouns have both a conceptual collection level ("one") and a conceptual member level ("more than one"), the latter is not always conceptually profiled (i.e., focused on) to the same extent. A gradient is sketched in which collective nouns such as bemanning ('crew') (member level highly accessible) and vereniging ('association') (member level scarcely accessible) form the extremes. Arguments in favor of the conceptual phenomenon of variable member level accessibility derive from an analysis of property distribution, from corpus research on verbal and pronominal singular-plural variation, and from a psycholinguistic eye-tracking experiment. |
Johanna K. Kaakinen; Jukka Hyönä Strategy use in the reading span test: An analysis of eye movements and reported encoding strategies Journal Article In: Memory, vol. 15, no. 6, pp. 634–646, 2007. @article{Kaakinen2007, Strategy use in the traditional reading span test was examined by recording participants' eye movements during the task (Experiment 1) and by interviewing participants about their strategy use (Experiment 2). In Experiment 1, no differences between individuals with a low, medium, and high span were observed in how they distributed processing time between task elements. In all three groups, fixation times on words up to the to-be-remembered (TBR) word became shorter and the time spent on the TBR longer as memory load in the task increased. The results of Experiment 2, however, show that span groups differ in the use of memory encoding strategies: individuals with a low span use mainly rehearsal, whereas individuals with a high span use almost exclusively semantic elaboration. The results indicate that the use of elaborative strategies may enhance span performance but that not all individuals are necessarily able to use such strategies efficiently. |
Johanna K. Kaakinen; Jukka Hyönä Perspective effects in repeated reading: An eye movement study Journal Article In: Memory & Cognition, vol. 35, no. 6, pp. 1323–1336, 2007. @article{Kaakinen2007a, The present study examined the influence of perspective instructions on online processing of expository text during repeated reading. Sixty-two participants read either a high or a low prior knowledge (HPK vs. LPK) text twice from a given perspective while their eye movements were recorded. They switched perspective before a third reading. Reading perspective affected the first-pass reading and also increased sentence wrap-up processing time in the perspective-relevant sentences. Prior knowledge facilitated the recognition of the (ir)relevance of text information and resulted in relatively earlier perspective effects in the HPK versus LPK text. Repeated reading facilitated processing, as indicated by all eye movement measures. After the perspective switch, a repetition benefit was observed for the previously relevant text information, whereas a repetition cost was found for the previously irrelevant text information. These results indicate that reading perspective and prior knowledge have a significant influence on how readers allocate visual attention during reading. |
Andre Kaminiarz; Bart Krekelberg; Frank Bremmer Localization of visual targets during optokinetic eye movements Journal Article In: Vision Research, vol. 47, no. 6, pp. 869–878, 2007. @article{Kaminiarz2007, We investigated localization of brief visual targets during reflexive eye movements (optokinetic nystagmus). Subjects mislocalized these targets in the direction of the slow eye movement. This error decreased shortly before a saccade and temporarily increased afterwards. The pattern of mislocalization differs markedly from mislocalization during voluntary eye movements in the presence of visual references, but (spatially) resembles mislocalization during voluntary eye movements in darkness. Because neither reflexive eye movements nor voluntary eye movements in darkness have explicit (visual) goals, these data support the view that visual goals support perceptual stability as an important link between pre- and post-saccadic scenes. |
Ryota Kanai; Bhavin R. Sheth; Shinsuke Shimojo Dynamical evolution of motion perception Journal Article In: Vision Research, vol. 47, no. 7, pp. 937–945, 2007. @article{Kanai2007, Motion is defined as a sequence of positional changes over time. However, in perception, spatial position and motion dynamically interact with each other. This reciprocal interaction suggests that the perception of a moving object itself may dynamically evolve following the onset of motion. Here, we show evidence that the percept of a moving object systematically changes over time. In experiments, we introduced a transient gap in the motion sequence or a brief change in some feature (e.g., color or shape) of an otherwise smoothly moving target stimulus. Observers were highly sensitive to the gap or transient change if it occurred soon after motion onset (≤200 ms), but significantly less so if it occurred later (≥300 ms). Our findings suggest that the moving stimulus is initially perceived as a time series of discrete potentially isolatable frames; later failures to perceive change suggests that over time, the stimulus begins to be perceived as a single, indivisible gestalt integrated over space as well as time, which could well be the signature of an emergent stable motion percept. |
Wolfgang Jaschinski; Stephanie Jainta; Jörg Hoormann; Nina Walper Objective vs subjective measurements of dark vergence Journal Article In: Ophthalmic and Physiological Optics, vol. 27, no. 1, pp. 85–92, 2007. @article{Jaschinski2007, Dark vergence is a resting position of vergence (tonic vergence), measured in a dark visual field to eliminate fusional, accommodative, and proximal stimuli. The vergence resting position is relevant for measures of phoria and fixation disparity. Dark vergence differs reliably among subjects: the average subject converges at a viewing distance of about 1 m, while the inter-individual range is from infinity to about 40 cm. In previous research, dark vergence was measured subjectively, i.e. observers adjusted the horizontal offset of dichoptically presented nonius targets to perceived alignment. Results of such subjective vergence tests do not necessarily agree with those of the objective measurements of eye position with eye trackers. Therefore, we made simultaneous subjective and objective measurements of dark vergence and found similar results with both methods in repeated tests in two sessions. Thus, the nonius test is sufficient for a subjective estimation of dark vergence. |
Rebecca L. Johnson; Keith Rayner Top-down and bottom-up effects in pure alexia: Evidence from eye movements Journal Article In: Neuropsychologia, vol. 45, no. 10, pp. 2246–2257, 2007. @article{Johnson2007, The eye movements of a patient with pure alexia, GJ, were recorded as he read sentences in order to explore the roles of top-down and bottom-up information during letter-by-letter reading. Specifically, the effects of word frequency and word predictability were examined. Additional analyses examined the interaction of these effects with the lower level influences of word length and letter confusability. The results indicate that GJ is sensitive to all four of these variables in sentence reading. These findings support an interactive account of reading where letter-by-letter readers use both bottom-up and top-down information to decode words. Due to the disrupted bottom-up processes caused by damage to the Visual Word Form Area or the input connections to it, pure alexic patients rely more heavily on intact top-down information in reading. |
Lee Hogarth; Anthony Dickinson; Alexander Wright; Mariangela Kouvaraki; Theodora Duka The role of drug expectancy in the control of human drug seeking Journal Article In: Journal of Experimental Psychology: Animal Behavior Processes, vol. 33, no. 4, pp. 484–496, 2007. @article{Hogarth2007, Human drug seeking may be goal directed in the sense that it is mediated by a mental representation of the drug or habitual in the sense that it is elicited by drug-paired cues directly. To test these 2 accounts, the authors assessed whether a drug-paired stimulus (S+) would transfer control to an independently trained drug-seeking response. Smokers were trained on an instrumental discrimination that established a tobacco S+ in Experiment 1 and a tobacco and a money S+ in Experiment 2 that elicited an expectancy of their respective outcomes. Participants then learned 2 new instrumental responses, 1 for each outcome, in the absence of these stimuli. Finally, in the transfer test, each S+ was found to augment performance of the new instrumental response that was trained with the same outcome. This outcome-specific transfer effect indicates that drug-paired stimuli controlled human drug seeking via a representation or expectation of the drug rather than through a direct stimulus-response association. |
Linus Holm; Timo Mäntylä Memory for scenes: Refixations reflect retrieval Journal Article In: Memory & Cognition, vol. 35, no. 7, pp. 1664–1674, 2007. @article{Holm2007, Most conceptions of episodic memory hold that reinstatement of encoding operations is essential for retrieval success, but the specific mechanisms of retrieval reinstatement are not well understood. In three experiments, we used saccadic eye movements as a window for examining reinstatement in scene recognition. In Experiment 1, participants viewed complex scenes, while number of study fixations was controlled by using a gaze-contingent paradigm. In Experiment 2, effects of stimulus saliency were minimized by directing participants' eye movements during study. At test, participants made remember/know judgments for each recognized stimulus scene. Both experiments showed that remember responses were associated with more consistent study-test fixations than false rejections (Experiments 1 and 2) and know responses (Experiment 2). In Experiment 3, we examined the causal role of gaze consistency on retrieval by manipulating participants' expectations during recognition. After studying name and scene pairs, each test scene was preceded by the same or different name as during study. Participants made more consistent eye movements following a matching, rather than mismatching, scene name. Taken together, these findings suggest that explicit recollection is a function of perceptual reconstruction and that event memory influences gaze control in this active reconstruction process. |
P. -J. Hsieh; P. U. Tse Grouping inhibits motion fading by giving rise to virtual trackable features Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 33, no. 1, pp. 57–63, 2007. @article{Hsieh2007, After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. The authors show that grouping can slow down the process of "motion fading," suggesting that cortical configural form analysis interacts with the computation of motion signals during motion fading. The authors determined that grouping slows motion fading because it can give rise to trackable features, such as virtual contour terminators not present in the image, that possess stronger motion signals than would occur in the absence of such trackable features. That a continuously rotating stimulus will eventually be perceived to stop, despite the presence of such trackable features, suggests that the motion-from-form system itself can be fatigued. The finding that stationary form can remain visible even after the motion signal has faded suggests that the neural bases of motion visibility and form visibility arise from different neuronal populations. |
J. Hübner; Andreas Sprenger; C. Klein; J. Hagenah; Holger Rambold; C. Zuhlke; D. Kompf; A. Rolfs; H. Kimmig; Christoph Helmchen Eye movement abnormalities in spinocerebellar ataxia type 17 (SCA17) Journal Article In: Neurology, vol. 69, no. 11, pp. 1160–1168, 2007. @article{Huebner2007, BACKGROUND: Spinocerebellar ataxia type 17 (SCA17) is associated with an expansion of CAG/CAA trinucleotide repeats in the gene encoding the TATA-binding protein. In this quantitative characterization of eye movements in SCA17 mutation carriers, we investigated whether eye movement abnormalities originate from multiple lesion sites as suggested by their phenotypic heterogeneity. METHODS: Eye movements (saccades, smooth pursuit) of 15 SCA17 mutation carriers (mean age 36.9 years, range 20 to 54 years; mean disease duration 7.3 years, range 0 to 20 years; 2 clinically unaffected, 13 affected) were compared with 15 age-matched control subjects using the video-based two-dimensional EYELINK II system. RESULTS: Smooth pursuit initiation (step-ramp paradigm) and maintenance were strongly impaired, i.e., pursuit latency was increased and acceleration decreased, whereas latency and position error of the first catch-up saccade were normal. Visually guided saccades were hypometric but had normal velocities. Gaze-evoked nystagmus was found in one-third of the mutation carriers, including downbeat and rebound nystagmus. There was a pathologic increase in error rates of antisaccades (52%) and memory-guided saccades (42%). Oculomotor disorders were not correlated with repeat length. Smooth pursuit impairment and saccadic disorders increased with disease duration. CONCLUSIONS: Several oculomotor deficits of spinocerebellar ataxia type 17 (SCA17) mutation carriers are compatible with cerebellar degeneration. This is consistent with histopathologic and imaging (morphometric) data. In contrast, increased error rates in antisaccades and memory-guided saccades point to a deficient frontal inhibition of reflexive movements, which is probably best explained by cortical dysfunction and may be related to other phenotypic SCA17 signs, e.g., dementia and parkinsonism. |
Vyv C. Huddy; Timothy L. Hodgson; Masuma Kapasi; Stanley H. Mutsatsa; Isobel Harrison; Thomas R. E. Barnes; Eileen M. Joyce Gaze strategies during planning in first-episode psychosis Journal Article In: Journal of Abnormal Psychology, vol. 116, no. 3, pp. 589–598, 2007. @article{Huddy2007, Eye movements were measured during the performance of a computerized Tower of London task to specify the source of planning abnormalities in patients with 1st-episode schizophrenia or schizoaffective disorder. Subjects viewed 2 arrays of colored balls in the upper and lower parts of the screen. They were asked to plan the shortest sequence of moves required to rearrange the balls in the lower screen to match the upper arrangement. Compared with healthy controls, patients made more planning errors, and decision times were longer. However, the patients showed the same gaze biases as controls prior to making a response, indicating that they understood the requirements of the task, approached the task in a strategic manner by identifying the nature of the problem, and used appropriate fixation strategies to plan and elaborate solutions. The patients showed increased duration of long-gaze periods toward both parts of the screen. This suggests that the patients had difficulty in encoding the essential features of the stimulus array. This finding is compatible with slowing of working memory consolidation. |
Falk Huettig; Gerry T. M. Altmann Visual-shape competition during language-mediated attention is based on lexical input and not modulated by contextual appropriateness Journal Article In: Visual Cognition, vol. 15, no. 8, pp. 985–1018, 2007. @article{Huettig2007, Visual attention can be directed immediately, as a spoken word unfolds, towards conceptually related but nonassociated objects, even if they mismatch on other dimensions that would normally determine which objects in the scene were appropriate referents for the unfolding word (Huettig & Altmann, 2005). Here we demonstrate that the mapping between language and concurrent visual objects can also be mediated by visual-shape relations. On hearing "snake", participants directed overt attention immediately, within a visual display depicting four objects, to a picture of an electric cable, although participants had viewed the visual display with four objects for approximately 5 s before hearing the target word, sufficient time to recognize the objects for what they were. The time spent fixating the cable correlated significantly with ratings of the visual similarity between snakes in general and this particular cable. Importantly, with sentences contextually biased towards the concept snake, participants looked at the snake well before the onset of "snake", but they did not look at the visually similar cable until hearing "snake". Finally, we demonstrate that such activation can, under certain circumstances (e.g., during the processing of dominant meanings of homonyms), constrain the direction of visual attention even when it is clearly contextually inappropriate. We conclude that language-mediated attention can be guided by a visual match between spoken words and visual objects, but that such a match is based on lexical input and may not be modulated by contextual appropriateness. |
Falk Huettig; James M. McQueen The tug of war between phonological, semantic and shape information in language-mediated visual search Journal Article In: Journal of Memory and Language, vol. 57, no. 4, pp. 460–482, 2007. @article{Huettig2007a, Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, 'beaker', for example, the display contained phonological (a beaver, bever), shape (a bobbin, klos), and semantic (a fork, vork) competitors. When the display appeared at sentence onset, fixations to phonological competitors preceded fixations to shape and semantic competitors. When display onset was 200 ms before (e.g.) beker, fixations were directed to shape and then semantic competitors, but not phonological competitors. In Experiments 3 and 4, displays contained the printed names of the previously-pictured entities; only phonological competitors were fixated preferentially. These findings suggest that retrieval of phonological, shape and semantic knowledge in the spoken-word and picture-recognition systems is cascaded, and that visual attention shifts are co-determined by the time-course of retrieval of all three knowledge types and by the nature of the information in the visual environment. |
Amelia R. Hunt; Robbie M. Cooper; Clara Hungr; Alan Kingstone The effect of emotional faces on eye movements and attention Journal Article In: Visual Cognition, vol. 15, no. 5, pp. 513–531, 2007. @article{Hunt2007, The present study investigated the nature of attention to facial expressions using an oculomotor capture paradigm. Participants were required to make a speeded saccade toward a predefined target and ignore distractors. The valence (happy or angry) and orientation (upright or inverted) of the target and distractors varied. We found evidence that irrelevant happy and angry face distractors did capture attention, but only when emotions were the target of search. Eye movements were not directed toward angry distractors any more often than toward happy distractors, and saccades to angry face targets were no faster than to other targets. The results provide evidence that emotion information can be used as a feature to voluntarily select targets and direct attention, suggesting attention is not necessary for the identification of emotional expression. There was no evidence, however, that angry face stimuli have a special priority for reflexively orienting attention. |
Amelia R. Hunt; Adrian Mühlenen; Alan Kingstone The time course of attentional and oculomotor capture reveals a common cause Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 33, no. 2, pp. 271–284, 2007. @article{Hunt2007a, Eye movements are often misdirected toward a distractor when it appears abruptly, an effect known as oculomotor capture. Fundamental differences between eye movements and attention have led to questions about the relationship of oculomotor capture to the more general effect of sudden onsets on performance, known as attentional capture. This study explores that issue by examining the time course of eye movements and manual localization responses to targets in the presence of sudden-onset distractors. The results demonstrate that for both response types, the proportion of trials on which responses are erroneously directed to sudden onsets reflects the quality of information about the visual display at a given point in time. Oculomotor capture appears to be a specific instance of a more general attentional capture effect. Differences and similarities between the two types of capture can be explained by the critical idea that the quality of information about a visual display changes over time and that different response systems tend to access this information at different moments in time. |
Samuel B. Hutton; Brendan S. Weekes Low frequency rTMS over posterior parietal cortex impairs smooth pursuit eye tracking Journal Article In: Experimental Brain Research, vol. 183, no. 2, pp. 195–200, 2007. @article{Hutton2007, The role of the posterior parietal cortex in smooth pursuit eye movements remains unclear. We used low frequency repetitive transcranial magnetic stimulation (rTMS) to study the cognitive and neural systems involved in the control of smooth pursuit eye movements. Eighteen participants were tested on two separate occasions. On each occasion we measured smooth pursuit eye tracking before and after 6 min of 1 Hz rTMS delivered at 90% of motor threshold. Low frequency rTMS over the posterior parietal cortex led to a significant reduction in smooth pursuit velocity gain, whereas rTMS over the motor cortex had no effect on gain. We conclude that low frequency offline rTMS is a potentially useful tool with which to explore the cortical systems involved in oculomotor control. |
Reinhold Kliegl; Sarah Risse; Jochen Laubrock Preview benefit and parafoveal-on-foveal effects from word n + 2 Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 33, no. 5, pp. 1250–1255, 2007. @article{Kliegl2007, Using the gaze-contingent boundary paradigm with the boundary placed after word n, the experiment manipulated preview of word n + 2 for fixations on word n. There was no preview benefit for 1st-pass reading on word n + 2, replicating the results of K. Rayner, B. J. Juhasz, and S. J. Brown (2007), but there was a preview benefit on the 3-letter word n + 1, that is, after the boundary but before word n + 2. Additionally, both word n + 1 and word n + 2 exhibited parafoveal-on-foveal effects on word n. Thus, during a fixation on word n and given a short word n + 1, some information is extracted from word n + 2, supporting the hypothesis of distributed processing in the perceptual span. |
Lisa Irmen What's in a (role) name? Formal and conceptual aspects of comprehending personal nouns Journal Article In: Journal of Psycholinguistic Research, vol. 36, no. 6, pp. 431–456, 2007. @article{Irmen2007, Two eye-tracking studies assessed effects of grammatical and conceptual gender cues in generic role name processing in German. Participants read passages about a social or occupational group introduced by way of a generic role name (e.g., Soldaten/soldiers, Künstler/artists). Later in the passage the gender of this group was specified by the anaphoric expression diese Männer/these men or diese Frauen/these women. Testing masculine generic role names of male, female or neutral conceptual gender (Exp. 1) showed that a gender mismatch between the role name's conceptual gender and the anaphor significantly slowed reading immediately before and after the anaphoric noun. A mismatch between the antecedent's grammatical gender and the anaphor slowed down the reading of the anaphoric noun itself. Testing grammatically gender-unmarked role names (Exp. 2) revealed a general male bias in participants' understanding, irrespective of grammatical or conceptual gender. The experiments extend previous findings on gender effects to non-referential role names and generic contexts. Theoretical aspects of gender and plural reference as well as gender information in mental models are discussed. |
David E. Irwin; Laura E. Thomas The effect of saccades on number processing Journal Article In: Perception and Psychophysics, vol. 69, no. 3, pp. 450–458, 2007. @article{Irwin2007, Recent research has shown that saccadic eye movements interfere with dorsal-stream tasks such as judgments of object orientation, but not with ventral-stream tasks such as object recognition. Because saccade programming and execution also rely on the dorsal stream, it has been hypothesized that cognitive saccadic suppression occurs as a result of dual-task interference within the dorsal stream. Judging whether one number is larger or smaller than another (magnitude comparison) is a dorsal-stream task that relies especially on the right parietal cortex. In contrast, judging whether a number is odd or even (parity judgment) does not involve the dorsal stream. In the present study, one group of subjects judged whether two-digit numbers were greater than or less than 65, whereas another group judged whether two-digit numbers were odd or even. Subjects in both groups made these judgments while making no, short, or long saccades. Saccade distance had no effect on parity judgments, but reaction times to make magnitude comparison judgments increased with saccade distance when the eyes moved from right to left. Because the right parietal cortex is instrumental in generating leftward saccades, these results provide further evidence for the hypothesis that cognitive suppression during saccades occurs as a result of dual-task interference within the dorsal stream. |
Roxane J. Itier; Christina Villate; Jennifer D. Ryan Eyes always attract attention but gaze orienting is task-dependent: Evidence from eye movement monitoring Journal Article In: Neuropsychologia, vol. 45, no. 5, pp. 1019–1028, 2007. @article{Itier2007, Eyes and gaze are central to social cognition but whether they attract attention differently depending on the task is unknown. Here, the shift in attention towards the eye region and gaze direction of a perceived face was studied in two tasks by monitoring eye movements. The same face stimuli in front- or 3/4-view, with direct or averted gaze, were used in both tasks. In the Gaze task, subjects performed an explicit gaze direction judgment (gaze straight or averted) while in the Head task they performed a head orientation judgment (front- or 3/4-view). Gaze processing was evident in both tasks as shown by longer RTs and lower accuracy when head and gaze directions did not match. In both tasks the eye region was the most attended area but the amount of viewing was task-dependent. Most importantly, ∼90% of the initial saccades landed in the eye region in the Gaze task but only ∼50% of them did so in the Head task. These saccades were made in the direction signaled by gaze in the Gaze task but in the direction signaled by head orientation in the Head task. Altogether, these task-modulated behaviors argue against a purely exogenous and automatic orienting-to-gaze mechanism. Based on patient work and neuroimaging studies of gaze processing, we suggest that this task-dependent orienting behavior is rather endogenous and subtended by cortical areas amongst which frontal regions play a central role. We discuss the implications of this finding for clinical populations. |
Stephanie Jainta; Jörg Hoormann; W. Jaschinski Objective and subjective measures of vergence step responses Journal Article In: Vision Research, vol. 47, no. 26, pp. 3238–3246, 2007. @article{Jainta2007, Dichoptic nonius lines are used for subjectively (psychophysically) measuring vergence states, but they have been questioned as valid indicators of vergence eye position. In a mirror-stereoscope, we presented convergent and divergent step-stimuli and estimated the vergence response with nonius lines flashed at fixed delays after the disparity step stimulus. For each delay, an adaptive psychophysical procedure was run to determine the physical nonius offset required for subjective alignment; these vergence states were compared with objective eye movement recordings. Between both measures of initial vergence, we calculated the maximal cross-correlation coefficient: the median in our sample was about 0.9 for convergence and divergence, suggesting a good agreement. Relative to the objective measures, the subjective method revealed a smaller vergence velocity and a larger vergence response in the final phase of the response, but both measures were well correlated. The dynamic nonius test is therefore considered to be useful to relatively evaluate a subject's ability in disparity vergence. |
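The agreement measure reported here, a maximal cross-correlation between objective and subjective vergence traces, can be computed generically as in the sketch below (synthetic data for illustration; this is not the authors' analysis code).

    import numpy as np

    def max_crosscorrelation(trace_a, trace_b):
        """Maximal normalized cross-correlation between two equally long, equally
        sampled traces, searched over all relative lags."""
        a = (trace_a - trace_a.mean()) / trace_a.std()
        b = (trace_b - trace_b.mean()) / trace_b.std()
        xcorr = np.correlate(a, b, mode="full") / len(a)
        return xcorr.max()

    # Synthetic example: a slightly delayed, noisy copy of an idealized vergence step response
    t = np.linspace(0, 1, 500)
    objective = 1 - np.exp(-t / 0.15)
    subjective = np.roll(objective, 20) + 0.05 * np.random.default_rng(0).standard_normal(500)
    print(max_crosscorrelation(objective, subjective))   # close to 1 for well-matched traces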
Denis Drieghe; Timothy Desmet; Marc Brysbaert How important are linguistic factors in word skipping during reading? Journal Article In: British Journal of Psychology, vol. 98, no. 1, pp. 157–171, 2007. @article{Drieghe2007, The probability of skipping a word is influenced by its processing ease. For instance, a word that is predictable from the preceding context is skipped more often than an unpredictable word. A meta-analysis of studies examining this predictability effect reported effect sizes ranging from 0 to 13%, with an average of 8%. One study does not fit within this picture and reported 23% more skipping of Dutch pronouns in sentences in which the pronoun had no disambiguating value (e.g. 'Mary was envious of Helen because she never looked so good') than in sentences where it did have a disambiguating value (e.g. 'Mary was envious of Albert because she never looked so good'). We re-examined this ambiguity in Dutch using a task that more closely resembles normal reading and observed only a 9% difference in skipping of the pronoun, bringing this linguistic effect in line with the other findings. |
Jason A. Droll; Krista Gigone; Mary Hayhoe Learning where to direct gaze during change detection Journal Article In: Journal of Vision, vol. 7, no. 14, pp. 1–12, 2007. @article{Droll2007, Where do observers direct their attention in complex scenes? Previous work on the cognitive control of fixation patterns in natural environments suggests that subjects must learn where to direct attention and gaze. We examined this question in the context of a change blindness paradigm, where some objects were more likely to undergo a change in orientation than others. The experiments revealed that observers are capable of learning the frequency with which objects undergo a change, and that this learning is manifested in the distribution of gaze among objects in the scene, as well as in the reaction time for detecting visual changes, and the frequency of localizing changing objects. However, observers were much less sensitive to the conditional probability of a second feature, border color, predicting a change in orientation. We conclude that striking demonstrations of change blindness may reflect not only the constraints of attention and working memory, but also what observers have learnt about what information to attend and select for storage during the task of change detection. Such exploitation of the frequency of change suggests that gaze allocation is sensitive to the probabilistic structure of the environment. |
Paola E. Dussias; Nuria Sagarra The effect of exposure on syntactic parsing in Spanish - English bilinguals Journal Article In: Bilingualism: Language and Cognition, vol. 10, no. 1, pp. 101–116, 2007. @article{Dussias2007, An eye tracking experiment examined how exposure to a second language (L2) influences sentence parsing in the first language. Forty-four monolingual Spanish speakers, 24 proficient Spanish - English bilinguals with limited immersion experience in the L2 environment and 20 proficient Spanish - English bilinguals with extensive L2 immersion experience read temporarily ambiguous constructions. The ambiguity concerned whether a relative clause (RC) that appeared after a complex noun phrase (NP) was interpreted as modifying the first or the second noun in the complex NP (El policía arrestó a la hermana del criado que estaba enferma desde hacía tiempo). The results showed that whereas the Spanish monolingual speakers and the Spanish - English bilinguals with limited exposure reliably attached the relative clause to the first noun, the Spanish - English bilinguals with extensive exposure attached the relative clause to the second noun. Results are discussed in terms of models of sentence parsing most consistent with the findings. |
Wouter Duyck; Eva Van Assche; Denis Drieghe; Robert J. Hartsuiker Visual word recognition by bilinguals in a sentence context: Evidence for nonselective lexical access Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 4, pp. 663–679, 2007. @article{Duyck2007, Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment, Dutch-English bilinguals performing a 2nd language (L2) lexical decision task were faster to recognize identical and nonidentical cognate words (e.g., banaan-banana) presented in isolation than control words. A second experiment replicated this effect when the same set of cognates was presented as the final words of low-constraint sentences. In a third experiment that used eyetracking, the authors showed that early target reading time measures also yield cognate facilitation but only for identical cognates. These results suggest that a sentence context may influence, but does not nullify, cross-lingual lexical interactions during early visual word recognition by bilinguals. |
Géry D'Ydewalle; Wim De Bruycker Eye movements of children and adults while reading television subtitles Journal Article In: European Psychologist, vol. 12, no. 3, pp. 196–205, 2007. @article{DYdewalle2007, Eye movements of children (Grade 5–6) and adults were monitored while they were watching a foreign language movie with either standard (foreign language soundtrack and native language subtitling) or reversed (foreign language subtitles and native language soundtrack) subtitling. With standard subtitling, reading behavior in the subtitle was observed, but there was a difference between one- and two-line subtitles. As two lines of text contain verbal information that cannot easily be inferred from the pictures on the screen, more regular reading occurred; a single text line is often redundant to the information in the picture, and accordingly less reading of one-line text was apparent. Reversed subtitling showed even more irregular reading patterns (e.g., more subtitles skipped, fewer fixations, longer latencies). No substantial age differences emerged, except that children took longer to shift attention to the subtitle at its onset, and showed longer fixations and shorter saccades in the text. On the whole, the results demonstrated the flexibility of the attentional system and its tuning to the several information sources available (image, soundtrack, and subtitles). |
Julie A. Van Dyke Interference effects from grammatically unavailable constituents during sentence processing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 2, pp. 407–430, 2007. @article{Dyke2007, Evidence from 3 experiments reveals interference effects from structural relationships that are inconsistent with any grammatical parse of the perceived input. Processing disruption was observed when items occurring between a head and a dependent overlapped with either (or both) syntactic or semantic features of the dependent. Effects of syntactic interference occur in the earliest online measures in the region where the retrieval of a long-distance dependent occurs. Semantic interference effects occur in later online measures at the end of the sentence. Both effects endure in offline comprehension measures, suggesting that interfering items participate in incorrect interpretations that resist reanalysis. The data are discussed in terms of a cue-based retrieval account of parsing, which reconciles the fact that the parser must violate the grammar in order for these interference effects to occur. Broader implications of this research indicate a need for a precise specification of the interface between the parsing mechanism and the memory system that supports language comprehension. |
Miguel P. Eckstein; Brent R. Beutter; Binh T. Pham; Steven S. Shimozaki; Leland S. Stone Similar neural representations of the target for saccades and perception during search Journal Article In: Journal of Neuroscience, vol. 27, no. 6, pp. 1266–1270, 2007. @article{Eckstein2007, Are the body's actions and the mind's perceptions the result of shared neural processing, or are they performed largely independently? The brain has two major processing streams, and some have proposed that this division segregates visual processing for action and perception. The ventral pathway is claimed to support conscious experience (perception), whereas the dorsal pathway is claimed to support the control of movement (action). Others have argued that perception and action share much of their visual processing within the primate cortex. During visual search, the brain performs a sophisticated deployment of eye movements (saccadic actions) to gather information to subserve perceptual judgments. The relationship between the neural mechanisms mediating perception and action in visual search remains unexplored. Here, we investigate the visual representation of target information in the human brain, both for perceptual decisions and for saccadic actions during visual search. We use classification image analysis, a form of reverse correlation, to estimate the behavioral receptive fields of the visual mechanisms responsible for saccadic and perceptual responses during the same visual search task. Results show that the behavioral receptive fields mediating the perceptual decisions are indistinguishable from those driving the oculomotor decisions, suggesting that similar neural mechanisms are responsible for both perception and oculomotor action during search. Diverging target representations would result in an inefficient coupling between eye movement planning and perceptual judgments. Thus, a common target representation would be more optimal and might be expected to have evolved through natural selection in the neural systems responsible for visual search. |
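Classification image analysis, the reverse-correlation technique used in this study, can be summarized compactly: each stimulus is embedded in known external noise, the noise fields are averaged separately according to the observer's response, and the difference of those averages approximates the linear template (behavioral receptive field) driving the decision. The following is a generic sketch of that idea using a simulated observer; every name and parameter is an assumption for illustration rather than the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_trials = 16, 5000

template = np.zeros((size, size))
template[6:10, 6:10] = 1.0                        # simulated observer's internal target template

noise_yes = np.zeros((size, size)); n_yes = 0     # noise fields on "target seen" responses
noise_no = np.zeros((size, size)); n_no = 0       # noise fields on "not seen" responses

for _ in range(n_trials):
    noise = rng.normal(0.0, 1.0, (size, size))    # known external noise added to the display
    # Linear-observer decision: template-noise correlation plus internal noise.
    if (noise * template).sum() + rng.normal(0.0, 2.0) > 0:
        noise_yes += noise; n_yes += 1
    else:
        noise_no += noise; n_no += 1

classification_image = noise_yes / n_yes - noise_no / n_no
# The brightest region of the classification image recovers the template location.
print(np.unravel_index(np.argmax(classification_image), classification_image.shape))
```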
Tom Foulsham; Geoffrey Underwood How does the purpose of inspection influence the potency of visual salience in scene perception? Journal Article In: Perception, vol. 36, no. 8, pp. 1123–1138, 2007. @article{Foulsham2007, Salience-map models have been taken to suggest that the locations of eye fixations are determined by the extent of the low-level discontinuities in an image. While such models have found some support, an increasing emphasis on the task viewers are performing implies that these models must combine with cognitive demands to describe how the eyes are guided efficiently. An experiment is reported in which eye movements to objects in photographs were examined while viewers performed a memory-encoding task or one of two search tasks. The objects depicted in the scenes had known salience ranks according to a popular model. Participants fixated higher-salience objects sooner and more often than lower-salience objects, but only when memorising scenes. This difference shows that salience-map models provide useful predictions even in complex scenes and late in viewing. However, salience had no effects when searching for a target defined by category or exemplar. The results suggest that salience maps are not used to guide the eyes in these tasks, that cognitive override by task demands can be total, and that modelling top-down search is important but may not be easily accomplished within a salience-map framework. |
Jay A. Edelman; Árni Kristjánsson; Ken Nakayama The influence of object-relative visuomotor set on express saccades Journal Article In: Journal of Vision, vol. 7, no. 6, pp. 1–13, 2007. @article{Edelman2007, Express saccades are considered to have the shortest latency (70-110 ms) of all saccadic eye movements. The influence of visuomotor set, preparatory processes that spatially affect a sensorimotor response, on express saccades was examined by instructing human subjects to make a saccade to one of two simultaneously appearing spots defined by its position relative to the other. A temporal gap between fixation point disappearance and target appearance was used to facilitate the production of express saccades. For all subjects, the instruction influenced the vector of express saccades without increasing saccade latency. The effect on express saccades was only slightly weaker than that for longer latency saccades. Saccade curvature was minimal and did not depend strongly on task. Further experiments demonstrated that the effect of instruction on express saccade vector was much weaker when saccades were instructed to be made to one side of a single small spot, that the effect of instruction was equally strong when directing saccades to the less salient of two stimuli, and that an instruction could not only determine the direction of the effect but also modulate the effect's magnitude. The effect of instruction on saccade vector was no higher when blocked than when varied across trials. These results suggest that express saccades are influenced by object-relative spatial preparatory processes without increasing their reaction time and, thus, that high-level cognitive processes can influence the most reflexive of saccadic eye movements. |
Erik E. Emeric; Joshua W. Brown; Leanne Boucher; Roger H. S. Carpenter; Doug P. Hanes; Robin Harris; Gordon D. Logan; Reena N. Mashru; Martin Paré; Pierre Pouget; Veit Stuphorn; Tracy L. Taylor; Jeffrey D. Schall Influence of history on saccade countermanding performance in humans and macaque monkeys Journal Article In: Vision Research, vol. 47, no. 1, pp. 35–49, 2007. @article{Emeric2007, The stop-signal or countermanding task probes the ability to control action by requiring subjects to withhold a planned movement in response to an infrequent stop signal which they do with variable success depending on the delay of the stop signal. We investigated whether performance of humans and macaque monkeys in a saccade countermanding task was influenced by stimulus and performance history. In spite of idiosyncrasies across subjects several trends were evident in both humans and monkeys. Response time decreased after successive trials with no stop signal. Response time increased after successive trials with a stop signal. However, post-error slowing was not observed. Increased response time was observed mainly or only after cancelled (signal inhibit) trials and not after noncancelled (signal respond) trials. These global trends were based on rapid adjustments of response time in response to momentary fluctuations in the fraction of stop signal trials. The effects of trial sequence on the probability of responding were weaker and more idiosyncratic across subjects when stop signal fraction was fixed. However, both response time and probability of responding were influenced strongly by variations in the fraction of stop signal trials. These results indicate that the race model of countermanding performance requires extension to account for these sequential dependencies and provide a basis for physiological studies of executive control of countermanding saccade performance. |
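The race model mentioned in the conclusion treats every stop-signal trial as a race between an independent GO process and a STOP process launched at the stop-signal delay (SSD); the saccade is executed only if GO finishes first, which yields an inhibition function that rises with SSD. A minimal Monte Carlo sketch of that logic follows; the parameter values are arbitrary assumptions chosen only to illustrate the shape of the prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_respond(ssd, n=20000, go_mu=250.0, go_sd=50.0, ssrt=100.0):
    """Predicted probability of a noncancelled (signal-respond) saccade at a given SSD (ms)."""
    go_finish = rng.normal(go_mu, go_sd, n)       # GO process finishing times
    stop_finish = ssd + ssrt                      # STOP finishes one SSRT after the stop signal
    return float(np.mean(go_finish < stop_finish))  # GO wins the race -> saccade is made

for ssd in (50, 100, 150, 200, 250):
    print(ssd, round(p_respond(ssd), 2))          # inhibition function rises with SSD
```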
David T. Field; R. M. Wilkie; J. P. Wann Neural systems in the visual control of steering Journal Article In: Journal of Neuroscience, vol. 27, no. 30, pp. 8002–8010, 2007. @article{Field2007, Visual control of locomotion is essential for most mammals and requires coordination between perceptual processes and action systems. Previous research on the neural systems engaged by self-motion has focused on heading perception, which is only one perceptual subcomponent. For effective steering, it is necessary to perceive an appropriate future path and then bring about the required change to heading. Using functional magnetic resonance imaging in humans, we reveal a role for the parietal eye fields (PEFs) in directing spatially selective processes relating to future path information. A parietal area close to PEFs appears to be specialized for processing the future path information itself. Furthermore, a separate parietal area responds to visual position error signals, which occur when steering adjustments are imprecise. A network of three areas, the cerebellum, the supplementary eye fields, and dorsal premotor cortex, was found to be involved in generating appropriate motor responses for steering adjustments. This may reflect the demands of integrating visual inputs with the output response for the control device. |
Alison Firestone; Nicholas B. Turk-Browne; Jennifer D. Ryan Age-related deficits in face recognition are related to underlying changes in scanning behavior Journal Article In: Aging, Neuropsychology, and Cognition, vol. 14, no. 6, pp. 594–607, 2007. @article{Firestone2007, Previous studies demonstrating age-related impairments in recognition memory for faces are suggestive of underlying differences in face processing. To study these differences, we monitored eye movements while younger and older adults viewed younger and older faces. Compared to the younger group, older adults showed increased sampling of facial features, and more transitions. However, their scanning behavior was most similar to the younger group when looking at older faces. Moreover, while older adults exhibited worse recognition memory than younger adults overall, their memory was more accurate for older faces. These findings suggest that age-related differences in recognition memory for faces may be related to changes in scanning behavior, and that older adults may use social group status as a compensatory processing strategy. |
Stephani Foraker; Brian McElree The role of prominence in pronoun resolution: Active versus passive representations Journal Article In: Journal of Memory and Language, vol. 56, no. 3, pp. 357–383, 2007. @article{Foraker2007, A prominent antecedent facilitates anaphor resolution. Speed-accuracy tradeoff modeling in Experiments 1 and 3 indicated that clefting did not affect the speed of accessing an antecedent representation, which is inconsistent with claims that discourse-focused information is actively maintained in focal attention [e.g., Gundel, J. K. (1999). On different kinds of focus. In P. Bosch & R. van der Sandt, (Eds.), Focus: Linguistic, cognitive, and computational perspectives. Cambridge: Cambridge University Press]. Rather, clefting simply increased the likelihood of retrieving the antecedent representation, suggesting that clefting only increases the strength of a representation in memory. Eye fixation measures in Experiment 2 showed that clefting did not affect early bonding of the pronoun and antecedent, but did ease later integration. Collectively, the results indicate that clefting made antecedent representations more distinctive in working memory, hence more available for subsequent discourse operations. Pronoun type also affected resolution processes. Gendered pronouns (he or she) were interpreted more accurately than an ungendered pronoun (it), and in one case, earlier in time-course. We argue that both effects are due to the greater ambiguity of it, as a cue to retrieve the correct antecedent representation. |
Ian T. Everdell; Heidi Marsh; Micheal D. Yurick; Kevin G. Munhall; Martin Paré Gaze behaviour in audiovisual speech perception: Asymmetrical distribution of face-directed fixations Journal Article In: Perception, vol. 36, no. 10, pp. 1535–1545, 2007. @article{Everdell2007, Speech perception under natural conditions entails integration of auditory and visual information. Understanding how visual and auditory speech information are integrated requires detailed descriptions of the nature and processing of visual speech information. To understand better the process of gathering visual information, we studied the distribution of face-directed fixations of humans performing an audiovisual speech perception task to characterise the degree of asymmetrical viewing and its relationship to speech intelligibility. Participants showed stronger gaze fixation asymmetries while viewing dynamic faces, compared to static faces or face-like objects, especially when gaze was directed to the talkers' eyes. Although speech perception accuracy was significantly enhanced by the viewing of congruent, dynamic faces, we found no correlation between task performance and gaze fixation asymmetry. Most participants preferentially fixated the right side of the faces and their preferences persisted while viewing horizontally mirrored stimuli, different talkers, or static faces. These results suggest that the asymmetrical distributions of gaze fixations reflect the participants' viewing preferences, rather than being a product of asymmetrical faces, but that this behavioural bias does not predict correct audiovisual speech perception. |
Christopher A. Dickinson; Gregory J. Zelinsky Memory for the search path: Evidence for a high-capacity representation of search history Journal Article In: Vision Research, vol. 47, no. 13, pp. 1745–1755, 2007. @article{Dickinson2007, Using a gaze-contingent paradigm, we directly measured observers' memory capacity for fixated distractor locations during search. After approximately half of the search objects had been fixated, they were masked and a spatial probe appeared at either a previously fixated location or a non-fixated location; observers then rated their confidence that the target had appeared at the probed location. Observers were able to differentiate the 12 most recently fixated distractor locations from non-fixated locations, but analyses revealed that these locations were represented fairly coarsely. We conclude that there exists a high-capacity, but low-resolution, memory for a search path. |
Adele Diederich; Hans Colonius Why two "Distractors" are better than one: Modeling the effect of non-target auditory and tactile stimuli on visual saccadic reaction time Journal Article In: Experimental Brain Research, vol. 179, no. 1, pp. 43–54, 2007. @article{Diederich2007, Saccadic reaction time (SRT) was measured in a focused attention task with a visual target stimulus (LED) and auditory (white noise burst) and tactile (vibration applied to palm) stimuli presented as non-targets at five different onset times (SOAs) with respect to the target. Mean SRT was reduced (i) when the number of non-targets was increased and (ii) when target and non-targets were all presented in the same hemifield; (iii) this facilitation first increases and then decreases as the time point of presenting the non-targets is shifted from early to late relative to the target presentation. These results are consistent with the time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000-1009, 2004) which distinguishes a peripheral stage of independent sensory channels racing against each other from a second stage of neural integration of the input and preparation of an oculomotor response. Cross-modal interaction manifests itself in an increase or decrease of second stage processing time. For the first time, without making specific distributional assumptions on the processing times, TWIN is shown to yield numerical estimates for the facilitative effects of the number of non-targets and of the spatial configuration of target and non-targets. More generally, the TWIN model framework suggests that multisensory integration is a function of unimodal stimulus properties, like intensity, in the first stage and of cross-modal stimulus properties, like spatial disparity, in the second stage. |
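The two-stage logic of the TWIN model can be made concrete with a small simulation: target and non-target race through independent peripheral (first-stage) processes, and integration shortens the second stage only on trials where the non-target finishes first and within a fixed time window of the target. The sketch below is a schematic illustration under assumed exponential first-stage times; none of the parameter values are estimates from these studies.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_srt(soa, n=50000, mean_v=60.0, mean_a=40.0, window=200.0,
             second_stage=130.0, gain=40.0):
    """Mean saccadic RT (ms) to a visual target with an accessory non-target at a given SOA."""
    v = rng.exponential(mean_v, n)                # first-stage time of the visual target channel
    a = soa + rng.exponential(mean_a, n)          # first-stage time of the non-target channel
    integrate = (a < v) & (v - a < window)        # non-target wins and lands inside the window
    return float(np.mean(v + second_stage - gain * integrate))

unimodal = mean_srt(soa=10_000)                   # non-target far too late: effectively no integration
for soa in (-100, -50, 0, 50, 100):
    print(soa, round(unimodal - mean_srt(soa), 1))  # predicted multisensory facilitation in ms
```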
Adele Diederich; Hans Colonius Modeling spatial effects in visual-tactile saccadic reaction time Journal Article In: Perception and Psychophysics, vol. 69, no. 1, pp. 56–67, 2007. @article{Diederich2007a, Saccadic reaction time (SRT) to visual targets tends to be shorter when nonvisual stimuli are presented in close temporal or spatial proximity, even when subjects are instructed to ignore the accessory input. Here, we investigate visual-tactile interaction effects on SRT under varying spatial configurations. SRT to bimodal stimuli was reduced by up to 30 msec, in comparison with responses to unimodal visual targets. In contrast to previous findings, the amount of multisensory facilitation did not decrease with increases in the physical distance between the target and the nontarget but depended on (1) whether the target and the nontarget were presented in the same hemifield (ipsilateral) or in different hemifields (contralateral), (2) the eccentricity of the stimuli, and (3) the frequency of the vibrotactile nontarget. The time-window-of-integration (TWIN) model for SRT (Colonius & Diederich, 2004) is shown to yield an explicit characterization of the observed multisensory spatial interaction effects through the removal of the peripheral-processing effects of stimulus location and tactile frequency. |
Annie Roy-Charland; Jean Saint-Aubin; Mary Ann Evans Eye movements in shared book reading with children from kindergarten to Grade 4 Journal Article In: Reading and Writing, vol. 20, no. 9, pp. 909–931, 2007. @article{RoyCharland2007, Previous studies have revealed that preschool-age children who are not yet readers pay little attention to written text in a shared book reading situation (see Evans & Saint-Aubin, 2005). The current study was aimed at investigating the constancy of these results across reading development, by monitoring eye movements in shared book reading, for children from kindergarten to Grade 4. Children were read books of three difficulty levels. The results revealed a higher proportion of time, a higher proportion of landing positions, and a higher proportion of reading-like saccades on the text as grade level increased and as reading skills improved. More precisely, there was a link between the difficulty of the material and attention to text. Children spent more time on a text that was within their reading abilities than when the book difficulty exceeded their reading skills. |
Annie Roy-Charland; Jean Saint-Aubin; Raymond M. Klein; Michael A. Lawrence Eye movements as direct tests of the GO model for the missing-letter effect Journal Article In: Perception and Psychophysics, vol. 69, no. 3, pp. 324–337, 2007. @article{RoyCharland2007a, When asked to detect target letters while reading a text, participants miss more letters in frequently occurring function words than in less frequent content words. To account for this pattern of results, known as the missing-letter effect, Greenberg, Healy, Koriat, and Kreiner proposed the guidance-organization (GO) model, which integrates the two leading models of the missing-letter effect while incorporating innovative assumptions based on the literature on eye movements during reading. The GO model was evaluated by monitoring the eye movements of participants while they searched for a target letter in a continuous text display. Results revealed the usual missing-letter effect, and many empirical benchmark effects in eye movement literature were observed. However, contrary to the predictions of the GO model, response latencies were longer for function words than for content words. Alternative models are discussed that can accommodate both error and response latency data for the missing-letter effect. |
Ueli Rutishauser; Christof Koch Probabilistic modeling of eye movement data during conjunction search via feature-based attention Journal Article In: Journal of Vision, vol. 7, no. 6, pp. 1–20, 2007. @article{Rutishauser2007, Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. We here engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: Color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings of V4 and frontal eye fields (FEF) neurons and predicts the gain modulation of these cells. |
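The conditional probabilities that drive this model's inference are simply the proportions of fixations landing on distractors that share a given feature with the current target. A toy version of that bookkeeping is shown below; the fixation log and feature coding are invented purely to illustrate the computation.

```python
from collections import Counter

# Hypothetical fixation log from a color/size/orientation conjunction search:
# each record notes which features the fixated distractor shared with the target.
fixations = [
    {"color": True,  "size": False, "orientation": False},
    {"color": True,  "size": True,  "orientation": False},
    {"color": False, "size": True,  "orientation": False},
    {"color": True,  "size": False, "orientation": True},
    {"color": True,  "size": True,  "orientation": False},
]

shared_counts = Counter()
for fix in fixations:
    for feature, shared in fix.items():
        shared_counts[feature] += int(shared)

for feature in ("color", "size", "orientation"):
    p = shared_counts[feature] / len(fixations)
    print(f"P(fixated distractor shares {feature} with target) = {p:.2f}")
```

A larger estimated probability for one feature dimension (here, color) is the signature of stronger top-down biasing of that dimension during search.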
Jennifer D. Ryan; Grace Leung; Nicholas B. Turk-Browne; Lynn Hasher Assessment of age-related changes in inhibition and binding using eye movement monitoring Journal Article In: Psychology and Aging, vol. 22, no. 2, pp. 239–250, 2007. @article{Ryan2007, Age-related memory deficits may result from attending to too much information (inhibition deficit) and/or storing too little information (binding deficit). The present study evaluated the inhibition and binding accounts by exploiting a situation in which deficits of inhibition should benefit relational memory binding. Older adults directed more viewing toward abrupt onsets in scenes compared with younger adults under instructions to ignore any such onsets, providing evidence for age-related inhibitory deficits, which were ameliorated with additional practice. Subsequently, objects that served as abrupt onsets underwent changes in their spatial relations. Despite successful inhibition of the onsets, eye movements of younger adults were attracted to manipulated objects. In contrast, the eye movements of older adults, who directed more viewing to the late onsets compared with younger adults, were not attracted toward manipulated regions. Similar differences between younger and older adults in viewing of manipulated regions were observed under free viewing conditions. These findings provide evidence for concurrent inhibition and binding deficits in older adults and demonstrate that age-related declines in inhibitory processing do not lead to enhanced relational memory for extraneous information. |
Nicola Rycroft; Samuel B. Hutton; O. Clowry; C. Groomsbridge; A. Sierakowski; Jennifer M. Rusted Non-cholinergic modulation of antisaccade performance: A modafinil-nicotine comparison Journal Article In: Psychopharmacology, vol. 195, no. 2, pp. 245–253, 2007. @article{Rycroft2007, INTRODUCTION: The antisaccade task provides a powerful tool with which to investigate the cognitive and neural systems underlying goal-directed behaviour, particularly in situations when the correct behavioural response requires the suppression of a prepotent response. Antisaccade errors (failures to suppress reflexive prosaccades towards sudden-onset targets) are increased in patients with damage to the dorsolateral prefrontal cortex, and in patients with schizophrenia. Nicotine has been found to improve antisaccade performance in patients with schizophrenia and healthy controls. This performance enhancing effect may be due to direct effects on the cholinergic system, but there has been no test of this hypothesis. MATERIALS AND METHODS: In a double blind, double dummy, placebo-controlled design, we compared the effect of nicotine and modafinil, a putative indirect noradrenergic agonist, on antisaccade performance in healthy non-smokers. RESULTS AND DISCUSSION: Both compounds reduced latency for correct antisaccades, although neither reduced antisaccade errors. These findings are discussed with reference to the pharmacological route of performance enhancement on the antisaccade task and current models of antisaccade performance. |