All EyeLink Publications
All 10,000+ peer-reviewed EyeLink research publications up until 2021 (with some early 2022s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Cliodhna Quigley; Selim Onat; Sue Harding; Martin Cooke; Peter König
Audio-visual integration during overt visual attention Journal Article
In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 4, 2008.
How do different sources of information arising from different modalities interact to control where we look? To answer this question with respect to real-world operational conditions we presented natural images and spatially localized sounds in (V)isual, Audiovisual (AV) and (A)uditory conditions and measured subjects' eye-movements. Our results demonstrate that eye-movements in AV conditions are spatially biased towards the part of the image corresponding to the sound source. Interestingly, this spatial bias is dependent on the probability of a given image region to be fixated (saliency) in the V condition. This indicates that fixation behaviour during the AV conditions is the result of an integration process. Regression analysis shows that this integration is best accounted for by a linear combination of unimodal saliencies.
Ralph Radach; Lynn Huestegge; Ronan G. Reilly
In: Psychological Research, vol. 72, no. 6, pp. 675–688, 2008.
Although the development of the field of reading has been impressive, there are a number of issues that still require much more attention. One of these concerns the variability of skilled reading within the individual. This paper explores the topic in three ways: (1) it quantifies the extent to which two factors, the specific reading task (comprehension vs. word verification) and the format of reading material (sentence vs. passage), influence the temporal aspects of reading as expressed in word-viewing durations; (2) it examines whether they also affect visuomotor aspects of eye-movement control; and (3) it determines whether they can modulate local lexical processing. The results reveal reading as a dynamic, interactive process involving semi-autonomous modules, with top-down influences clearly evident in the eye-movement record.
Christoph Rasche; Karl R. Gegenfurtner
Orienting during gaze guidance in a letter-identification task Journal Article
In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–10, 2008.
The idea of gaze guidance is to lead a viewer's gaze through a visual display in order to facilitate the viewer's search for specific information in the least obtrusive manner. This study investigates saccadic orienting when a viewer is guided in a fast-paced, low-contrast letter-identification task. Despite the task's difficulty, and although guiding cues were adjusted to gaze eccentricity, observers preferred attentional over saccadic shifts to obtain a letter-identification judgment; and if a saccade was carried out, its constant error was 50%. From these results we derive a number of design recommendations for the process of gaze guidance.
Keith Rayner; Brett Miller; Caren M. Rotello
In: Applied Cognitive Psychology, vol. 22, no. 5, pp. 697–707, 2008.
Viewers looked at print advertisements as their eye movements were recorded. Half of them were asked to rate how much they liked each ad (for convenience, we will generally use the term 'ad' from this point on), while the other half were asked to rate the effectiveness of each ad. Previous research indicated that viewers who were asked to consider purchasing products in the ads looked at the text earlier and more often than the picture part of the ad. In contrast, viewers in the present experiment looked at the picture part of the ad earlier and longer than the text. The results indicate quite clearly that the goal of the viewer very much influences where (and for how long) viewers look at different parts of ads, but also indicate that the nature of the ad per se matters.
Paul Reeve; James J. Clark; J. Kevin O'Regan
In: Journal of Vision, vol. 8, no. 13, pp. 1–19, 2008.
Visual space is sometimes said to be "compressed" before saccadic eye movements. The most central evidence for this hypothesis is a converging pattern of localization errors on single flashes presented close to saccade time under certain conditions. An intuitive version of the compression hypothesis predicts that the reported distance between simultaneous, spatially separated presaccadic flashes should contract in the same way as their individual locations. In our experiment we tested this prediction by having subjects perform one of two tasks on stimuli made up of two bars simultaneously flashed near saccade time: either localizing one of the bars or judging the separation between the two. Localization judgments showed the previously observed converging pattern over the 50-100 ms before saccades. Contractions in perceived separation between the two bars were not accurately predicted by this pattern: they occurred mainly during saccades and were much weaker than convergence in localization. Different forms of spatial information about flashed stimuli can be differentially modulated before, during, and after saccades. Structural alterations in the perceptual field around saccades may explain these different effects, but alternative hypotheses based on decision making under uncertainty and on the influence of other perisaccadic mechanisms are also consistent with this and other evidence.
Erik D. Reichle; Polina M. Vanyukov; Patryk A. Laurent; Tessa Warren
In: Vision Research, vol. 48, no. 17, pp. 1831–1836, 2008.
This paper presents an experiment investigating attention allocation in four tasks requiring varied degrees of lexical processing of 1-4 simultaneously displayed words. Response times and eye movements were only modestly affected by the number of words in an asterisk-detection task but increased markedly with the number of words in letter-detection, rhyme-judgment, and semantic-judgment tasks, suggesting that attention may not be serial for tasks that do not require significant lexical processing (e.g., detecting visual features), but is approximately serial for tasks that do (e.g., retrieving word meanings). The implications of these results for models of readers' eye movements are discussed.
Kathleen Pirog Revill; Michael K. Tanenhaus; Richard N. Aslin
Context and spoken word recognition in a novel lexicon Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 5, pp. 1207–1223, 2008.
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access–selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access–selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models.
Frédéric P. Rey; Thanh Thuan Lê; René Bertin; Zoï Kapoula
In: Auris Nasus Larynx, vol. 35, no. 2, pp. 185–191, 2008.
Objective: There is a discrepancy in the literature about the effect of saccades on postural control: some studies reported a stabilizing effect, others the opposite. Perturbation of posture by saccades could be related to loss of vision during saccades (saccadic suppression) due to high-velocity retinal slip. On the other hand, efferent and afferent proprioceptive signals related to saccades can be used to obtain spatial stability over saccades and maintain good postural control. In natural conditions saccades can be horizontal or vertical and made at different distances. The present study examines all these parameters to provide a more complete view of the role of saccades in postural control in quiet stance. Methods: Horizontal or vertical saccades of 30° were made at 1 Hz and at two distances, 40 and 200 cm. Eye movements were recorded with video-oculography (EyeLink II). Posturography was recorded with the TechnoConcept platform. The results from the "saccade" conditions are compared to the "fixation control" condition (at far and near). Results: The video-oculography results show that subjects performed the fixation or saccade task correctly. Execution of saccades (horizontal or vertical, at near or far distance) had no significant effect on the surface of the center of pressure (CoP), the standard deviation of the lateral body sway, or the variance of speed of the CoP. Moreover, whatever the distance, execution of saccades significantly decreased the standard deviation of the antero-posterior sway. Conclusion: We conclude that saccades, in either direction and at either distance, do not deteriorate postural control; rather, they could reduce sway. Efferent and proprioceptive oculomotor signals, as well as attention, could contribute to maintaining or improving postural stability while making saccades.
Xiaochuan Pan; Kosuke Sawa; Ichiro Tsuda; Minoru Tsukada; Masamichi Sakagami
In: Nature Neuroscience, vol. 11, no. 6, pp. 703–712, 2008.
To adapt to changeable or unfamiliar environments, it is important that animals develop strategies for goal-directed behaviors that meet the new challenges. We used a sequential paired-association task with asymmetric reward schedule to investigate how prefrontal neurons integrate multiple already-acquired associations to predict reward. Two types of reward-related neurons were observed in the lateral prefrontal cortex: one type predicted reward independent of physical properties of visual stimuli and the other encoded the reward value specific to a category of stimuli defined by the task requirements. Neurons of the latter type were able to predict reward on the basis of stimuli that had not yet been associated with reward, provided that another stimulus from the same category was paired with reward. The results suggest that prefrontal neurons can represent reward information on the basis of category and propagate this information to category members that have not been linked directly with any experience of reward.
Sebastian Pannasch; Jens R. Helmert; Katharina Roth; Ann-Katrin Herbold; Henrik Walter
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–19, 2008.
Is there any relationship between visual fixation durations and saccade amplitudes in free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3) and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in the durations of fixations and a decrease for saccadic amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance of the two modes of visual information processing.
Alicia Peltsch; Aaron B. Hoffman; I. T. Armstrong; Giovanna Pari; D. P. Munoz
Saccadic impairments in Huntington's disease Journal Article
In: Experimental Brain Research, vol. 186, no. 3, pp. 457–469, 2008.
Huntington's disease (HD), a progressive neurological disorder involving degeneration in basal ganglia structures, leads to abnormal control of saccadic eye movements. We investigated whether saccadic impairments in HD (N = 9) correlated with clinical disease severity to determine the relationship between saccadic control and basal ganglia pathology. HD patients and age/sex-matched controls performed various eye movement tasks that required the execution or suppression of automatic or voluntary saccades. In the "immediate" saccade tasks, subjects were instructed to look either toward (pro-saccade) or away from (anti-saccade) a peripheral stimulus. In the "delayed" saccade tasks (pro-/anti-saccades; delayed memory-guided sequential saccades), subjects were instructed to wait for a central fixation point to disappear before initiating saccades towards or away from a peripheral stimulus that had appeared previously. In all tasks, mean saccadic reaction time was longer and more variable amongst the HD patients. On immediate anti-saccade trials, the occurrence of direction errors (pro-saccades initiated toward stimulus) was higher in the HD patients. In the delayed tasks, timing errors (eye movements made prior to the go signal) were also greater in the HD patients. The increased variability in saccadic reaction times and occurrence of errors (both timing and direction errors) were highly correlated with disease severity, as assessed with the Unified Huntington's Disease Rating Scale, suggesting that saccadic impairments worsen as the disease progresses. Thus, performance on voluntary saccade paradigms provides a sensitive indicator of disease progression in HD.
Angélica Pérez Fornos; Jörg Sommerhalder; Alexandre Pittard; Avinoam B. Safran; Marco Pelizzone
In: Vision Research, vol. 48, no. 16, pp. 1705–1718, 2008.
Retinal prostheses attempt to restore some amount of vision to totally blind patients. Vision evoked this way will, however, be severely constrained by several factors (e.g., size of the implanted device, number of stimulating contacts, etc.). We used simulations of artificial vision to study how such restrictions on the amount of visual information provided would affect performance on simple pointing and manipulation tasks. Five normal subjects participated in the study. Two tasks were used: pointing at random targets (LEDs task) and arranging wooden chips according to a given model (CHIPs task). Both tasks had to be completed while the amount of visual information was limited by reducing the resolution (number of pixels) and modifying the size of the effective field of view. All images were projected on a 10° × 7° viewing area, stabilised at a given position on the retina. In central vision, the time required to accomplish the tasks remained systematically slower than with normal vision. Accuracy was close to normal at high image resolutions and decreased at 500 pixels or below, depending on the field of view used. Subjects adapted quite rapidly (in fewer than 15 sessions) to performing both tasks in eccentric vision (15° in the lower visual field), after adaptation achieving performance close to that observed in central vision. These results demonstrate that, if vision is restricted to a small visual area stabilised on the retina (as would be the case in a retinal prosthesis), the perception of several hundred retinotopically arranged phosphenes is still needed to restore accurate but slow performance on pointing and manipulation tasks. Considering that present prototypes afford fewer than 100 stimulation contacts and that our simulations represent the most favourable visual input conditions that the user might experience, further development is required to achieve optimal rehabilitation prospects.
Matthew S. Peterson; Melissa R. Beck; Jason H. Wong
In: Psychonomic Bulletin & Review, vol. 15, no. 2, pp. 372–377, 2008.
Recent evidence has indicated that performing a working memory task that loads executive working memory leads to less efficient visual search (Han & Kim, 2004). We explored the role that executive functioning plays in visual search by examining the pattern of eye movements while participants performed a search task with or without a secondary executive working memory task. Results indicate that executive functioning plays two roles in visual search: the identification of objects and the control of the disengagement of attention.
Tobias Pflugshaupt; Thomas Nyffeler; Roman Von Wartburg; Christian W. Hess; René M. Müri
In: Journal of Neurology, Neurosurgery and Psychiatry, vol. 79, no. 4, pp. 474–477, 2008.
Despite their relevance for locomotion and social interaction in everyday situations, little is known about the cortical control of vertical saccades in humans. Results from microstimulation studies indicate that both frontal eye fields (FEFs) contribute to these eye movements. Here, we present a patient with a damaged right FEF, who hardly made vertical saccades during visual exploration. This finding suggests that, for the cortical control of exploratory vertical saccades, integrity of both FEFs is indeed important.
M. Niwa; J. Ditterich
In: Journal of Neuroscience, vol. 28, no. 17, pp. 4435–4445, 2008.
Previous studies and models of perceptual decision making have largely focused on binary choices. However, we often have to choose from multiple alternatives. To study the neural mechanisms underlying multialternative decision making, we have asked human subjects to make perceptual decisions between multiple possible directions of visual motion. Using a multicomponent version of the random-dot stimulus, we were able to control experimentally how much sensory evidence we wanted to provide for each of the possible alternatives. We demonstrate that this task provides a rich quantitative dataset for multialternative decision making, spanning a wide range of accuracy levels and mean response times. We further present a computational model that can explain the structure of our behavioral dataset. It is based on the idea of a race between multiple integrators to a decision threshold. Each of these integrators accumulates net sensory evidence for a particular choice, provided by linear combinations of the activities of decision-relevant pools of sensory neurons.
Lauri Nummenmaa; Jussi Hirvonen; Riitta Parkkola; Jari K. Hietanen
In: NeuroImage, vol. 43, no. 3, pp. 571–580, 2008.
Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested whether emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, the thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other people's mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.
Thomas Nyffeler; Dario Cazzoli; Pascal Wurtz; Mathias Lüthi; Roman Von Wartburg; Silvia Chaves; Anouk Déruaz; Christian W. Hess; René M. Müri
In: European Journal of Neuroscience, vol. 27, no. 7, pp. 1809–1813, 2008.
The right posterior parietal cortex (PPC) is critically involved in visual exploration behaviour, and damage to this area may lead to neglect of the left hemispace. We investigated whether neglect-like visual exploration behaviour could be induced in healthy subjects using theta burst repetitive transcranial magnetic stimulation (rTMS). To this end, one continuous train of theta burst rTMS was applied over the right PPC in 12 healthy subjects prior to a visual exploration task where colour photographs of real-life scenes were presented on a computer screen. In a control experiment, stimulation was also applied over the vertex. Eye movements were measured, and the distribution of visual fixations in the left and right halves of the screen was analysed. In comparison to the performance of 28 control subjects without stimulation, theta burst rTMS over the right PPC, but not the vertex, significantly decreased cumulative fixation duration in the left screen-half and significantly increased cumulative fixation duration in the right screen-half for a time period of 30 min. These results suggest that theta burst rTMS is a reliable method of inducing transient neglect-like visual exploration behaviour.
Matthew H. Phillips; Jay A. Edelman
In: Vision Research, vol. 48, no. 21, pp. 2184–2192, 2008.
Phillips and Edelman [Phillips, M. H., & Edelman, J. A. (2008). The dependence of visual scanning performance on saccade, fixation, and perceptual metrics. Vision Research, 48(7), 926-936] presented evidence that performance variability in a visual scanning task depends on oculomotor variables related to saccade amplitude rather than fixation duration, and that saccade-related metrics reflect perceptual span. Here, we extend these results by showing that even for extremely difficult searches, trial-to-trial performance variability still depends on saccade-related metrics and not fixation duration. We also show that scanning speed is faster for horizontal than for vertical searches, and that these differences again derive from differences in saccade-based metrics and not from differences in fixation duration. We find perceptual span to be larger for horizontal than vertical searches, and approximately symmetric about the line of gaze.
Elmar H. Pinkhardt; Reinhart Jürgens; Wolfgang Becker; Federica Valdarno; Albert C. Ludolph; Jan Kassubek
In: Journal of Neurology, vol. 255, no. 12, pp. 1916–1925, 2008.
Vertical gaze palsy is a highly relevant clinical sign in parkinsonian syndromes. As the eponymous sign of progressive supranuclear palsy (PSP), it is one of the core features in the diagnosis of this disease. Recent studies have suggested a further differentiation of PSP into Richardson's syndrome (RS) and PSP-parkinsonism (PSP-P). The aim of this study was to search for oculomotor abnormalities in the PSP-P subset of a sample of PSP patients and to compare these findings with those of (i) RS patients, (ii) patients with idiopathic Parkinson's disease (IPD), and (iii) a control group. Twelve cases of RS, 5 cases of PSP-P, and 27 cases of IPD were examined by use of video-oculography (VOG) and compared to 23 healthy normal controls. Both groups of PSP patients (RS, PSP-P) had significantly slower saccades than either IPD patients or controls, whereas no differences in saccadic peak velocity were found between the two PSP groups or in the comparison of IPD with controls. RS and PSP-P were also similar to each other with regard to smooth pursuit eye movements (SPEM), with both groups having significantly lower gain than controls (except for downward pursuit); however, SPEM gain exhibited no consistent difference between PSP and IPD. A correlation between eye movement data and clinical data (Hoehn & Yahr scale or disease duration) could not be observed. As PSP-P patients were still in an early stage of the disease, when a differentiation from IPD is difficult on clinical grounds, the clear-cut separation between PSP-P and IPD obtained by measuring saccade velocity suggests that VOG could contribute to the early differentiation between these patient groups.
Alexander Pollatsek; Timothy J. Slattery; Barbara J. Juhasz
In: Language and Cognitive Processes, vol. 23, no. 7-8, pp. 1133–1158, 2008.
Two experiments compared how relatively long novel prefixed words (e.g., overfarm) and existing prefixed words were processed in reading. The use of novel prefixed words allows one to examine the roles of whole-word access and decompositional processing in the processing of non-novel prefixed words. The two experiments found that, although there was a large cost to novelty (e.g., gaze durations were about 100 ms longer for novel prefixed words), the effect of the frequency of the root morpheme on fixation measures was about the same for novel and non-novel prefixed words for most measures. This finding rules out a ("horse-race") dual-route model of processing for existing prefixed words in which the whole-word and decompositional routes are parallel and independent, as such a model would predict a substantially larger root frequency effect for novel words (where whole-word processes do not exist). The most likely model to explain the processing of prefixed words is a parallel interactive one.
Hans P. Op De Beeck; Jennifer A. Deutsch; Wim Vanduffel; Nancy Kanwisher; James J. DiCarlo
In: Cerebral Cortex, vol. 18, no. 7, pp. 1676–1694, 2008.
The inferior temporal (IT) cortex in monkeys plays a central role in visual object recognition and learning. Previous studies have observed patches in IT cortex with strong selectivity for highly familiar object classes (e.g., faces), but the principles behind this functional organization are largely unknown due to the many properties that distinguish different object classes. To unconfound shape from meaning and memory, we scanned monkeys with functional magnetic resonance imaging while they viewed classes of initially novel objects. Our data revealed a topography of selectivity for these novel object classes across IT cortex. We found that this selectivity topography was highly reproducible and remarkably stable across a 3-month interval during which monkeys were extensively trained to discriminate among exemplars within one of the object classes. Furthermore, this selectivity topography was largely unaffected by changes in behavioral task and object retinal position, both of which preserve shape. In contrast, it was strongly influenced by changes in object shape. The topography was partially related to, but not explained by, the previously described pattern of face selectivity. Together, these results suggest that IT cortex contains a large-scale map of shape that is largely independent of meaning, familiarity, and behavioral task.
Jorge Otero-Millan; Xoana G. Troncoso; Stephen L. Macknik; Ignacio Serrano-Pedraza; Susana Martinez-Conde
In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008.
Microsaccades are known to occur during prolonged visual fixation, but it is a matter of controversy whether they also happen during free-viewing. Here we set out to determine: 1) whether microsaccades occur during free visual exploration and visual search, 2) whether microsaccade dynamics vary as a function of visual stimulation and viewing task, and 3) whether saccades and microsaccades share characteristics that might argue in favor of a common saccade-microsaccade oculomotor generator. Human subjects viewed naturalistic stimuli while performing various viewing tasks, including visual exploration, visual search, and prolonged visual fixation. Their eye movements were simultaneously recorded with high precision. Our results show that microsaccades are produced during the fixation periods that occur during visual exploration and visual search. Microsaccade dynamics during free-viewing moreover varied as a function of visual stimulation and viewing task, with increasingly demanding tasks resulting in increased microsaccade production. Moreover, saccades and microsaccades had comparable spatiotemporal characteristics, including the presence of equivalent refractory periods between all pair-wise combinations of saccades and microsaccades. Thus our results indicate a microsaccade-saccade continuum and support the hypothesis of a common oculomotor generator for saccades and microsaccades.
Manabu Shikauchi; Shin Ishii; Tomohiro Shibata
Prediction of aperiodic target sequences by saccades Journal Article
In: Behavioural Brain Research, vol. 189, no. 2, pp. 325–331, 2008.
Through recording of saccadic eye movements, we investigated whether humans can achieve prediction of aperiodic target sequences that cannot be predicted solely by memorizing short-length patterns of the target sequence. We proposed a novel experimental paradigm in which Auto-Regressive (AR) processes are used to generate aperiodic target sequences. If subjects can fully utilize knowledge of the AR dynamics that generated the target sequence, optimal prediction can be made. As a control task, a completely unpredictable (random) target sequence was generated by shuffling the AR sequences. Behavioral analysis suggested that prediction of the next target position in the AR sequence was significantly more successful than random guessing or the optimal guess for the random sequence. Although their performance was not optimal, learning of the AR dynamics was observed for first-order AR sequences, suggesting that the subjects attempted to predict the next target position based on partially identified AR dynamics.
Mariano Sigman; Jérôme Sackur; Antoine Del Cul; Stanislas Dehaene
In: Journal of Vision, vol. 8, no. 1, pp. 1–10, 2008.
A briefly presented target shape can be made invisible by the subsequent presentation of a mask that replaces the target. While varying the target-mask interval in order to investigate perception near the consciousness threshold, we discovered a novel visual illusion. At some intervals, the target is clearly visible, but its location is misperceived. By manipulating the mask's size and target's position, we demonstrate that the perceived target location is always displaced to the boundary of a virtual surface defined by the mask contours. Thus, mutual exclusion of surfaces appears as a cause of masking.
Michael A. Silver; Amitai Shenhav; Mark D'Esposito
In: Neuron, vol. 60, no. 5, pp. 904–914, 2008.
Animal studies have shown that acetylcholine decreases excitatory receptive field size and spread of excitation in early visual cortex. These effects are thought to be due to facilitation of thalamocortical synaptic transmission and/or suppression of intracortical connections. We have used functional magnetic resonance imaging (fMRI) to measure the spatial spread of responses to visual stimulation in human early visual cortex. The cholinesterase inhibitor donepezil was administered to normal healthy human subjects to increase synaptic levels of acetylcholine in the brain. Cholinergic enhancement with donepezil decreased the spatial spread of excitatory fMRI responses in visual cortex, consistent with a role of acetylcholine in reducing excitatory receptive field size of cortical neurons. Donepezil also reduced response amplitude in visual cortex, but the cholinergic effects on spatial spread were not a direct result of reduced amplitude. These findings demonstrate that acetylcholine regulates spatial integration in human visual cortex.
Tim J. Smith; John M. Henderson
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–17, 2008.
Although we experience the visual world as a continuous, richly detailed space, we often fail to notice large and significant changes. Such change blindness has been demonstrated for local object changes and for changes to the visual form of whole images; however, it is assumed that total changes from one image to another would be easily detected. Film editing presents such total changes several times a minute, yet we rarely seem to be aware of them, a phenomenon we refer to here as edit blindness. This phenomenon has never been empirically demonstrated, even though film editors believe they have at their disposal techniques that induce edit blindness, the Continuity Editing Rules. In the present study we tested the relationship between Continuity Editing Rules and edit blindness by instructing participants to detect edits while watching excerpts from feature films. Eye movements were recorded during the task. The results indicate that edits constructed according to the Continuity Editing Rules result in greater edit blindness than edits not adhering to the rules. A quarter of edits joining two viewpoints of the same scene were undetected, and this increased to a third when the edit coincided with a sudden onset of motion. Some cuts may be missed due to suppression of the cut transients when they coincide with eyeblinks or saccadic eye movements, but the majority seem to be due to inattentional blindness as viewers attend to the depicted narrative. In conclusion, this study presents the first empirical evidence of edit blindness and its relationship to natural attentional behaviour during dynamic scene viewing.
J. F. Soechting; Martha Flanders
Extrapolation of visual motion for manual interception Journal Article
In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2956–2967, 2008.
A frequent goal of hand movement is to touch a moving target or to make contact with a stationary object that is in motion relative to the moving head and body. This process requires a prediction of the target's motion, since the initial direction of the hand movement anticipates target motion. This experiment was designed to define the visual motion parameters that are incorporated in this prediction of target motion. On seeing a go signal (a change in target color), human subjects slid the right index finger along a touch-sensitive computer monitor to intercept a target moving along an unseen circular or oval path. The analysis focused on the initial direction of the interception movement, which was found to be influenced by the time required to intercept the target and the target's distance from the finger's starting location. Initial direction also depended on the curvature of the target's trajectory in a manner that suggested that this parameter was underestimated during the process of extrapolation. The pattern of smooth pursuit eye movements suggests that the extrapolation of visual target motion was based on local motion cues around the time of the onset of hand movement, rather than on a cognitive synthesis of the target's pattern of motion.
Alexandra Soliman; Gillian A. O'Driscoll; Jens Pruessner; Anne Lise V. Holahan; Isabelle Boileau; Danny Gagnon; Alain Dagher
In: Neuropsychopharmacology, vol. 33, no. 8, pp. 2033–2041, 2008.
Drugs that increase dopamine levels in the brain can cause psychotic symptoms in healthy individuals and worsen them in schizophrenic patients. Psychological stress also increases dopamine release and is thought to play a role in susceptibility to psychotic illness. We hypothesized that healthy individuals at elevated risk of developing psychosis would show greater striatal dopamine release than controls in response to stress. Using positron emission tomography and [(11)C]raclopride, we measured changes in synaptic dopamine concentrations in 10 controls and 16 psychometric schizotypes; 9 with perceptual aberrations (PerAb, ie positive schizotypy) and 7 with physical anhedonia (PhysAn, ie negative schizotypy). [(11)C]Raclopride binding potential was measured during a psychological stress task and a sensory-motor control. All three groups showed significant increases in self-reported stress and cortisol levels between the stress and control conditions. However, only the PhysAn group showed significant stress-induced dopamine release. Dopamine release in the entire sample was significantly negatively correlated with smooth pursuit gain, an endophenotype linked to frontal lobe function. Our findings suggest the presence of abnormalities in the dopamine response to stress in negative symptom schizotypy, and provide indirect evidence of a link to frontal function.
Leah Roberts; Marianne Gullberg; Peter Indefrey
In: Studies in Second Language Acquisition, vol. 30, pp. 333–357, 2008.
This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited an L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in the eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.
Anne Roefs; Anita Jansen; Sofie Moresi; Paul Willems; Sarah Grootel; Anouk Borgh
Looking good: BMI, attractiveness bias and visual attention. Journal Article
In: Appetite, vol. 51, pp. 552–555, 2008.
The aim of this study was to examine attentional bias when viewing one's own body and a control body, and to relate this bias to body weight and attractiveness ratings. Participants were 51 normal-weight female students with an unrestrained eating style. They were successively shown pictures of their own body and a control body for 30 s each, while their eye movements (overt attention) were being measured. Afterwards, participants were asked to identify the most attractive and most unattractive body part of both their own body and the control body. The results show that with increasing BMI, and when participants rated their own body as relatively unattractive, they attended relatively more to their self-identified most unattractive body part and to the control body's most attractive body part. This increasingly negative bias in visual attention for bodies may maintain and/or exacerbate body dissatisfaction.
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 2, pp. 353–368, 2008.
The flow of activation from concepts to phonological forms within the word production system was examined in 3 experiments. In Experiment 1, participants named pictures while ignoring superimposed distractor pictures that were semantically related, phonologically related, or unrelated. Eye movements and naming latencies were recorded. The distractor pictures affected the latencies of gaze shifting and vocal naming. The magnitude of the phonological effects increased linearly with latency, excluding lapses of attention as the cause of the effects. In Experiment 2, no distractor effects were obtained when both pictures were named. When pictures with superimposed distractor words were named or the words were read in Experiment 3, the words influenced the latencies of gaze shifting and picture naming, but the pictures yielded no such latency effects in word reading. The picture-word asymmetry was obtained even with equivalent reading and naming latencies. The picture-picture effects suggest that activation spreads continuously from concepts to phonological forms, whereas the picture-word asymmetry indicates that the amount of activation is limited and task dependent.
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 6, pp. 1580–1598, 2008.
Controversy exists about whether dual-task interference from word planning reflects a structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment 1), or superimposed onto, the pictures (Experiments 2 and 3); or they responded to tones (Experiment 4). Pictures and arrows/tones were presented at stimulus onset asynchronies of 0, 300, and 1,000 ms. Earlier research showed that vocal responding hampers auditory perception, which predicts earlier shifts of attention to the tones than to the arrows. Word planning yielded dual-task interference. Phonological preparation reduced the latencies of picture naming and gaze shifting. The preparation benefit was propagated into the latencies of the manual responses to the arrows but not to the tones. The malleability of the interference supports the attentional control account. This conclusion was corroborated by computer simulations showing that an extension of WEAVER++ (A. Roelofs, 2003) with assumptions about the attentional control of tasks quantitatively accounts for the latencies of vocal responding, gaze shifting, and manual responding.
Martin Rolfs; Reinhold Kliegl; Ralf Engbert
In: Journal of Vision, vol. 8, no. 11, pp. 1–23, 2008.
Microsaccades are one component of the small eye movements that constitute fixation. Their implementation in the oculomotor system is unknown. To better understand the physiological and mechanistic processes underlying microsaccade generation, we studied microsaccadic inhibition, a transient drop of microsaccade rate, in response to irrelevant visual and auditory stimuli. Quantitative descriptions of the time course and strength of inhibition revealed a strong dependence of microsaccadic inhibition on stimulus characteristics. In Experiment 1, microsaccadic inhibition occurred sooner after auditory than after visual stimuli and after luminance-contrast than after color-contrast visual stimuli. Moreover, microsaccade amplitude strongly decreased during microsaccadic inhibition. In Experiment 2, the latency of microsaccadic inhibition increased with decreasing luminance contrast. We develop a conceptual model of microsaccade generation in which microsaccades result from fixation-related activity in a motor map coding for both fixation and saccades. In this map, fixation is represented at the central site. Saccades are generated by activity in the periphery, their amplitude increasing with eccentricity. The activity at the central, fixation-related site of the map predicts the rate of microsaccades as well as their amplitude and direction distributions. This model represents a framework for understanding the dynamics of microsaccade behavior in a broad range of tasks.
Martin Rolfs; Jochen Laubrock; Reinhold Kliegl
In: Journal of Eye Movement Research, vol. 1, no. 3, pp. 1–8, 2008.
Fixations consist of small movements including microsaccades, i.e., rapid flicks in eye position that displace the retinal image by up to 1 degree of visual angle. Recently, we showed in a delayed-saccade task (1) that the rate of microsaccades decreased in the course of saccade preparation and (2) that microsaccades occurring around the time of a go signal were associated with prolonged saccade latencies (Rolfs et al., 2006). A re-analysis of the same data set revealed a strong dependence of these findings on microsaccade amplitude. First, microsaccade amplitude dropped to a minimum just before the generation of a saccade. Second, the delay of response saccades was a function of microsaccade amplitude: microsaccades with larger amplitudes were followed by longer response latencies. These findings were predicted by a recently proposed model that attributes microsaccade generation to fixation-related activity in a saccadic motor map that is in competition with the generation of large saccades (Rolfs et al., 2008). We propose, therefore, that microsaccade statistics provide a behavioral correlate of fixation-related activity in the oculomotor system.
N. N. J. Rommelse; Stefan Van der Stigchel; J. Witlox; C. J. A. Geldof; J. -B. Deijen; Jan Theeuwes; Jaap Oosterlaan; J. A. Sergeant
In: Journal of Neural Transmission, vol. 115, no. 2, pp. 249–260, 2008.
Few studies have assessed visuo-spatial working memory and inhibition in attention-deficit/hyperactivity disorder (ADHD) by recording saccades and consequently little additional knowledge has been gathered on oculomotor functioning in ADHD. Moreover, this is the first study to report the performance of non-affected siblings of children with ADHD, which may shed light on the familiality of deficits. A total of 14 boys with ADHD, 18 non-affected brothers, and 15 control boys aged 7-14 years, were administered a memory-guided saccade task with delays of three and seven seconds. Familial deficits were found in accuracy of visuo-spatial working memory, percentage of anticipatory saccades, and tendency to overshoot saccades relative to controls. These findings suggest memory-guided saccade deficits may relate to a familial predisposition for ADHD.
Gianluca U. Sorrento; Denise Y. P. Henriques
Reference frame conversions for repeated arm movements Journal Article
In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2968–2984, 2008.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.
Jan L. Souman; Tom C. A. Freeman
In: Journal of Vision, vol. 8, no. 14, pp. 1–14, 2008.
Smooth pursuit eye movements add motion to the retinal image. To compensate, the visual system can combine estimates of pursuit velocity and retinal motion to recover motion with respect to the head. Little attention has been paid to the temporal characteristics of this compensation process. Here, we describe how the latency difference between the eye movement signal and the retinal signal can be measured for motion perception during sinusoidal pursuit. In two experiments, observers compared the peak velocity of a motion stimulus presented in pursuit and fixation intervals. Both the pursuit target and the motion stimulus moved with a sinusoidal profile. The phase and amplitude of the motion stimulus were varied systematically in different conditions, along with the amplitude of pursuit. The latency difference between the eye movement signal and the retinal signal was measured by fitting the standard linear model and a non-linear variant to the observed velocity matches. We found that the eye movement signal lagged the retinal signal by a small amount. The non-linear model fitted the velocity matches better than the linear one and this difference increased with pursuit amplitude. The results support previous claims that the visual system estimates eye movement velocity and retinal velocity in a non-linear fashion and that the latency difference between the two signals is small.
David Souto; Dirk Kerzel
In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008.
Many studies indicate that saccades are necessarily preceded by a shift of attention to the target location. There is no direct evidence for the same coupling during smooth pursuit. If smooth pursuit and attention were coupled, pursuit onset should be delayed whenever attention is focused on a stationary, non-target location. To test this hypothesis, observers were instructed to shift their attention to a peripheral location according to a location cue (Experiments 1 and 2) or a symbolic cue (Experiment 3) around the time of smooth pursuit initiation. Attending to static targets had only negligible effects on smooth pursuit latencies and the early open-loop response but lowered pursuit velocity substantially around the onset of closed-loop pursuit. Around this time, eye velocity reflected the competition between the to-be-tracked and to-be-attended object motion, entailing a reduction of eye velocity by 50% compared to the single-task condition. The precise time course of attentional modulation of smooth pursuit initiation was at odds with the idea that an attention shift must precede any voluntary eye movement. Finally, the initial catch-up saccades were strongly delayed when attention was diverted from the pursuit target. Implications for models of target selection for pursuit and saccades are discussed.
Miriam Spering; Anna Montagnini; Karl R. Gegenfurtner
In: Journal of Vision, vol. 8, no. 15, pp. 1–19, 2008.
Visual processing of color and luminance for smooth pursuit and saccadic eye movements was investigated using a target selection paradigm. In two experiments, stimuli were varied along the dimensions color and luminance, and selection of the more salient target was compared in pursuit and saccades. Initial pursuit was biased in the direction of the luminance component whereas saccades showed a relative preference for color. An early pursuit response toward luminance was often reversed to color by a later saccade. Observers' perceptual judgments of stimulus salience, obtained in two control experiments, were clearly biased toward luminance. This choice bias in perceptual data implies that the initial short-latency pursuit response agrees with perceptual judgments. In contrast, saccades, which have a longer latency than pursuit, do not seem to follow the perceptual judgment of salience but instead show a stronger relative preference for color. These substantial differences in target selection imply that target selection processes for pursuit and saccadic eye movements use distinctly different weights for color and luminance stimuli.
Rike Steenken; Hans Colonius; Adele Diederich; Stefan Rach
In: Brain Research, vol. 1220, pp. 150–156, 2008.
Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Observed SRT reductions typically range between 10 and 50 ms and decrease as spatial disparity between the stimuli increases. Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory accessory. Here we probe this hypothesis by presenting an additional white-noise masker background of 3 s duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident vs. disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. As verified in a separate auditory localization task, localizability of the auditory accessory decreases with masker level. The SRT results are accounted for by a conceptual model positing that increasing masker level enlarges the area of possible auditory stimulus locations: it implies that perceivable distances decrease for disparate stimulus configurations and increase for coincident stimulus pairs.
Rike Steenken; Adele Diederich; Hans Colonius
In: Neuroscience Letters, vol. 435, no. 1, pp. 78–83, 2008.
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object.
Timo Stein; Ignacio Vallines; Werner X. Schneider
In: NeuroReport, vol. 19, no. 13, pp. 1277–1281, 2008.
When two masked targets are presented in a rapid sequence, attentional limitations are reflected in reduced identification accuracy for the second target (T2). We used functional magnetic resonance imaging to disentangle the distinct neural substrates of T2 processing during this attentional blink phenomenon. Spatially separating the two targets allows the retinotopic localization of the different stimuli's encoding sites in primary visual cortex (V1) and thus enables activation elicited by each target to be differentially measured in V1. The encoding location of the second target mirrored T2 identification accuracy in a retinotopically specific manner. These results are the first evidence for effects of behavioral performance on hemodynamic responses in V1 under conditions of the attentional blink.
Paul Sauleau; Pierre Pollak; Paul Krack; Jean Hubert Courjon; Alain Vighetto; Alim Louis Benabid; Denis Pélisson; Caroline Tilikete
In: Clinical Neurophysiology, vol. 119, no. 8, pp. 1857–1863, 2008.
Objective: To determine the effect of subthalamic stimulation on visually triggered eye and head movements in patients with Parkinson's disease (PD). Methods: We compared the gain and latency of visually triggered eye and head movements in 12 patients bilaterally implanted into the subthalamic nucleus (STN) for severe PD and six age-matched control subjects. Visually triggered movements of the eye (head restrained), and of eye and head (head unrestrained), were recorded in the absence of dopaminergic medication. Bilateral stimulation was turned OFF and then turned ON with the voltage and contacts used in the chronic setting. The latency was determined from the beginning of initial horizontal eye movements relative to the target onset, and the gain was defined as the ratio of the amplitude of the initial movement to the amplitude of the target movement. Results: Without stimulation, the initiation of the head movement was significantly delayed in patients and the gain of head movement was reduced. Our patients also presented significantly prolonged latencies and hypometry of visually triggered saccades in the head-fixed condition and of gaze in the head-free condition. Bilateral STN stimulation with therapeutic parameters improved performance of orienting gaze, eye and head movements towards the controls' level. Conclusions: These results demonstrate that visually triggered saccades and orienting eye-head movements are impaired in the advanced stage of PD. In addition, subthalamic stimulation enhances amplitude and shortens latency of these movements. Significance: These results are likely explained by alteration of the information processed by the superior colliculus (SC), a pivotal visuomotor structure involved in both voluntary and reflexive saccades. Improvement of movements with stimulation of the STN may be related to its positive input either on the STN-Substantia Nigra-SC pathway or on the parietal cortex-SC pathway.
Christoph Scheepers; Frank Keller; Mirella Lapata
In: Cognitive Psychology, vol. 56, no. 1, pp. 1–29, 2008.
Metonymic verbs like start or enjoy often occur with artifact-denoting complements (e.g., The artist started the picture) although semantically they require event-denoting complements (e.g., The artist started painting the picture). In the case of artifact-denoting objects, the complement is assumed to be type-shifted (or coerced) into an event to conform to the verb's semantic restrictions. Psycholinguistic research has provided evidence for this kind of enriched composition: readers experience processing difficulty when faced with metonymic constructions compared to non-metonymic controls. However, slower reading times for metonymic constructions could also be due to competition between multiple interpretations that are being entertained in parallel whenever a metonymic verb is encountered. Using the visual-world paradigm, we devised an experiment which enabled us to determine the time course of metonymic interpretation in relation to non-metonymic controls. The experiment provided evidence in favor of a non-competitive, serial coercion process.
Anne-Catherine Scherlen; Jean-Baptiste Bernard; Aurélie Calabrèse; Eric Castet
Page mode reading with simulated scotomas: Oculo-motor patterns Journal Article
In: Vision Research, vol. 48, no. 18, pp. 1870–1878, 2008.
This study investigated the relationship between reading speed and oculo-motor parameters when normally sighted observers had to read single sentences with an artificial macular scotoma. Using multiple regression analysis, our main result shows that two significant predictors, number of saccades per sentence followed by average fixation duration, account for 94% of reading speed variance: reading speed decreases when number of saccades and fixation duration increase. The number of letters per forward saccade (L/FS), which was measured directly in contrast to previous studies, is not a significant predictor. The results suggest that, independently of the size of saccades, some or all portions of a sentence are temporally integrated across an increasing number of fixations as reading speed is reduced.
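The multiple-regression logic in the abstract above (two oculomotor predictors jointly accounting for most of the reading-speed variance) can be sketched as follows on synthetic data. All variable names, coefficients, and numbers here are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # illustrative number of sentences

# Synthetic oculomotor predictors (not the study's data).
n_saccades = rng.uniform(10, 60, n)      # saccades per sentence
fix_duration = rng.uniform(200, 400, n)  # average fixation duration (ms)

# Reading speed (wpm) decreases as both predictors increase, plus noise.
reading_speed = 300.0 - 3.0 * n_saccades - 0.4 * fix_duration + rng.normal(0, 5, n)

# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), n_saccades, fix_duration])
beta, *_ = np.linalg.lstsq(X, reading_speed, rcond=None)

# R^2: proportion of reading-speed variance the two predictors explain.
residuals = reading_speed - X @ beta
r2 = 1.0 - residuals.var() / reading_speed.var()
```

With both slopes negative, the fitted model reproduces the qualitative pattern reported: reading speed falls as the number of saccades and the fixation duration rise, and the two predictors together explain most of the variance.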
Laura Schmalzl; Romina Palermo; Melissa J. Green; Ruth Brunsdon; Max Coltheart
In: Cognitive Neuropsychology, vol. 25, no. 5, pp. 704–729, 2008.
In the current report we describe a successful training study aimed at improving recognition of a set of familiar face photographs in K., a 4-year-old girl with congenital prosopagnosia (CP). A detailed assessment of K.'s face-processing skills showed a deficit in structural encoding, most pronounced in the processing of facial features within the face. In addition, eye movement recordings revealed that K.'s scan paths for faces were characterized by a large percentage of fixations directed to areas outside the internal core features (i.e., eyes, nose, and mouth), in particular by poor attendance to the eye region. Following multiple baseline assessments, training focused on teaching K. to reliably recognize a set of familiar face photographs by directing visual attention to specific characteristics of the internal features of each face. The training significantly improved K.'s ability to recognize the target faces, with her performance being flawless immediately after training as well as at a follow-up assessment 1 month later. In addition, eye movement recordings following training showed a significant change in K.'s scan paths, with a significant increase in the percentage of fixations directed to the internal features, particularly the eye region. Encouragingly, not only was the change in scan paths observed for the set of familiar trained faces, but it generalized to a set of faces that was not presented during training. In addition to documenting significant training effects, our study raises the intriguing question of whether abnormal scan paths for faces may be a common factor underlying face recognition impairments in childhood CP, an issue that has not been explored so far.
Michael Schneider; Angela Heine; Verena Thaler; Joke Torbeyns; Bert De Smedt; Lieven Verschaffel; Arthur M. Jacobs; Elsbeth Stern
In: Cognitive Development, vol. 23, no. 3, pp. 409–422, 2008.
The number line estimation task captures central aspects of children's developing number sense, that is, their intuitions for numbers and their interrelations. Previous research used children's answer patterns and verbal reports as evidence of how they solve this task. In the present study we investigated to what extent eye movements recorded during task solution reflect children's use of the number line. By means of a cross-sectional design with 66 children from Grades 1, 2, and 3, we show that eye-tracking data (a) reflect grade-related increase in estimation competence, (b) are correlated with the accuracy of manual answers, (c) relate, in Grade 2, to children's addition competence, (d) are systematically distributed over the number line, and (e) replicate previous findings concerning children's use of counting strategies and orientation-point strategies. These findings demonstrate the validity and utility of eye-tracking data for investigating children's developing number sense and estimation competence.
Werner X. Schneider; Ellen Matthias; Melissa L. -H. Võ
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–13, 2008.
The study presented here introduces a new approach to the investigation of transsaccadic memory for objects in naturalistic scenes. Participants were tested with a whole-report task from which — based on the theory of visual attention (TVA) — processing efficiency parameters were derived, namely visual short-term memory storage capacity and visual processing speed. By combining these processing efficiency parameters with transsaccadic memory data from a previous study, we were able to take a closer look at the contribution of visual short-term memory capacity and processing speed to the establishment of visual long-term memory representations during scene viewing. Results indicate that especially the VSTM storage capacity plays a major role in the generation of transsaccadic visual representations of naturalistic scenes.
Alexander C. Schütz; Doris I. Braun; Dirk Kerzel; Karl R. Gegenfurtner
Improved visual sensitivity during smooth pursuit eye movements Journal Article
In: Nature Neuroscience, vol. 11, no. 10, pp. 1211–1216, 2008.
When we view the world around us, we constantly move our eyes. This brings objects of interest into the fovea and keeps them there, but visual sensitivity has been shown to deteriorate while the eyes are moving. Here we show that human sensitivity for some visual stimuli is improved during smooth pursuit eye movements. Detection thresholds for briefly flashed, colored stimuli were 16% lower during pursuit than during fixation. Similarly, detection thresholds for luminance-defined stimuli of high spatial frequency were lowered. These findings suggest that the pursuit-induced sensitivity increase may have its neuronal origin in the parvocellular retino-thalamic system. This implies that the visual system not only uses feedback connections to improve processing for locations and objects being attended to, but that a whole processing subsystem can be boosted. During pursuit, facilitation of the parvocellular system may reduce motion blur for stationary objects and increase sensitivity to speed changes of the tracked object.
Tamara A. Russell; Melissa J. Green; Ian Simpson; Max Coltheart
In: Schizophrenia Research, vol. 103, no. 1-3, pp. 248–256, 2008.
The study examined changes in visual attention in schizophrenia following training with a social-cognitive remediation package designed to improve facial emotion recognition (the Micro-Expression Training Tool; METT). Forty out-patients with schizophrenia were randomly allocated to active training (METT; n = 26), or repeated exposure (RE; n = 14); all completed an emotion recognition task with concurrent eye movement recording. Emotion recognition accuracy was significantly improved in the METT group, and this effect was maintained after one week. Immediately following training, the METT group directed more eye movements within feature areas of faces (i.e., eyes, nose, mouth) compared to the RE group. The number of fixations directed to feature areas of faces was positively associated with emotion recognition accuracy prior to training. After one week, the differences between METT and RE groups in viewing feature areas of faces were reduced to trends. However, within group analyses of the METT group revealed significantly increased number of fixations to, and dwell time within, feature areas following training which were maintained after one week. These results provide the first evidence that improvements in emotion recognition following METT training are associated with changes in visual attention to the feature areas of emotional faces. These findings support the contribution of visual attention abnormalities to emotion recognition impairment in schizophrenia, and suggest that one mechanism for improving emotion recognition involves re-directing visual attention to relevant features of emotional faces.
Stan Van Pelt; W. Pieter Medendorp
Updating target distance across eye movements in depth Journal Article
In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2281–2290, 2008.
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage, during the subsequent reference frame transformations that are involved in reaching.
Wieske Van Zoest; Mieke Donk
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 10, pp. 1553–1572, 2008.
Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature.
Wieske Van Zoest; Stefan Van der Stigchel; Jason J. S. Barton
In: Experimental Brain Research, vol. 186, no. 3, pp. 431–442, 2008.
The present study investigated how the presence of a visual signal at the saccade goal influences saccade trajectory deviations, and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task, where a visual target was present at the saccade goal, was compared to performance in anti- and memory-guided saccade tasks. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display.
André Vandierendonck; Maud Deschuyteneer; Ann Depoorter; Denis Drieghe
In: Psychological Research, vol. 72, no. 1, pp. 1–11, 2008.
Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control.
Suiping Wang; Hsuan-Chih Chen; Jinmian Yang; Lei Mo
In: Language and Cognitive Processes, vol. 23, no. 2, pp. 241–257, 2008.
An eye-movement study was conducted to examine whether Chinese readers immediately activate and integrate related background information during discourse comprehension. Participants were asked to read short passages, each containing a critical word that fitted well within the local context but was inconsistent or neutral with background information from the early part of the passage. This manipulation of textual consistency produced reliable effects on both first-pass reading fixations in the target region and second-pass reading times in the pre-target and target regions. These results indicate that integration processes start very rapidly in reading text in a writing system with properties that encourage delayed processing, suggesting that immediate processing is likely a universal principle in discourse comprehension.
Z. I. Wang; Louis F. Dell'Osso
In: Vision Research, vol. 48, no. 12, pp. 1409–1419, 2008.
Our purpose was to perform a systematic study of the changes in target acquisition time following the four-muscle tenotomy procedure by comparing predictions from the behavioral ocular motor system (OMS) model with data from infantile nystagmus syndrome (INS) patients. We studied five INS patients who underwent only tenotomy at the enthesis and reattachment at the original insertion of each (previously unoperated) horizontal rectus muscle for their INS treatment. We measured their pre- and post-tenotomy target acquisition changes using data from infrared reflection and high-speed digital video. Three key aspects were calculated and analyzed: the saccadic latency (Ls), the time to target acquisition after the target jump (Lt), and the normalized stimulus time within the cycle. Analyses were performed in the MATLAB environment (The MathWorks, Natick, MA) using OMLAB software (OMtools, available from http://www.omlab.org). Model simulations were performed in the MATLAB Simulink environment. The model simulation suggested an Lt reduction due to an overall foveation-quality improvement. Consistent with that prediction, improvement in Lt, ranging from ∼200 ms to ∼500 ms (average ∼280 ms), was documented in all five patients post-tenotomy. The Lt improvement was not a result of a reduced Ls. INS patients acquired step-target stimuli faster post-tenotomy. This target acquisition improvement may be due to the elevated foveation quality resulting in less inherent variation in the input to the OMS. A refined behavioral OMS model, with "fast" and "slow" motor neuron pathways and a more physiological plant, successfully predicted this improved visual behavior and again demonstrated its utility in guiding ocular motor research.
Tessa Warren; Kerry McConnell; Keith Rayner
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 1001–1010, 2008.
Plausibility violations resulting in impossible scenarios lead to earlier and longer lasting eye movement disruption than violations resulting in highly unlikely scenarios (K. Rayner, T. Warren, B. J. Juhasz, & S. P. Liversedge, 2004; T. Warren & K. McConnell, 2007). This could reflect either differences in the timing of availability of different kinds of information (e.g., selectional restrictions, world knowledge, and context) or differences in their relative power to guide semantic interpretation. The authors investigated eye movements to possible and impossible events in real-world and fantasy contexts to determine when contextual information influences detection of impossibility cued by a semantic mismatch between a verb and an argument. Gaze durations on a target word were longer to impossible events independent of context. However, a measure of the time elapsed from first fixating the target word to moving past it showed disruption only in the real-world context. These results suggest that contextual information did not eliminate initial disruption but moderated it quickly thereafter.
Geoffrey Underwood; Emma Templeman; Laura Lamming; Tom Foulsham
In: Consciousness and Cognition, vol. 17, no. 1, pp. 159–170, 2008.
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.
Seppo Vainio; Jukka Hyönä; Anneli Pajunen
In: Memory and Cognition, vol. 36, no. 2, pp. 329–340, 2008.
The present study examined whether type of inflectional case (semantic or grammatical) and phonological and morphological transparency affect the processing of Finnish modifier-head agreement in reading. Readers' eye movement patterns were registered. In Experiment 1, an agreeing modifier condition (agreement was transparent) was compared with a no-modifier condition, and in Experiment 2, similar constructions with opaque agreement were used. In both experiments, agreement was found to affect the processing of the target noun with some delay. In Experiment 3, unmarked and case-marked modifiers were used. The results again demonstrated a delayed agreement effect, ruling out the possibility that the agreement effects observed in Experiments 1 and 2 reflect a mere modifier-presence effect. We concluded that agreement exerts its effect at the level of syntactic integration but not at the level of lexical access.
Matteo Valsecchi; Sven Saage; Brian J. White; Karl R. Gegenfurtner
In: Journal of Eye Movement Research, vol. 6, no. 5:2, pp. 1–15, 2008.
Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers using corpus analysis for the identification of lexical bundles and eye-tracking to measure the reading times. The participants read sentences containing 4-grams and control phrases which were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We do not find any processing advantage for non-native speakers, which suggests that native-like processing of lexical bundles comes only late in the acquisition process.
Ronald Berg; Frans W. Cornelissen; Jos B. T. M. Roerdink
In: ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1–21, 2008.
A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors used in our study is concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks.
Menno Van der Schoot; Alain L. Vasbinder; Tako M. Horsley; Ernest C. D. M. Van Lieshout
In: Journal of Research in Reading, vol. 31, no. 2, pp. 203–223, 2008.
This study examined whether 10- to 12-year-old children use two reading strategies to aid their text comprehension: (1) distinguishing between important and unimportant words; and (2) resolving anaphoric references. Of interest was the question to what extent use of these reading strategies was predictive of reading comprehension skill over and above decoding skill and vocabulary. Reading strategy use was examined by the recording of eye fixations on specific target words. In contrast to less successful comprehenders, more successful comprehenders invested more processing time in important than in unimportant words. On the other hand, they needed less time to determine the antecedent of an anaphor. The results suggest that more successful comprehenders build a more effective mental model of the text than less successful comprehenders in at least two ways. First, they allocate more attention to the incorporation of goal-relevant than goal-irrelevant information into the model. Second, they ascertain that the text model is coherent and richly connected.
Stefan Van der Stigchel; Jan Theeuwes
In: NeuroReport, vol. 19, no. 2, pp. 251–254, 2008.
The present study systematically investigated the influence of a distractor on horizontal and vertical eye movements. Results showed that both horizontal and vertical eye movements deviated away from the distractor, but these deviations were stronger for vertical than for horizontal movements. As trajectory deviations away from a distractor are generally attributed to inhibition applied to the distractor, this suggests that this deviation is not only due to differences in activity between the two collicular motor maps, but can also be evoked by local application of inhibitory processes in the same map as the target. Nonetheless, deviations were more dominant for vertical movements, which suggests that more inhibition is applied for these movements than for horizontal movements.
Stefan Van der Stigchel; Wieske Van Zoest; Jan Theeuwes; Jason J. S. Barton
In: Journal of Cognitive Neuroscience, vol. 20, no. 11, pp. 2025–2036, 2008.
There is evidence that some visual information in blind regions may still be processed in patients with hemifield defects after cerebral lesions ("blindsight"). We tested the hypothesis that, in the absence of retinogeniculostriate processing, residual retinotectal processing may still be detected as modifications of saccades to seen targets by irrelevant distractors in the blind hemifield. Six patients were presented with distractors in the blind and intact portions of their visual field and participants were instructed to make eye movements to targets in the intact field. Eye movements were recorded to determine if blind-field distractors caused deviation in saccadic trajectories. No deviation was found in one patient with an optic chiasm lesion, which affects both retinotectal and retinogeniculostriate pathways. In five patients with lesions of the optic radiations or the striate cortex, the results were mixed, with two of the five patients showing significant deviations of saccadic trajectory away from the "blind" distractor. In a second experiment, two of the five patients were tested with the target and the distractor more closely aligned. Both patients showed a "global effect," in that saccades deviated toward the distractor, but the effect was stronger in the patient who also showed significant trajectory deviation in the first experiment. Although our study confirms that distractor effects on saccadic trajectory can occur in patients with damage to the retinogeniculostriate visual pathway but preserved retinotectal projections, there remain questions regarding what additional factors are required for these effects to manifest themselves in a given patient.
Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde
In: Spatial Vision, vol. 22, pp. 335–348, 2008.
When corners are embedded in a luminance gradient, their perceived salience varies linearly with corner angle (Troncoso et al., 2005). Here we hypothesize that this relationship may hold true for all corners, not just corner gradients. To test this hypothesis, we developed a novel variant of the flicker-augmented contrast illusion (Anstis and Ho, 1998) that employs solid (non-gradient) corners of varying angles to modify perceived brightness. We flickered solid corners from dark to light grey (50% luminance over time) against a black or a white background. With this new stimulus, subjects compared the apparent brightness of corners, which did not vary in actual luminance, to non-illusory stimuli that varied in actual luminance. We found that the apparent brightness of corners was linearly related to the sharpness of corner angle. Thus this relationship is not solely an effect of corners embedded in gradients, but may be a general principle of corner perception. These findings may have important repercussions for brain mechanisms underlying the early visual processing of shape and brightness. A large fraction of Vasarely's art showcases the perceptual salience of corners, curvature and terminators. Several of these artworks and their implications for visual processing are discussed.
Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde
Microsaccades counteract perceptual filling-in Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–9, 2008.
Artificial scotomas positioned within peripheral dynamic noise fade perceptually during visual fixation (that is, the surrounding dynamic noise appears to fill-in the scotoma). Because the scotomas' edges are continuously refreshed by the dynamic noise background, this filling-in effect cannot be explained by low-level adaptation mechanisms (such as those that may underlie classical Troxler fading). We recently showed that microsaccades counteract Troxler fading and drive first-order visibility during fixation (S. Martinez-Conde, S. L. Macknik, X. G. Troncoso, & T. A. Dyar, 2006). Here we set out to determine whether microsaccades may counteract the perceptual filling-in of artificial scotomas and thus drive second-order visibility. If so, microsaccades may not only counteract low-level adaptation but also play a role in higher perceptual processes. We asked subjects to indicate, via button press/release, whether an artificial scotoma presented on a dynamic noise background was visible or invisible at any given time. The subjects' eye movements were simultaneously measured with a high precision video system. We found that increases in microsaccade production counteracted the perception of filling-in, driving the visibility of the artificial scotoma. Conversely, decreased microsaccades allowed perceptual filling-in to take place. Our results show that microsaccades do not solely overcome low-level adaptation mechanisms but they also contribute to maintaining second-order visibility during fixation.
Xoana G. Troncoso; Stephen L. Macknik; Jorge Otero-Millan; Susana Martinez-Conde
Microsaccades drive illusory motion in the Enigma illusion Journal Article
In: Proceedings of the National Academy of Sciences, vol. 105, no. 41, pp. 16033–16038, 2008.
Visual images consisting of repetitive patterns can elicit striking illusory motion percepts. For almost 200 years, artists, psychologists, and neuroscientists have debated whether this type of illusion originates in the eye or in the brain. For more than a decade, the controversy has centered on the powerful illusory motion perceived in the painting Enigma, created by op-artist Isia Leviant. However, no previous study has directly correlated the Enigma illusion to any specific physiological mechanism, and so the debate rages on. Here, we show that microsaccades, a type of miniature eye movement produced during visual fixation, can drive illusory motion in Enigma. We asked subjects to indicate when illusory motion sped up or slowed down during the observation of Enigma while we simultaneously recorded their eye movements with high precision. Before "faster" motion periods, the rate of microsaccades increased. Before "slower/no" motion periods, the rate of microsaccades decreased. These results reveal a direct link between microsaccade production and the perception of illusory motion in Enigma and rule out the hypothesis that the origin of the illusion is purely cortical.
Yuan-Chi Tseng; Chiang-Shan Ray Li
In: The Open Psychology Journal, vol. 1, no. 1, pp. 18–25, 2008.
The stop-signal task (SST) and anti-saccade tasks are both widely used to explore cognitive inhibitory control. Our previous work on a manual SST showed that subjects' readiness to respond to the go signal and the extent to which subjects monitor their errors need to be considered in order to attribute impaired performance to deficits in response inhibition. Here we examine whether these same task-related variables similarly influence oculomotor SST and anti-saccade performance. Thirty-six and sixty healthy, adult subjects participated in an oculomotor SST and anti-saccade task, respectively, in which the fore-period (FP) of the imperative stimulus varied randomly from trial to trial. We computed a FP effect to index response readiness to the imperative stimulus and a post-error slowing (PES) effect to index error monitoring. Contrary to what we had anticipated, other than a weak but negative association between the FP effect and anti-saccade errors, these behavioral variables did not correlate with SST or anti-saccade performance.
Brian Sullivan; Jelena Jovancevic-Misic; Mary Hayhoe; Gwen Sterns
In: Ophthalmic and Physiological Optics, vol. 28, no. 2, pp. 168–177, 2008.
Individuals with central visual field loss often use a preferred retinal locus (PRL) to compensate for their deficit. We present a case study examining the eye movements of a subject with bilateral central scotomas caused by Stargardt's disease while the subject performed a set of natural tasks: making a sandwich, building a model, reaching and grasping, and catching a ball. In general, the subject preferred to use PRLs in the lower left visual field. However, there was considerable variation in the location and extent of the PRLs used. Our results demonstrate that a well-defined PRL is not necessary to adequately perform this set of tasks and that many sites in the peripheral retina may be viable for PRLs, contingent on task and stimulus constraints.
Joshua M. Susskind; Daniel H. Lee; Andrée Cusi; Roman Feiman; Wojtek Grabski; Adam K. Anderson
Expressing fear enhances sensory acquisition Journal Article
In: Nature Neuroscience, vol. 11, no. 7, pp. 843–850, 2008.
It has been proposed that facial expression production originates in sensory regulation. Here we demonstrate that facial expressions of fear are configured to enhance sensory acquisition. A statistical model of expression appearance revealed that fear and disgust expressions have opposite shape and surface reflectance features. We hypothesized that this reflects a fundamental antagonism serving to augment versus diminish sensory exposure. In keeping with this hypothesis, when subjects posed expressions of fear, they had a subjectively larger visual field, faster eye movements during target localization and an increase in nasal volume and air velocity during inspiration. The opposite pattern was found for disgust. Fear may therefore work to enhance perception, whereas disgust dampens it. These convergent results provide support for the Darwinian hypothesis that facial expressions are not arbitrary configurations for social communication, but rather, expressions may have originated in altering the sensory interface with the physical world.
Giovanni Taibbi; Zhong I. Wang; Louis F. Dell'Osso
In: Ophthalmology, vol. 2, no. 3, pp. 585–589, 2008.
We investigated the effects of contact lenses in broadening and improving the high-foveation-quality field in a subject with infantile nystagmus syndrome (INS). A high-speed, digitized video system was used for the eye-movement recording. The subject was asked to fixate a far target at different horizontal gaze angles with contact lenses inserted. Data from the subject while fixating at far without refractive correction and at near (at a convergence angle of 60 PD) were used for comparison. The eXpanded Nystagmus Acuity Function (NAFX) was used to evaluate the foveation quality at each gaze angle. Contact lenses broadened the high-foveation-quality range of gaze angles in this subject. The broadening was comparable to that achieved during 60 PD of convergence, although the NAFX values were lower. Contact lenses allowed the subject to see "more" (he had a wider range of high-foveation-quality gaze angles) and "better" (he had improved foveation at each gaze angle). Instead of being contraindicated by INS, contact lenses emerge as a potentially important therapeutic option. Contact lenses employ afferent feedback via the ophthalmic division of the V cranial nerve to damp INS slow phases over a broadened range of gaze angles. This supports the proprioceptive hypothesis of INS improvement.
Kohske Takahashi; Katsumi Watanabe
Persisting effect of prior experience of change blindness Journal Article
In: Perception, vol. 37, no. 2, pp. 324–327, 2008.
Most cognitive scientists know that an airplane tends to lose its engine when the display is flickering. How does such prior experience influence visual search? We recorded eye movements made by vision researchers while they were actively performing a change-detection task. In selected trials, we presented Rensink's familiar 'airplane' display, but with changes occurring at locations other than the jet engine. The observers immediately noticed that there was no change in the location where the engine had changed in the previous change-blindness demonstration. Nevertheless, eye-movement analyses indicated that the observers were compelled to look at the location of the unchanged engine. These results demonstrate the powerful effect of prior experience on eye movements, even when the observers are aware of the futility of doing so.
Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Thomas Eggert; Zoï Kapoula
In: Investigative Ophthalmology & Visual Science, vol. 49, no. 1, pp. 230–237, 2008.
PURPOSE: Human ocular saccades are not perfectly yoked; the origin of this disconjugacy (muscular versus central) remains controversial. The purpose of this study was to test a cortical influence on the binocular coordination of saccades. METHODS: The authors used a gap paradigm to elicit vertical or horizontal saccades of 10 degrees, randomly interleaved; transcranial magnetic stimulation (TMS) was applied on the posterior parietal cortex (PPC) 100 ms after the target onset. RESULTS: TMS of the left or right PPC increased (i) the misalignment of the eyes during the presaccadic fixation period; (ii) the size difference between the saccades of the eyes, called disconjugacy; the increase of disconjugacy was significant for rightward and downward saccades after TMS of the right PPC and for downward saccades after TMS of the left PPC. CONCLUSIONS: The authors conclude that the PPC is actively involved in maintaining eye alignment during fixation and in the control of binocular coordination of saccades.
Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Zoï Kapoula
In: Brain Research Bulletin, vol. 76, no. 1-2, pp. 50–56, 2008.
This study tests the influence of transcranial magnetic stimulation (TMS) of the posterior parietal cortex (PPC) on the initiation of horizontal and vertical saccades, alone or combined with a predictable divergence. A gap paradigm was used; TMS was applied 100 ms after target onset. TMS of the left PPC increased the latency of unpredictable rightward saccades, while TMS of the right PPC increased the latency of unpredictable downward saccades. Yet, when unpredictable saccades were combined with predictable divergence, neither component was affected. We suggest that in the latter case, the initiation of both components was taken in charge by another area, e.g. frontal. Thus, even when one component was predictable, a common mechanism controls the initiation of both components. The results confirm that TMS only modifies the latency when the cortical area stimulated is involved in the triggering of the eye movement.
Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Zoï Kapoula
In: Optometry and Vision Science, vol. 85, no. 3, pp. 187–195, 2008.
Purpose. In real life, divergence is frequently combined with vertical saccades. The purpose of this study was to examine the initiation of vertical and horizontal saccades, pure or combined with divergence. Methods. We used a gap paradigm to elicit vertical or horizontal saccades (10 degrees), pure or combined with a predictable divergence (10 degrees). Eye movements from 12 subjects were recorded with EyeLink II. Results. The major results were (i) when combined with divergence, the latency of horizontal saccades increased but not the latency of vertical saccades; (ii) for both vertical and horizontal saccades, a tight correlation between the latency of saccade and divergence was found; (iii) when the divergence was anticipated, the saccade was delayed. Conclusion. We conclude that the initiation of both components of combined movements is interdependent.
Julius Verrel; Harold Bekkering; Bert Steenbergen
In: Experimental Brain Research, vol. 187, no. 1, pp. 107–116, 2008.
In the present study we investigated eye-hand coordination in adolescents with hemiparetic cerebral palsy (CP) and neurologically healthy controls. Using an object prehension and transport task, we addressed two hypotheses, motivated by the question whether early brain damage and the ensuing limitations of motor activity lead to general and/or effector-specific effects in visuomotor control of manual actions. We hypothesized that individuals with hemiparetic CP would more closely visually monitor actions with their affected hand, compared to both their less affected hand and to control participants without a sensorimotor impairment. A second, more speculative hypothesis was that, in relation to previously established deficits in prospective action control in individuals with hemiparetic CP, gaze patterns might be less anticipatory in general, also during actions performed with the less affected hand. Analysis of the gaze and hand movement data revealed the increased visual monitoring of participants with CP when using their affected hand at the beginning as well as during object transport. In contrast, no general deficit in anticipatory gaze control in the participants with hemiparetic CP could be observed. Collectively, these findings are the first to directly show that individuals with hemiparetic CP adapt eye-hand coordination to the specific constraints of the moving limb, presumably to compensate for sensorimotor deficits.
Christian Vorstius; Ralph Radach; Alan R. Lang; Christina J. Riccardi
In: Psychopharmacology, vol. 196, no. 2, pp. 201–210, 2008.
RATIONALE: Alcohol affects a variety of human behaviors, including visual perception and motor control. Although recent research has begun to explore mechanisms that mediate these changes, their exact nature is still not well understood. OBJECTIVES: The present study used two basic oculomotor tasks to examine the effect of alcohol on different levels of visual processing within the same individuals. A theoretical framework is offered to integrate findings across multiple levels of oculomotor control. MATERIALS AND METHODS: Twenty-four healthy participants were asked to perform eye movements in reflexive (pro-) and voluntary (anti-) saccade tasks. In one of two counterbalanced sessions, performance was measured after alcohol administration (mean BrAC=69 mg%); the other served as a within-subjects no-alcohol comparison condition. RESULTS: Error rates were not influenced by alcohol intoxication in either task. However, there were significant effects of alcohol on saccade latency and peak velocity in both tasks. Critically, a specific alcohol-induced impairment (hypermetria) in saccade amplitudes was observed exclusively in the anti-saccade task. CONCLUSIONS: The saccade latency data strongly suggest that alcohol intoxication impairs temporal aspects of saccade generation, irrespective of the level of processing triggering the saccade. The absence of effects on anti-saccade errors calls for further research into the notion of alcohol-induced impairment of the ability to inhibit prepotent responses. Furthermore, the specific impairment of saccade amplitude in the anti-saccade task under alcohol suggests that higher level processes involved in the spatial remapping of target location in the absence of a visually specified saccade goal are specifically affected by alcohol intoxication.
Robin Walker; Eugene McSorley
In: Journal of Eye Movement Research, vol. 2, no. 3, pp. 1–13, 2008.
It has long been known that the path (trajectory) taken by the eye to land on a target is rarely straight (Yarbus, 1967). Furthermore, the magnitude and direction of this natural tendency for curvature can be modulated by the presence of a competing distractor stimulus presented along with the saccade target. The distractor-related modulation of saccade trajectories provides a subtle measure of the underlying competitive processes involved in saccade target selection. Here we review some of our own studies into the effects distractors have on saccade trajectories, which can be regarded as a way of probing the competitive balance between target and distractor salience.
Benjamin W. Tatler; Benjamin T. Vincent
Systematic tendencies in scene viewing Journal Article
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–18, 2008.
While many current models of scene perception debate the relative roles of low- and high-level factors in eye guidance, systematic tendencies in how the eyes move may be informative. We consider how each saccade and fixation is influenced by that which preceded or followed it, during free inspection of images of natural scenes. We find evidence to suggest periods of localized scanning separated by ‘global' relocations to new regions of the scene. We also find evidence to support the existence of small amplitude ‘corrective' saccades in natural image viewing. Our data reveal statistical dependencies between successive eye movements, which may be informative in furthering our understanding of eye guidance.
Benjamin W. Tatler; Nicholas J. Wade; Kathrin Kaulard
In: Spatial Vision, vol. 21, no. 1, pp. 165–184, 2008.
When observing art the viewer's understanding results from the interplay between the marks made on the surface by the artist and the viewer's perception and knowledge of it. Here we use a novel set of stimuli to dissociate the influences of the marks on the surface and the viewer's perceptual experience upon the manner in which the viewer inspects art. Our stimuli provide the opportunity to study situations in which (1) the same visual stimulus can give rise to two different perceptual experiences in the viewer, and (2) the visual stimuli differ but give rise to the same perceptual experience in the viewer. We find that oculomotor behaviour changes when the perceptual experience changes. Oculomotor behaviour also differs when the viewer's perceptual experience is the same but the visual stimulus is different. The methodology used and insights gained from this study offer a first step toward an experimental exploration of the relative influences of the artist's creation and viewer's perception when viewing art and also toward a better understanding of the principles of composition in portraiture.
T. Teichert; Steffen Klingenhoefer; T. Wachtler; Frank Bremmer
Depth perception during saccades Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–13, 2008.
A number of studies have investigated the localization of briefly flashed targets during saccades to understand how the brain perceptually compensates for changes in gaze direction. Typical version saccades, i.e., saccades between two points of the horopter, are not only associated with changes in gaze direction, but also with large transient changes of ocular vergence. These transient changes in vergence have to be compensated for just as changes in gaze direction. We investigated depth judgments of perisaccadically flashed stimuli relative to continuously present references and report several novel findings. First, disparity thresholds increased around saccade onset. Second, for horizontal saccades, depth judgments were prone to systematic errors: Stimuli flashed around saccade onset were perceived in a closer depth plane than persistently shown references with the same retinal disparity. Briefly before and after this period, flashed stimuli tended to be perceived in a farther depth plane. Third, depth judgments for upward and downward saccades differed substantially: For upward, but not for downward saccades we observed the same pattern of mislocalization as for horizontal saccades. Finally, unlike localization in the fronto-parallel plane, depth judgments did not critically depend on the presence of visual references. Current models fail to account for the observed pattern of mislocalization in depth.
Masahiko Terao; Junji Watanabe; Akihiro Yagi; Shin'ya Nishida
In: Nature Neuroscience, vol. 11, no. 5, pp. 541–542, 2008.
The neural mechanisms underlying visual estimation of subsecond durations remain unknown, but perisaccadic underestimation of interflash intervals may provide a clue as to the nature of these mechanisms. Here we found that simply reducing the flash visibility, particularly the visibility of transient signals, induced similar time underestimation by human observers. Our results suggest that weak transient responses fail to trigger the proper detection of temporal asynchrony, leading to increased perception of simultaneity and apparent time compression.
Marco Thiel; M. Carmen Romano; Jürgen Kurths; Martin Rolfs; Reinhold Kliegl
Generating surrogates from recurrences Journal Article
In: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 366, pp. 545–557, 2008.
In this paper, we present an approach to recover the dynamics from recurrences of a system and then generate (multivariate) twin surrogate (TS) trajectories. In contrast to other approaches, such as the linear-like surrogates, this technique produces surrogates which correspond to an independent copy of the underlying system, i.e. they induce a trajectory of the underlying system visiting the attractor in a different way. We show that these surrogates are well suited to test for complex synchronization, which makes it possible to systematically assess the reliability of synchronization analyses. We then apply the TS to study binocular fixational movements and find strong indications that the fixational movements of the left and right eye are phase synchronized. This result indicates that there might be only one centre in the brain that produces the fixational movements in both eyes or a close link between the two centres.
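The twin-surrogate method above builds on the recurrence matrix R(i, j) = Θ(ε − ‖x_i − x_j‖). Purely as an illustrative sketch (function names and the toy trajectory are assumptions, and the surrogate-generation step itself is not implemented), a recurrence matrix and its "twins" — time points with identical recurrence columns, between which a surrogate trajectory may jump — can be computed as follows:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if states i and j of the
    trajectory x (shape: time x dim) lie within distance eps."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

def find_twins(R):
    """Twins: pairs of time points whose recurrence columns are
    identical, i.e. indistinguishable from the recurrence structure.
    Twin surrogates are generated by randomly jumping between twins
    while replaying the trajectory (not implemented here)."""
    twins = []
    n = R.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if np.array_equal(R[:, i], R[:, j]):
                twins.append((i, j))
    return twins

# Toy trajectory: a circle traversed twice, so distant time points revisit
# the same region of the attractor
t = np.linspace(0, 4 * np.pi, 100)
x = np.column_stack([np.sin(t), np.cos(t)])
R = recurrence_matrix(x, eps=0.3)
print(R.shape, len(find_twins(R)))
```

The quadratic twin search is fine for short recordings; longer eye-movement traces would need a hashed column comparison.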
P. D. Thiem; Jessica A. Hill; K.-M. Lee; Edward L. Keller
Behavioral properties of saccades generated as a choice response Journal Article
In: Experimental Brain Research, vol. 186, no. 3, pp. 355–364, 2008.
The behavior characterizing choice response decision-making was studied in monkeys to provide background information for ongoing neurophysiological studies of the neural mechanisms underlying saccadic choice decisions. Animals were trained to associate a specific color from a set of colored visual stimuli with a specific spatial location. The visual stimuli (colored disks) appeared briefly at equal eccentricity from a central fixation position and then were masked by gray disks. The correct target association was subsequently cued by the appearance of a colored stimulus at the fixation point. The animal indicated its choice by saccading to the remembered location of the eccentric stimulus, which had matched the color of the cue. The number of alternative associations (NA) varied from 1 to 4 and remained fixed within a block of trials. After the training period, performance (percent correct responses) declined modestly as NA increased (on average 96, 93 or 84% correct for 1, 2 or 4 NA, respectively). Response latency increased logarithmically as a function of NA, thus obeying Hick's law. The spatial extent of the learned association between color and location was investigated by rotating the array of colored stimuli that had remained fixed during the learning phase to various different angles. Error rates in choice saccades increased gradually as a function of the amount of rotation. The learned association biased the direction of the saccadic response toward the quadrant associated with the cue, but saccade direction was always toward one of the actual visual stimuli. This suggests that the learned associations between stimuli and responses were not spatially exact, but instead the association between color and location was distributed with declining strength from the trained locations. 
These results demonstrate that the saccade system in monkeys also displays the characteristic dependence on NA in choice response latencies, while more basic features of the eye movements are invariant from those in other tasks. The findings also provide behavioral evidence that spatially distributed regions are established for the sensory-to-motor associations during training which are later utilized for choice decisions.
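The logarithmic growth of latency with the number of alternatives reported above is Hick's law, RT = a + b·log2(N + 1). A minimal sketch of the predicted latencies, with placeholder coefficients a and b that are illustrative assumptions rather than values fitted to the monkeys' data:

```python
import math

def hicks_law_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted response latency (s) under Hick's law:
    RT = a + b * log2(N + 1). The coefficients a (base latency)
    and b (per-bit cost) are illustrative placeholders."""
    return a + b * math.log2(n_alternatives + 1)

for n in (1, 2, 4):  # the NA values used in the study
    print(n, round(hicks_law_rt(n), 3))
```

With any positive b, the predicted latency rises with NA but with diminishing increments, matching the qualitative pattern in the abstract.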
Shery Thomas; Frank A. Proudlock; Nagini Sarvananthan; Eryl O. Roberts; Musarat Awan; Rebecca J. McLean; Mylvaganam Surendran; A. S. Anil Kumar; Shegufta J. Farooq; Christopher Degg; Richard P. Gale; Robert D. Reinecke; Geoffrey Woodruff; Andrea Langmann; Susanne Lindner; Sunila Jain; Patrick Tarpey; F. Lucy Raymond; Irene Gottlob
In: Brain, vol. 131, no. 5, pp. 1259–1267, 2008.
Idiopathic infantile nystagmus (IIN) consists of involuntary oscillations of the eyes. The familial form is most commonly X-linked. We recently found mutations in a novel gene FRMD7 (Xq26.2), which provided an opportunity to investigate a genetically defined and homogeneous group of patients with nystagmus. We compared clinical features and eye movement recordings of 90 subjects with mutation in the gene (FRMD7 group) to 48 subjects without mutations but with clinical IIN (non-FRMD7 group). Fifty-eight female obligate carriers of the mutation were also investigated. The median visual acuity (VA) was 0.2 logMAR (Snellen equivalent 6/9) in both groups and most patients had good stereopsis. The prevalence of strabismus was also similar (FRMD7: 7.8%, non-FRMD7: 10%). The presence of anomalous head posture (AHP) was significantly higher in the non-FRMD7 group (P < 0.0001). The amplitude of nystagmus was more strongly dependent on the direction of gaze in the FRMD7 group being lower at primary position (P < 0.0001), compared to non-FRMD7 group (P = 0.83). Pendular nystagmus waveforms were also more frequent in the FRMD7 group (P = 0.003). Fifty-three percent of the obligate female carriers of an FRMD7 mutation were clinically affected. The VA's in affected females were slightly better compared to affected males (P = 0.014). Subnormal optokinetic responses were found in a subgroup of obligate unaffected carriers, which may be interpreted as a sub-clinical manifestation. FRMD7 is a major cause of X-linked IIN. Most clinical and eye movement characteristics were similar in the FRMD7 group and non-FRMD7 group with most patients having good VA and stereopsis and low incidence of strabismus. Fewer patients in the FRMD7 group had AHPs, their amplitude of nystagmus being lower in primary position. Our findings are helpful in the clinical identification of IIN and genetic counselling of nystagmus patients.
Aidan A. Thompson; Denise Y. P. Henriques
In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2507–2514, 2008.
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Delphine Dahan; Sarah J. Drucker; Rebecca A. Scarborough
In: Cognition, vol. 108, no. 3, pp. 710–718, 2008.
Past research has established that listeners can accommodate a wide range of talkers in understanding language. How this adjustment operates, however, is a matter of debate. Here, listeners were exposed to spoken words from a speaker of an American English dialect in which the vowel /æ/ is raised before /g/, but not before /k/. Results from two experiments showed that listeners' identification of /k/-final words like back (which are unaffected by the dialect) was facilitated by prior exposure to their dialect-affected /g/-final counterparts, e.g., bag. This facilitation occurred because the competition between interpretations, e.g., bag or back, while hearing the initial portion of the input [bæ], was mitigated by the reduced probability for the input to correspond to bag as produced by this talker. Thus, adaptation to an accent is not just a matter of adjusting the speech signal as it is being heard; adaptation involves dynamic adjustment of the representations stored in the lexicon, according to the characteristics of the speaker or the context.
Stephen V. David; Benjamin Y. Hayden; James A. Mazer; Jack L. Gallant
In: Neuron, vol. 59, no. 3, pp. 509–521, 2008.
Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.
Scott L. Davis; Teresa C. Frohman; C. J. Crandall; M. J. Brown; D. A. Mills; Phillip D. Kramer; O. Stuve; Elliot M. Frohman
In: Neurology, vol. 70, pp. 1098–1106, 2008.
Objective: The goal of this investigation was to demonstrate that internuclear ophthalmoparesis (INO) can be utilized to model the effects of body temperature-induced changes on the fidelity of axonal conduction in multiple sclerosis (Uhthoff's phenomenon). Methods: Ocular motor function was measured using infrared oculography at 10-minute intervals in patients with multiple sclerosis (MS) with INO (MS-INO; n=8), patients with MS without INO (MS-CON; n=8), and matched healthy controls (CON; n=8) at normothermic baseline, during whole-body heating (increase in core temperature of 0.8°C as measured by an ingestible temperature probe and transabdominal telemetry), and after whole-body cooling. The versional disconjugacy index (velocity-VDI), the ratio of abducting/adducting eye movements for velocity, was calculated to assess changes in interocular disconjugacy. The first pass amplitude (FPA), the position of the adducting eye when the abducting eye achieves a centrifugal fixation target, was also computed. Results: Velocity-VDI and FPA in MS-INO patients were elevated (p<0.001) following whole-body heating with respect to baseline measures, confirming a compromise in axonal electrical impulse transmission properties. Velocity-VDI and FPA in MS-INO patients were then restored to baseline values following whole-body cooling, confirming the reversible and stereotyped nature of this characteristic feature of demyelination. Conclusions: We have developed a neurophysiologic model for objectively understanding temperature-related reversible changes in axonal conduction in multiple sclerosis. Our observations corroborate the hypothesis that changes in core body temperature (heating and cooling) are associated with stereotypic decay and restoration in axonal conduction mechanisms.
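The velocity-VDI defined above is a ratio of abducting to adducting eye-movement velocities. A simplified sketch of how such an index could be computed from paired peak velocities (the function name and the averaging across saccade pairs are assumptions; the study's exact computation may differ):

```python
def velocity_vdi(abducting_peaks, adducting_peaks):
    """Versional disconjugacy index for velocity: mean ratio of
    abducting to adducting peak saccadic velocity (deg/s) across
    matched saccade pairs. A value near 1 indicates conjugate eyes;
    values > 1 suggest adduction slowing, as in INO. Simplified
    sketch only."""
    ratios = [ab / ad for ab, ad in zip(abducting_peaks, adducting_peaks)]
    return sum(ratios) / len(ratios)

# Hypothetical peak velocities for three matched saccade pairs
print(velocity_vdi([400, 420, 410], [300, 310, 305]))
```

On this toy input the index exceeds 1, the direction of change the study reports after whole-body heating in MS-INO patients.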
Denise D. J. Grave; Constanze Hesse; Anne-Marie Brouwer; Volker H. Franz
Fixation locations when grasping partly occluded objects Journal Article
In: Journal of Vision, vol. 8, no. 7, pp. 1–11, 2008.
When grasping an object, subjects tend to look at the contact positions of the digits (A. M. Brouwer, V. H. Franz, D. Kerzel, & K. R. Gegenfurtner, 2005; R. S. Johansson, G. Westling, A. Bäckström, & J. R. Flanagan, 2001). However, these contact positions are not always visible due to occlusion. Subjects might look at occluded parts to determine the location of the contact positions based on extrapolated information. On the other hand, subjects might avoid looking at occluded parts since no object information can be gathered there. To find out where subjects fixate when grasping occluded objects, we let them grasp flat shapes with the index finger and thumb at predefined contact positions. Either the contact position of the thumb or the finger or both was occluded. In a control condition, a part of the object that does not involve the contact positions was occluded. The results showed that subjects did look at occluded object parts, suggesting that they used extrapolated object information for grasping. Additionally, they preferred to look in the direction of the index finger. When the contact position of the index finger was occluded, this tendency was inhibited. Thus, an occluder does not prevent fixations on occluded object parts, but it does affect fixation locations especially in conditions where the preferred fixation location is occluded.
Sarah Brown-Schmidt; Christine Gunlogson; Michael K. Tanenhaus
In: Cognition, vol. 107, no. 3, pp. 1122–1134, 2008.
Two experiments examined the role of common ground in the production and on-line interpretation of wh-questions such as What's above the cow with shoes? Experiment 1 examined unscripted conversation, and found that speakers consistently use wh-questions to inquire about information known only to the addressee. Addressees were sensitive to this tendency, and quickly directed attention toward private entities when interpreting these questions. A second experiment replicated the interpretation findings in a more constrained setting. These results add to previous evidence that the common ground influences initial language processes, and suggests that the strength and polarity of common ground effects may depend on contributions of sentence type as well as the interactivity of the situation.
Julie N. Buchan; Martin Paré; Kevin G. Munhall
In: Brain Research, vol. 1242, pp. 162–171, 2008.
During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.
Antimo Buonocore; Robert D. McIntosh
Saccadic inhibition underlies the remote distractor effect Journal Article
In: Experimental Brain Research, vol. 191, no. 1, pp. 117–122, 2008.
The remote distractor effect is a robust finding whereby a saccade to a lateralised visual target is delayed by the simultaneous, or near simultaneous, onset of a distractor in the opposite hemifield. Saccadic inhibition is a more recently discovered phenomenon whereby a transient change to the scene during a visual task induces a depression in saccadic frequency beginning within 70 ms, and maximal around 90-100 ms. We assessed whether saccadic inhibition is responsible for the increase in saccadic latency induced by remote distractors. Participants performed a simple saccadic task in which the delay between target and distractor was varied between 0, 25, 50, 100 and 150 ms. Examination of the distributions of saccadic latencies showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. We conclude that saccadic inhibition underlies the remote distractor effect.
Manuel G. Calvo; Pedro Avero
In: Cognitive, Affective and Behavioral Neuroscience, vol. 8, no. 1, pp. 41–53, 2008.
This study investigated whether stimulus affective content can be extracted from visual scenes when these appear in parafoveal locations of the visual field and are foveally masked, and whether there is lateralization involved. Parafoveal prime pleasant or unpleasant scenes were presented for 150 msec 2.5° away from fixation and were followed by a foveal probe scene that was either congruent or incongruent in emotional valence with the prime. Participants responded whether the probe was emotionally positive or negative. Affective priming was demonstrated by shorter response latencies for congruent than for incongruent prime-probe pairs. This effect occurred when the prime was presented in the left visual field at a 300-msec prime-probe stimulus onset asynchrony, even when the prime and the probe were different in physical appearance and semantic category. This result reveals that the affective significance of emotional stimuli can be assessed early through covert attention mechanisms, in the absence of overt eye fixations on the stimuli, and suggests that right-hemisphere dominance is involved.
Manuel G. Calvo; Michael W. Eysenck
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1669–1686, 2008.
To investigate the processing of emotional words by covert attention, threat-related, positive, and neutral word primes were presented parafoveally (2.2 degrees away from fixation) for 150 ms, under gaze-contingent foveal masking, to prevent eye fixations. The primes were followed by a probe word in a lexical-decision task. In Experiment 1, results showed a parafoveal threat-anxiety superiority: Parafoveal prime threat words facilitated responses to probe threat words for high-anxiety individuals, in comparison with neutral and positive words, and relative to low-anxiety individuals. This reveals an advantage in threat processing by covert attention, without differences in overt attention. However, anxiety was also associated with greater familiarity with threat words, and the parafoveal priming effects were significantly reduced when familiarity was covaried out. To further examine the role of word knowledge, in Experiment 2, vocabulary and word familiarity were equated for low- and high-anxiety groups. In these conditions, the parafoveal threat-anxiety advantage disappeared. This suggests that the enhanced covert-attention effect depends on familiarity with words.
Manuel G. Calvo; Lauri Nummenmaa
In: Journal of Experimental Psychology: General, vol. 137, no. 3, pp. 471–494, 2008.
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection.
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero
In: Experimental Psychology, vol. 55, no. 6, pp. 359–370, 2008.
In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after the target had been fixated, which implies that detection of all facial expressions is post- rather than preattentional.
Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä
In: Emotion, vol. 8, no. 1, pp. 68–80, 2008.
Emotional-neutral pairs of visual scenes were presented peripherally (with their inner edges 5.2 degrees away from fixation) as primes for 150 to 900 ms, followed by a centrally presented recognition probe scene, which was either identical in specific content to one of the primes or related in general content and affective valence. Results indicated that (a) if no foveal fixations on the primes were allowed, the false alarm rate for emotional probes was increased; (b) hit rate and sensitivity (A') were higher for emotional than for neutral probes only when a fixation was possible on only one prime; and (c) emotional scenes were more likely to attract the first fixation than neutral scenes. It is concluded that the specific content of emotional or neutral scenes is not processed in peripheral vision. Nevertheless, a coarse impression of emotional scenes may be extracted, which then leads to selective attentional orienting or, in the absence of overt attention, causes false alarms for related probes.
David D. Cox; Alexander M. Papanastassiou; Daniel Oreper; Benjamin B. Andken; James J. DiCarlo
In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2966–2976, 2008.
Much of our knowledge of brain function has been gleaned from studies using microelectrodes to characterize the response properties of individual neurons in vivo. However, because it is difficult to accurately determine the location of a microelectrode tip within the brain, it is impossible to systematically map the fine three-dimensional spatial organization of many brain areas, especially in deep structures. Here, we present a practical method based on digital stereo microfocal X-ray imaging that makes it possible to estimate the three-dimensional position of each and every microelectrode recording site in "real time" during experimental sessions. We determined the system's ex vivo localization accuracy to be better than 50 µm, and we show how we have used this method to coregister hundreds of deep-brain microelectrode recordings in monkeys to a common frame of reference with median error of <150 µm. We further show how we can coregister those sites with magnetic resonance images (MRIs), allowing for comparison with anatomy, and laying the groundwork for more detailed electrophysiology/functional MRI comparison. Minimally, this method allows one to marry the single-cell specificity of microelectrode recording with the spatial mapping abilities of imaging techniques; furthermore, it has the potential of yielding fundamentally new kinds of high-resolution maps of brain function.
Matthew T. Crawford; John J. Skowronski; Chris Stiff; Ute Leonards
In: Journal of Experimental Social Psychology, vol. 44, no. 3, pp. 840–847, 2008.
When an informant describes trait-implicative behavior of a target, the informant is often associated with the trait implied by the behavior and can be assigned heightened ratings on that trait (STT effects). Presentation of a target photo along with the description seemingly eliminates these effects. Using three different measures of visual attention, the results of two studies show the elimination of STT effects by target photo presentation cannot be attributed to associative mechanisms linked to enhanced visual attention to targets. Instead, presentation of a target's photo likely prompts perceivers to spontaneously make target inferences in much the same way they make spontaneous inferences about self-describers. As argued by Todorov and Uleman [Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality & Social Psychology, 87, 482-493], such attributional processing can preclude the formation of trait associations to informants.