All EyeLink Publications
All 11,000+ peer-reviewed EyeLink research publications up to 2022 (with some from early 2023) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Sangita Dandekar; Claudio M. Privitera; Thom Carney; Stanley A. Klein
In: Journal of Neurophysiology, vol. 107, no. 4, pp. 1776–1790, 2011.
Studying neural activity during natural viewing conditions is not often attempted. Isolating the neural response of a single saccade is necessary to study neural activity during natural viewing; however, the close temporal spacing of saccades that occurs during natural viewing makes it difficult to determine the response to a single saccade. Herein, a general linear model (GLM) approach is applied to estimate the EEG neural saccadic response for different segments of the saccadic main sequence separately. It is determined that, in visual search conditions, neural responses estimated by conventional event-related averaging are significantly and systematically distorted relative to GLM estimates due to the close temporal spacing of saccades during visual search. Before the GLM is applied, analyses are applied that demonstrate that saccades during visual search with intersaccadic spacings as low as 100-150 ms do not exhibit significant refractory effects. Therefore, saccades displaying different intersaccadic spacings during visual search can be modeled using the same regressor in a GLM. With the use of the GLM approach, neural responses were separately estimated for five different ranges of saccade amplitudes during visual search. Occipital responses time locked to the onsets of saccades during visual search were found to account for, on average, 79 percent of the variance of EEG activity in a window 90-200 ms after the onsets of saccades for all five saccade amplitude ranges that spanned a range of 0.2-6.0 degrees. A GLM approach was also used to examine the lateralized ocular artifacts associated with saccades. Possible extensions of the methods presented here to account for the superposition of microsaccades in event-related EEG studies conducted in nominal fixation conditions are discussed.
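The overlap problem this abstract addresses can be illustrated with a toy deconvolution. The sketch below is not the authors' code; the response shape, noise level, and event counts are invented. It shows the core idea: when events fall closer together than the response they evoke, a finite-impulse-response (FIR) design matrix lets least squares estimate the single-event response jointly, where plain event-locked averaging would mix neighbouring responses.

```python
import numpy as np

def fir_design_matrix(onsets, n_samples, n_lags):
    """One column per post-event lag; each event adds a 1 at row onset + lag."""
    X = np.zeros((n_samples, n_lags))
    for onset in onsets:
        for lag in range(n_lags):
            if onset + lag < n_samples:
                X[onset + lag, lag] += 1.0
    return X

rng = np.random.default_rng(0)
n_samples, n_lags = 5000, 40
true_resp = np.exp(-np.arange(n_lags) / 8.0)            # hypothetical event-locked response
onsets = np.sort(rng.choice(n_samples - n_lags, size=120, replace=False))
X = fir_design_matrix(onsets, n_samples, n_lags)
eeg = X @ true_resp + 0.1 * rng.standard_normal(n_samples)  # synthetic overlapping signal

# GLM estimate: least squares recovers the single-event response
# despite temporally overlapping events.
beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```

With regressors for different saccade-amplitude bins added as extra column blocks, the same least-squares machinery yields separate response estimates per bin, which is the spirit of the approach the abstract describes.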
J. Rhys Davies; Tom C. A. Freeman
In: Vision Research, vol. 51, no. 14, pp. 1637–1647, 2011.
Simultaneously adapting to retinal motion and non-collinear pursuit eye movement produces a motion aftereffect (MAE) that moves in a different direction to either of the individual adapting motions. Mack, Hill and Kahn (1989, Perception, 18, 649-655) suggested that the MAE was determined by the perceived motion experienced during adaptation. We tested the perceived-motion hypothesis by having observers report perceived direction during simultaneous adaptation. For both central and peripheral retinal motion adaptation, perceived direction did not predict the direction of the subsequent MAE. To explain the findings we propose that the MAE is based on the vector sum of two components, one corresponding to a retinal MAE opposite to the adapting retinal motion and the other corresponding to an extra-retinal MAE opposite to the eye movement. A vector model of this component hypothesis showed that the MAE directions reported in our experiments were the result of an extra-retinal component that was substantially larger in magnitude than the retinal component when the adapting retinal motion was positioned centrally. However, when retinal adaptation was peripheral, the model suggested the magnitude of the components should be about the same. These predictions were tested in a final experiment that used a magnitude estimation technique. Contrary to the predictions, the results showed no interaction between type of adaptation (retinal or pursuit) and the location of adapting retinal motion. Possible reasons for the failure of the component hypothesis to fully explain the data are discussed.
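The component hypothesis described in this abstract amounts to a two-vector sum. The following is a minimal sketch of that idea, not the authors' fitted model; the angle convention and the free weights are illustrative assumptions.

```python
import numpy as np

def mae_direction(retinal_dir_deg, pursuit_dir_deg, w_retinal=1.0, w_extraretinal=1.0):
    """Predicted motion-aftereffect direction under a component hypothesis:
    the vector sum of a retinal component opposite the adapting retinal
    motion and an extra-retinal component opposite the pursuit direction.
    Directions are in degrees; the weights are free parameters."""
    def unit(deg):
        rad = np.deg2rad(deg)
        return np.array([np.cos(rad), np.sin(rad)])
    # Each component points opposite its adapting motion (hence the minus signs).
    v = w_retinal * -unit(retinal_dir_deg) + w_extraretinal * -unit(pursuit_dir_deg)
    return float(np.rad2deg(np.arctan2(v[1], v[0])) % 360.0)
```

With equal weights, rightward retinal adaptation (0 deg) plus upward pursuit (90 deg) predicts an MAE at 225 deg; increasing the extra-retinal weight pulls the predicted direction toward 270 deg, which is the kind of weighting asymmetry the abstract reports for central adaptation.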
R. Contreras; Jamshid Ghajar; S. Bahar; M. Suh
In: Brain Research, vol. 1398, pp. 55–63, 2011.
In mild traumatic brain injury (mTBI), the fiber tracts that connect the frontal cortex with the cerebellum may suffer shear damage, leading to attention deficits and performance variability. This damage also disrupts the enhancement of eye-target synchronization that can be affected by cognitive load when subjects are tested using a concurrent eye-tracking test and word-recall test. We investigated the effect of cognitive load on eye-target synchronization in normal and mTBI patients using the nonlinear dynamical technique of stochastic phase synchronization. Results demonstrate that eye-target synchronization was negatively affected by cognitive load in mTBI subjects. In contrast, eye-target synchronization improved under intermediate cognitive load in young (≤ 40 years old) normal subjects.
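Stochastic phase synchronization of the kind used in this study is commonly quantified as a phase-locking value computed from Hilbert-transform phases. Below is a hedged, numpy-only sketch; the signal names, frequencies, and sampling rate are invented for illustration and are not the study's data or pipeline.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the standard Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # keep the Nyquist bin for even n
    return np.fft.ifft(X * h)

def phase_sync_index(x, y):
    """Phase-locking value: mean resultant length of the instantaneous phase
    difference (1 = perfect phase locking, near 0 = no consistent relation)."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.arange(0.0, 10.0, 0.01)               # 10 s at a nominal 100 Hz
target = np.sin(2 * np.pi * 1.0 * t)         # toy 'target' trace
eye = np.sin(2 * np.pi * 1.0 * t + 0.3)      # toy 'eye' trace, constant phase lag
```

A well-synchronized eye trace yields an index near 1 even with a constant phase lag, while a trace drifting at a different frequency yields an index near 0, which is how degraded eye-target synchronization would show up in such a measure.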
Jennifer E. Corbett; Marisa Carrasco
Visual performance fields: Frames of reference
In: PLoS ONE, vol. 6, no. 9, pp. e24470, 2011.
Performance in most visual discrimination tasks is better along the horizontal than the vertical meridian (Horizontal-Vertical Anisotropy, HVA), and along the lower than the upper vertical meridian (Vertical Meridian Asymmetry, VMA), with intermediate performance at intercardinal locations. As these inhomogeneities are prevalent throughout visual tasks, it is important to understand the perceptual consequences of dissociating spatial reference frames. In all studies of performance fields so far, allocentric environmental references and egocentric observer reference frames were aligned. Here we quantified the effects of manipulating head-centric and retinotopic coordinates on the shape of visual performance fields. When observers viewed briefly presented radial arrays of Gabors and discriminated the tilt of a target relative to homogeneously oriented distractors, performance fields shifted with head tilt (Experiment 1) and with fixation (Experiment 2). These results show that performance fields shift in line with egocentric referents, corresponding to the retinal location of the stimulus.
Julien Cotti; Gustavo Rohenkohl; Mark Stokes; Anna C. Nobre; Jennifer T. Coull
In: NeuroImage, vol. 54, no. 2, pp. 1221–1230, 2011.
To optimise speed and accuracy of motor behaviour, we can prepare not only the type of movement to be made but also the time at which it will be executed. Previous cued reaction-time paradigms have shown that anticipating the moment in time at which this response will be made ("temporal orienting") or selectively preparing the motor effector with which an imminent response will be made (motor intention or "motor orienting") recruits similar regions of left intraparietal sulcus (IPS), raising the possibility that these two preparatory processes are inextricably co-activated. We used a factorial design to independently cue motor and temporal components of response preparation within the same experimental paradigm. By differentially cueing either ocular or manual response systems, rather than spatially lateralised responses within just one of these systems, potential spatial confounds were removed. We demonstrated that temporal and motor orienting were behaviourally dissociable, each capable of improving performance alone. Crucially, fMRI data revealed that temporal orienting activated the left IPS even if the motor effector that would be used to execute the response was unpredictable. Moreover, temporal orienting activated left IPS whether the target required a saccadic or manual response, and whether this response was left- or right-sided, thus confirming the ubiquity of left IPS activation for temporal orienting. Finally, a small region of left IPS was also activated by motor orienting for manual, though not saccadic, responses. Despite their functional independence therefore, temporal orienting and manual motor orienting nevertheless engage partially overlapping regions of left IPS, possibly reflecting their shared ontogenetic roots.
Julien Cotti; Jean-Louis Vercher; Alain Guillaume
In: Behavioural Brain Research, vol. 218, no. 1, pp. 248–252, 2011.
Execution of a saccadic eye movement towards the goal of a hand pointing movement improves the accuracy of this hand movement. Still controversial is the role of extra-retinal signals, i.e. efference copy of the saccadic command and/or ocular proprioception, in the definition of the hand pointing target. We report here that hand pointing movements produced without visual feedback, with accompanying saccades and towards a target extinguished at saccade onset, were modified after gain change of reactive saccades through saccadic adaptation. As we have previously shown that the adaptation of reactive saccades does not influence the target representations that are common to the eye and the hand motor sub-systems (Cotti J, Guillaume A, Alahyane N, Pelisson D, Vercher JL. Adaptation of voluntary saccades, but not of reactive saccades. Transfers to hand pointing movements. J Neurophysiol 2007;98:602-12), the results of the present study demonstrate that extra-retinal signals participate in defining the target of hand pointing movements.
Reinier Cozijn; Edwin Commandeur; Wietske Vonk; Leo G. M. Noordman
In: Journal of Memory and Language, vol. 64, no. 4, pp. 381–403, 2011.
Several theoretical accounts have been proposed with respect to the issue of how quickly the implicit causality verb bias affects the understanding of sentences such as "John beat Pete at the tennis match, because he had played very well." They can be considered instances of two viewpoints: the focusing account and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task of resolving the ambiguous pronoun. To exclude strategic processing, in the second experiment, the task was to answer simple comprehension questions and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account.
Trevor J. Crawford; Elisabeth Parker; Ivonne Solis-Trapala; Jenny Mayes
In: Experimental Brain Research, vol. 208, no. 3, pp. 385–397, 2011.
The mechanisms that control eye movements in the antisaccade task are not fully understood. One influential theory claims that the generation of antisaccades is dependent on the capacity of working memory. Previous research also suggests that antisaccades are influenced by the relative processing speeds of the exogenous and endogenous saccadic pathways. However, the relationship between these factors is unclear, in particular whether or not the effect of the relative speed of the pro and antisaccade pathways is mediated by working memory. The present study contrasted the performance of healthy individuals with high and low working memory in the antisaccade and prosaccade tasks. Path analyses revealed that antisaccade errors were strongly predicted by the mean reaction times of prosaccades and that this relationship was not mediated by differences in working memory. These data suggest that antisaccade errors are directly related to the speed of saccadic programming. These findings are discussed in terms of a race competition model of antisaccade control.
Sarah C. Creel; Melanie A. Tumlin
In: Journal of Memory and Language, vol. 65, no. 3, pp. 264–285, 2011.
Recent work demonstrates that listeners utilize talker-specific information in the speech signal to inform real-time language processing. However, there are multiple representational levels at which this may take place. Listeners might use acoustic cues in the speech signal to access the talker's identity and information about what they tend to talk about, which then immediately constrains processing. Alternatively, or simultaneously, listeners might compare the signal to acoustically-detailed representations of words, without awareness of the talker's identity. In a series of eye-tracked comprehension experiments, we explore the circumstances under which listeners utilize talker-specific information. Experiments 1 and 2 demonstrate talker-specific recognition benefits for newly-learned words both in isolation (Experiment 1) and with preceding context (Experiment 2), but suggest that listeners do not strongly semantically associate talkers with referents. Experiment 3 demonstrates that listeners can recognize talkers rapidly, almost as soon as acoustic information is available, and can associate talkers with multiple arbitrary referents. Experiment 4 demonstrates that if talker identity is highly diagnostic on each trial, listeners readily associate talkers with specific referents, but do not seem to make such associations when diagnostic value is low. Implications for speech processing, talker processing, and learning are discussed. © 2011 Elsevier Inc.
Sebastian J. Crutch; Manja Lehmann; Nikos Gorgoraptis; Diego Kaski; Natalie Ryan; Masud Husain; Elizabeth K. Warrington
Abnormal visual phenomena in posterior cortical atrophy
In: Neurocase, vol. 17, no. 2, pp. 160–177, 2011.
Individuals with posterior cortical atrophy (PCA) report a host of unusual and poorly explained visual disturbances. This preliminary report describes a single patient (CRO), and documents and investigates abnormally prolonged colour afterimages (concurrent and prolonged perception of colours complementary to the colour of an observed stimulus), perceived motion of static stimuli, and better reading of small than large letters. We also evaluate CRO's visual and vestibular functions in an effort to understand the origin of her experience of room tilt illusion, a disturbing phenomenon not previously observed in individuals with cortical degenerative disease. These visual symptoms are set in the context of a 4-year longitudinal neuropsychological and neuroimaging investigation of CRO's visual and other cognitive skills. We hypothesise that prolonged colour afterimages are attributable to relative sparing of V1 inhibitory interneurons; perceived motion of static stimuli reflects weak magnocellular function; better reading of small than large letters indicates a reduced effective field of vision; and room tilt illusion effects are caused by disordered integration of visual and vestibular information. This study contributes to the growing characterisation of PCA, whose atypical early visual symptoms are often heterogeneous and frequently under-recognised.
Jie Cui; Jorge Otero-Millan; Stephen L. Macknik; Mac King; Susana Martinez-Conde
Social misdirection fails to enhance a magic illusion
In: Frontiers in Human Neuroscience, vol. 5, pp. 103, 2011.
Visual, multisensory and cognitive illusions in magic performances provide new windows into the psychological and neural principles of perception, attention, and cognition. We investigated a magic effect consisting of a coin “vanish” (i.e., the perceptual disappearance of a coin after a simulated toss from hand to hand). Previous research has shown that magicians can use joint attention cues such as their own gaze direction to strengthen the observers' perception of magic. Here we presented naïve observers with videos including real and simulated coin tosses to determine if joint attention might enhance the illusory perception of simulated coin tosses. The observers' eye positions were measured, and their perceptual responses simultaneously recorded via button press. To control for the magician's use of joint attention cues, we occluded his head in half of the trials. We found that subjects did not direct their gaze at the magician's face at the time of the coin toss, whether the face was visible or occluded, and that the presence of the magician's face did not enhance the illusion. Thus, our results show that joint attention is not necessary for the perception of this effect. We conclude that social misdirection is redundant and possibly detracting from this very robust sleight-of-hand illusion. We further determined that subjects required multiple trials to effectively distinguish real from simulated tosses; thus the illusion was resilient to repeated viewing.
Lisa Irmen; Eva Schumann
In: Journal of Cognitive Psychology, vol. 23, no. 8, pp. 998–1014, 2011.
Two eye-tracking experiments investigated the effects of masculine versus feminine grammatical gender on the processing of role nouns and on establishing coreference relations. Participants read sentences with the basic structure "My [kinship term] is a [role noun] [prepositional phrase]", such as "My brother is a singer in a band". Role nouns were either masculine or feminine. Kinship terms were lexically male or female and in this way specified referent gender, i.e., the sex of the person referred to. Experiment 1 tested a fully crossed design including items with an incorrect combination of lexically male kinship term and feminine role name. Experiment 2 tested only correct combinations of grammatical and lexical/referential gender to control for possible effects of the incorrect items of Experiment 1. In early stages of processing, feminine role nouns, but not masculine ones, were fixated longer when grammatical and referential gender were contradictory. In later stages of sentence wrap-up there were longer fixations for sentences with masculine than for those with feminine role nouns. Results of both experiments indicate that, for feminine role nouns, cues to referent gender are integrated immediately, whereas a late integration obtains for masculine forms.
David E. Irwin
Where does attention go when you blink?
In: Attention, Perception, and Psychophysics, vol. 73, no. 5, pp. 1374–1384, 2011.
Many studies have shown that covert visual attention precedes saccadic eye movements to locations in space. The present research investigated whether the allocation of attention is similarly affected by eye blinks. Subjects completed a partial-report task under blink and no-blink conditions. Experiment 1 showed that blinking facilitated report of the bottom row of the stimulus array: Accuracy for the bottom row increased and mislocation errors decreased under blink, as compared with no-blink, conditions, indicating that blinking influenced the allocation of visual attention. Experiment 2 showed that this was true even when subjects were biased to attend elsewhere. These results indicate that attention moves downward before a blink in an involuntary fashion. The eyes also move downward during blinks, so attention may precede blink-induced eye movements just as it precedes saccades and other types of eye movements.
L. Issen; Krystel R. Huxlin; David C. Knill
In: Journal of Vision, vol. 11, no. 6, pp. 1–16, 2011.
While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world.
In: Frontiers in Human Neuroscience, vol. 5, pp. 97, 2011.
The eye produces saccadic eye movements whose reaction times are perhaps the shortest in humans. Saccade latencies reflect ongoing cortical processing and, generally, shorter latencies are supposed to reflect advanced motor preparation. The dilation of the eye's pupil is reported to reflect cortical processing as well. Eight participants made saccades in a gap and overlap paradigm (in pure and mixed blocks), which we used in order to produce a variety of different saccade latencies. Saccades and pupil size were measured with the EyeLink II. The pattern in pupil dilation resembled that of a gap effect: for gap blocks, pupil dilations were larger compared to overlap blocks; mixing gap and overlap trials reduced the pupil dilation for gap trials thereby inducing a switching cost. Furthermore, saccade latencies across all tasks predicted the magnitude of pupil dilations post hoc: the longer the saccade latency the smaller the pupil dilation before the eye actually began to move. In accordance with observations for manual responses, we conclude that pupil dilations prior to saccade execution reflect advanced motor preparations and therefore provide valid indicator qualities for ongoing cortical processes.
Stephanie Jainta; Anne Dehnert; Sven P. Heinrich; Wolfgang Jaschinski
In: Investigative Ophthalmology & Visual Science, vol. 52, no. 13, pp. 9416–9424, 2011.
PURPOSE: Reading a text requires vergence angle adjustments, so that the images in the two eyes fall on corresponding retinal areas. Vergence adjustments bring the two retinal images into Panum's fusional area and therefore, small remaining errors or regulations do not lead to double vision. The present study evaluated dynamic and static aspects of the binocular coordination when upcoming text was blurred. METHODS: Binocular eye movements and accommodation responses were simultaneously measured for 20 participants while reading single, nonblurred sentences and while the text was blurred as if it were seen by a person in whom the combination of refraction and accommodation deviated from the stimulus plane by 0.5 D. RESULTS: Text comprehension did not change, even though fixation times increased for reading blurred sentences. The disconjugacy during saccades was also not affected by blurred text presentations, but the vergence adjustment during fixations was reduced. Further, for blurred text, the overall vergence angle shifted in the exo direction, and this shift correlated with the individual heterophoria. Accommodation measures showed that the lag of accommodation was slightly larger for reading blurred sentences and that the shift in vergence angle was larger when the individual lag of accommodation was also larger. CONCLUSIONS: The results suggest that reading comprehension is robust against changes in binocular coordination that result from moderate text degradation; nevertheless, these changes are likely to be linked to the development of fatigue and visual strain in near reading conditions.
Yosuke Kita; Atsuko Gunji; Yuki Inoue; Takaaki Goto; Kotoe Sakihara; Makiko Kaga; Masumi Inagaki; Toru Hosokawa
In: Brain and Development, vol. 33, no. 6, pp. 494–503, 2011.
It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness.
Steffen Klingenhoefer; F. Bremmer
In: Vision Research, vol. 51, pp. 881–889, 2011.
Saccades challenge visual perception since they induce large shifts of the image on the retina. Nevertheless, we perceive the outer world as being stable. The saccadic system can also rapidly adapt to changes in the environment (saccadic adaptation). In such a case, a dissociation is introduced between a driving visual signal (the original saccade target) and a motor output (the adapted saccade vector). The question arises how saccadic adaptation interferes with perceptual visual stability. In order to answer this question, we engaged human subjects in a saccade adaptation paradigm and interspersed trials in which the saccade target was displaced perisaccadically to a random position. In these trials subjects had to report on their perception of displacements of the saccade target. Subjects were tested in two conditions. In the 'blank' condition, the saccade target was briefly blanked after the end of the saccade. In the 'no-blank' condition the target was permanently visible. Confirming previous findings, the visual system was rather insensitive to displacements of the saccade target in an unadapted state, an effect termed saccadic suppression of displacement (SSD). In all adaptation conditions, we found spatial perception to correlate with the adaptive changes in saccade landing site. In contrast, small changes in saccade amplitude that occurred on a trial-by-trial basis did not correlate with perception. In the 'no-blank' condition we observed a prominent increase in suppression strength during backward adaptation. We discuss our findings in the context of existing theories on transsaccadic perceptual stability and its neural basis.
Lisa Kloft; Eva Kischkel; Norbert Kathmann; Benedikt Reuter
In: Psychophysiology, vol. 48, no. 6, pp. 755–761, 2011.
Obsessive-compulsive disorder (OCD) patients show deficits in tasks of executive functioning like the antisaccade (AS) task. These deficits suggest problems in response inhibition or volitional saccade generation. Thirty patients (15 nonmedicated) and 30 healthy subjects performed antisaccades and simple volitional saccades (SVS), that is, centrally cued saccades. In SVS, two aspects of volitional saccade generation were disentangled: response selection and initiation. Latencies of OCD patients were increased in volitional saccades independent of response selection demands. AS performance did not differ. Across groups, latencies in AS were faster than in SVS. Medicated patients did not differ from nonmedicated patients. In sum, response initiation is deficient in OCD patients, which may reflect a general problem in volitional action generation. This deficit did not affect antisaccade performance, possibly due to a lower volitional demand in that task.
Christian Kluge; Markus Bauer; Alexander P. Leff; Hans-Jochen Heinze; Raymond J. Dolan; Jon Driver
In: Proceedings of the National Academy of Sciences, vol. 108, no. 30, pp. 12545–12550, 2011.
We used magnetoencephalography (MEG) to assess plasticity of human auditory cortex induced by classical conditioning and contingency reversal. Participants listened to random sequences of high or low tones. A first baseline phase presented these without further associations. In phase 2, one of the frequencies (CS(+)) was paired with shock on half its occurrences, whereas the other frequency (CS(-)) was not. In phase 3, the contingency assigning CS(+) and CS(-) was reversed. Conditioned pupil dilation was observed in phase 2 but extinguished in phase 3. MEG revealed that, during phase-2 initial conditioning, the P1m, N1m, and P2m auditory components, measured from sensors over auditory temporal cortex, came to distinguish between CS(+) and CS(-). After contingency reversal in phase 3, the later P2m component rapidly reversed its selectivity (unlike the pupil response) but the earlier P1m did not, whereas N1m showed some new learning but not reversal. These results confirm plasticity of human auditory responses due to classical conditioning, but go further in revealing distinct constraints on different levels of the auditory hierarchy. The later P2m component can reverse affiliation immediately in accord with an updated expectancy after contingency reversal, whereas the earlier auditory components cannot. These findings indicate distinct cognitive and emotional influences on auditory processing.
Jonas Knöll; Paola Binda; M. Concetta Morrone; Frank Bremmer
Spatiotemporal profile of peri-saccadic contrast sensitivity
In: Journal of Vision, vol. 11, no. 14, pp. 1–12, 2011.
Sensitivity to luminance contrast is reduced just before and during saccades (saccadic suppression), whereas sensitivity to color contrast is unimpaired peri-saccadically and enhanced post-saccadically. The exact spatiotemporal map of these perceptual effects is as yet unknown. Here, we measured detection thresholds for briefly flashed Gaussian blobs modulated in either luminance or chromatic contrast, displayed at a range of eccentricities. Sensitivity to luminance contrast was reduced peri-saccadically by a scaling factor, which was almost constant across retinal space. Saccadic suppression followed a similar time course across all tested eccentricities and was maximal shortly after the saccade onset. Sensitivity to chromatic contrast was enhanced post-saccadically at all tested locations. The enhancement was not specifically linked to the execution of saccades, as it was also observed following a displacement of retinal images comparable to that caused by a saccade. We conclude that luminance and chromatic contrast sensitivities are subject to distinct modulations at the time of saccades, resulting from independent neural processes.
Stephan Koenig; Harald Lachnit
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1164–1177, 2011.
We report how the trajectories of saccadic eye movements are affected by memory interference acquired during associative learning. Human participants learned to perform saccadic choice responses based on the presentation of arbitrary central cues A, B, AC, BC, AX, BY, X, and Y that were trained to predict the appearance of a peripheral target stimulus at 1 of 3 possible locations, right (R), mid (M), or left (L), in the upper hemifield. We analyzed as measures of associative learning the frequency, latency, and curvature of saccades elicited by the cues and directed at the trained locations in anticipation of the targets. Participants were trained on two concurrent discrimination problems A+R, AC+R, AX+M, X+M and B+L, BC+L, BY+M, Y+M. From a connectionist perspective, cues were predicted to acquire associative links connecting the cues to the trained outcomes in memory. Model simulations based on the learning rule of the Rescorla and Wagner (1972) model revealed that for some cues, the prediction of the correct target location was challenged by the interfering prediction of an incorrect location. We observed that saccades directed at the correct location in anticipation of the target curved away from the location that was predicted by the interfering association. Furthermore, changes in curvature during training corresponded to predicted changes in associative memory. We propose that this curvature was caused by the inhibition of the incorrect prediction, as previously has been suggested with the concept of distractor inhibition (Sheliga, Riggio, & Rizzolatti, 1994; Tipper, Howard, & Houghton, 2000). The paradigm provides a new method to examine memory interference during associative learning.
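The model simulations described above rest on the Rescorla-Wagner delta rule. The sketch below applies that rule to one of the paper's two concurrent discrimination problems (A+R, AC+R, AX+M, X+M); the learning rate and trial counts are illustrative assumptions, not the study's simulation parameters.

```python
import random

def rw_update(V, cues, outcome, outcomes, alpha=0.1, lam=1.0):
    """One Rescorla-Wagner trial: for each possible outcome, the summed
    strength of the cues present is compared with the observed target
    (lam for the outcome that occurred, 0 otherwise), and every present
    cue is adjusted by alpha times the prediction error."""
    for o in outcomes:
        target = lam if o == outcome else 0.0
        error = target - sum(V[(c, o)] for c in cues)
        for c in cues:
            V[(c, o)] += alpha * error

outcomes = ("R", "M", "L")
V = {(c, o): 0.0 for c in "ACX" for o in outcomes}
# One of the paper's concurrent problems: A+R, AC+R, AX+M, X+M.
training = [(("A",), "R"), (("A", "C"), "R"), (("A", "X"), "M"), (("X",), "M")]

random.seed(1)
for _ in range(2000):
    cues, out = random.choice(training)
    rw_update(V, cues, out, outcomes)

# Cue A ends up strongly associated with R, so on AX trials the correct
# prediction (M) competes with an interfering prediction of R -- the kind
# of conflict the authors relate to saccade trajectory curvature.
```

Inspecting V after training shows exactly the interference structure the abstract describes: A carries substantial strength toward R, so the compound AX predicts R alongside the trained outcome M.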
Anke Huckauf; Mario H. Urbina
In: ACM Transactions on Applied Perception, vol. 8, no. 2, pp. 1–14, 2011.
Controlling computers using eye movements can provide a fast and efficient alternative to the computer mouse. However, implementing object selection in gaze-controlled systems is still a challenge. Dwell times or fixations on an object, typically used to elicit the selection of that object, have several disadvantages. We studied deviations from critical thresholds using an individual, task-specific adaptation method, which demonstrated an enormous variability of optimal dwell times. We also developed an alternative approach using antisaccades for selection. For selection by antisaccades, a highlighted object is copied to one of its sides; the object is selected by fixating the side opposite that copy, which requires inhibiting an automatic gaze shift toward the new object. Both techniques were compared in a selection task. Two experiments revealed superior performance, in terms of errors, for the individually adapted dwell times. Antisaccades provide an alternative approach to dwell-time selection, but they did not show an improvement over dwell times. We discuss potential improvements in the antisaccade implementation with which antisaccades might become a serious alternative to dwell times for object selection in gaze-controlled systems.
Vyv C. Huddy; Timothy L. Hodgson; M. A. Ron; Thomas R. E. Barnes; Eileen M. Joyce
In: Psychological Medicine, vol. 41, no. 9, pp. 1805–1814, 2011.
Background. Previous studies have shown that patients with schizophrenia are impaired on executive tasks, where positive and negative feedback is used to update task rules or switch attention. However, research to date using saccadic tasks has not revealed clear deficits in task switching in these patients. The present study used an oculomotor 'rule switching' task to investigate the use of negative feedback when switching between task rules in people with schizophrenia. Method. A total of 50 patients with first episode schizophrenia and 25 healthy controls performed a task in which the association between a centrally presented visual cue and the direction of a saccade could change from trial to trial. Rule changes were heralded by unexpected negative feedback, indicating that the cue-response mapping had reversed. Results. Schizophrenia patients were found to make increased errors following a rule switch, but these were almost entirely the result of executing saccades away from the location at which the negative feedback had been presented on the preceding trial. This impairment in negative feedback processing was independent of IQ. Conclusions. The results not only confirm the existence of a basic deficit in stimulus–response rule switching in schizophrenia, but also suggest that this arises from aberrant processing of response outcomes, resulting in a failure to appropriately update rules. The findings are discussed in the context of neurological and pharmacological abnormalities in the condition that may disrupt prediction error signalling in schizophrenia.
Lynn Huestegge; Jos J. Adam
In: Attention, Perception, and Psychophysics, vol. 73, no. 3, pp. 702–707, 2011.
Preparation provided by visual location cues is known to speed up behavior. However, the role of concurrent saccades in response to visual cues remains unclear. In this study, participants performed a spatial precueing task by pressing one of four response keys with one of four fingers (two of each hand) while eye movements were monitored. Prior to the stimulus, we presented a neutral cue (baseline), a hand cue (corresponding to left vs. right positions), or a finger cue (corresponding to inner vs. outer positions). Participants either remained fixated on a central fixation point or moved their eyes freely. The results demonstrated that saccades during the cueing interval altered the pattern of cueing effects. Finger cueing trials in which saccades were spatially incompatible (vs. compatible) with the subsequently required manual response exhibited slower manual RTs. We propose that interference between saccades and manual responses affects manual motor preparation.
Lynn Huestegge; Andrea M. Philipp
In: Attention, Perception, and Psychophysics, vol. 73, no. 6, pp. 1903–1915, 2011.
A precondition for efficiently understanding and memorizing graphs is the integration of all relevant graph elements and their meaning. In the present study, we analyzed integration processes by manipulating the spatial compatibility between elements in the data region and the legend. In Experiment 1, participants judged whether bar graphs depicting either statistical main effects or interactions correspond to previously presented statements. In Experiments 2 and 3, the same was tested with line graphs of varying complexity. In Experiment 4, participants memorized line graphs for a subsequent validation task. Throughout the experiments, eye movements were recorded. The results indicated that data-legend compatibility reduced the time needed to understand graphs, as well as the time needed to retrieve relevant graph information from memory. These advantages went hand in hand with a decrease of gaze transitions between the data region and the legend, indicating that data-legend compatibility decreases the difficulty of integration processes.
Falk Huettig; Gerry T. M. Altmann
In: Quarterly Journal of Experimental Psychology, vol. 64, no. 1, pp. 122–145, 2011.
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
Falk Huettig; James M. Mcqueen
In: Memory and Cognition, vol. 39, no. 6, pp. 1068–1084, 2011.
Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search.
Amelia R. Hunt; P. Cavanagh
Remapped visual masking Journal Article
In: Journal of Vision, vol. 11, no. 1, pp. 1–8, 2011.
Cells in saccade control areas respond if a saccade is about to bring a target into their receptive fields (J. R. Duhamel, C. L. Colby, & M. E. Goldberg, 1992). This remapping process should shift the retinal location from which attention selects target information (P. Cavanagh, A. R. Hunt, S. R. Afraz, & M. Rolfs, 2010). We examined this attention shift in a masking experiment where target and mask were presented just before an eye movement. In a control condition with no eye movement, masks interfered with target identification only when they spatially overlapped. Just before a saccade, however, a mask overlapping the target had less effect, whereas a mask placed in the target's remapped location was quite effective. The remapped location is the retinal position the target will have after the upcoming saccade, which corresponds to neither the retinotopic nor spatiotopic location of the target before the saccade. Both effects are consistent with a pre-saccadic shift in the location from which attention selects target information. In the case of retinally aligned target and mask, the shift of attention away from the target location reduces masking, but when the mask appears at the target's remapped location, attention's shift to that location brings in mask information that interferes with target identification.
Marc Hurwitz; Derick Valadao; James Danckert
Static versus dynamic judgments of spatial extent Journal Article
In: Experimental Brain Research, vol. 209, no. 2, pp. 271–286, 2011.
Research exploring how scanning affects judgments of spatial extent has produced conflicting results. We conducted four experiments on line bisection judgments measuring ocular and pointing behavior, with line length, position, speed, acceleration, and direction of scanning manipulated. Ocular and pointing judgments produced distinct patterns. For static judgments (i.e., no scanning), the eyes were sensitive to position and line length with pointing much less sensitive to these factors. For dynamic judgments (i.e., scanning the line), bisection biases were influenced by the speed of scanning but not acceleration, while both ocular and pointing results varied with scan direction. We suggest that static and dynamic probes of spatial judgments are different. Furthermore, the substantial differences seen between static and dynamic bisection suggest the two invoke different neural processes for computing spatial extent for ocular and pointing judgments.
Samuel B. Hutton; S. Nolte
The effect of gaze cues on attention to print advertisements Journal Article
In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 887–892, 2011.
Print advertisements often employ images of humans whose gaze may be focussed on an object or region within the advertisement. Gaze cues are powerful factors in determining the focus of our attention, but there have been no systematic studies exploring the impact of gaze cues on attention to print advertisements. We tracked participants' eyes whilst they read an on-screen magazine containing advertisements in which the model either looked at the product being advertised or towards the viewer. When the model's gaze was directed at the product, participants spent longer looking at the product, the brand logo and the rest of the advertisement compared to when the model's gaze was directed towards the viewer. These results demonstrate that the focus of readers' attention can be readily manipulated by gaze cues provided by models in advertisements, and that these influences go beyond simply drawing attention to the cued area of the advertisement.
Alex D. Hwang; Hsueh-Cheng Wang; Marc Pomplun
Semantic guidance of eye movements in real-world scenes Journal Article
In: Vision Research, vol. 51, no. 10, pp. 1192–1205, 2011.
The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control.
Jukka Hyönä; Raymond Bertram
Optimal viewing position effects in reading Finnish Journal Article
In: Vision Research, vol. 51, no. 11, pp. 1279–1287, 2011.
The present study examined effects of the initial landing position in words on eye behavior during reading of long and short Finnish compound words. The study replicated optimal viewing position (OVP) and inverted optimal viewing position (IOVP) effects previously found in French, German and English - languages structurally distinct from Finnish - suggesting that the effects generalize across structurally different alphabetic languages. The results are consistent with the view that the landing position effects appear at the prelexical stage of word processing, as landing position effects were not modulated by word frequency. Moreover, the OVP effects are in line with a visuomotor explanation making recourse to visual acuity constraints.
Albrecht W. Inhoff; Matthew S. Solomon; Ralph Radach; Bradley A. Seymour
In: Journal of Cognitive Psychology, vol. 23, no. 5, pp. 543–558, 2011.
The distance between eye movements and articulation during oral reading, commonly referred to as the eye–voice span, has been a classic issue of experimental reading research since Buswell (1921). To examine the influence of the span on eye movement control, synchronised recordings of eye position and speech production were obtained during fluent oral reading. The viewing of a word almost always preceded its articulation, and the interval between the onset of a word's fixation and the onset of its articulation was approximately 500 ms. The identification and articulation of a word were closely coupled, and the fixation–speech interval was regulated through immediate adjustments of word viewing duration, unless the interval was relatively long. In this case, the lag between identification and articulation was often reduced through a regression that moved the eyes back in the text. These results indicate that models of eye movement control during oral reading need to include a mechanism that maintains a close linkage between the identification and articulation of words through continuous oculomotor adjustments.
In: Language and Cognitive Processes, vol. 26, no. 10, pp. 1625–1666, 2011.
We report two visual-world eye-tracking experiments that investigated the effects of subjecthood, pronominalisation, and contrastive focus on the interpretation of pronouns in subsequent discourse. By probing the effects of these factors on real-time pronoun interpretation, we aim to contribute to our understanding of how topicality-related factors (subjecthood, givenness) interact with contrastive focus effects, and to investigate whether the seemingly mixed results obtained in prior work on topicality and focusing could be related to effects of subjecthood. Our results indicate that structural and semantic prominence (specifically, agentive subjects) influence pronoun interpretation even when separated from information-structural notions, and thus need to be taken into account when investigating topicality and focusing. We discuss how our results allow us to reconcile the distinct findings of prior studies. More generally, this research contributes to our understanding of how the language comprehension system integrates different kinds of information during real-time reference resolution.
Joke P. Kalisvaart; Sumientra M. Rampersad; Jeroen Goossens
In: PLoS ONE, vol. 6, no. 6, pp. e20017, 2011.
Recent studies suggest that binocular rivalry at stimulus onset, so-called onset rivalry, differs from rivalry during sustained viewing. These observations raise the interesting question whether there is a relation between onset rivalry and rivalry in the presence of eye movements. We therefore studied binocular rivalry when stimuli jumped from one visual hemifield to the other, either through a saccade or through a passive stimulus displacement, and we compared rivalry after such displacements with onset and sustained rivalry. We presented opponent motion, orthogonal gratings and face/house stimuli through a stereoscope. For all three stimulus types we found that subjects showed a strong preference for stimuli in one eye or one hemifield (Experiment 1), and that these subject-specific biases did not persist during sustained viewing (Experiment 2). These results confirm and extend previous findings obtained with gratings. The results from the main experiment (Experiment 3) showed that after a passive stimulus jump, switching probability was low when the preferred eye was dominant before a stimulus jump, but when the non-preferred eye was dominant beforehand, switching probability was comparatively high. The results thus showed that dominance after a stimulus jump was tightly related to eye dominance at stimulus onset. In the saccade condition, however, these subject-specific biases were systematically reduced, indicating that the influence of saccades can be understood from a systematic attenuation of the subjects' onset rivalry biases. Taken together, our findings demonstrate a relation between onset rivalry and rivalry after retinal shifts and involvement of extra-retinal signals in binocular rivalry.
Sakari Kallio; Jukka Hyönä; Antti Revonsuo; Pilleriin Sikka; Lauri Nummenmaa
The existence of a hypnotic state revealed by eye movements Journal Article
In: PLoS ONE, vol. 6, no. 10, pp. e26374, 2011.
Hypnosis has had a long and controversial history in psychology, psychiatry and neurology, but the basic nature of hypnotic phenomena still remains unclear. Different theoretical approaches disagree as to whether or not hypnosis may involve an altered mental state. So far, a hypnotic state has never been convincingly demonstrated, if the criteria for the state are that it involves some objectively measurable and replicable behavioural or physiological phenomena that cannot be faked or simulated by non-hypnotized control subjects. We present a detailed case study of a highly hypnotizable subject who reliably shows a range of changes in both automatic and volitional eye movements when given a hypnotic induction. These changes correspond well with the phenomenon referred to as the "trance stare" in the hypnosis literature. Our results show that this 'trance stare' is associated with large and objective changes in the optokinetic reflex, the pupillary reflex and programming a saccade to a single target. Control subjects could not imitate these changes voluntarily. For the majority of people, hypnotic induction brings about states resembling normal focused attention or mental imagery. Our data nevertheless highlight that in some cases hypnosis may involve a special state, which qualitatively differs from the normal state of consciousness.
Sunjeev K. Kamboj; Rachel Massey-Chase; Lydia Rodney; Ravi K. Das; Basil Almahdi; H. Valerie Curran; Celia J. A. Morgan
In: Psychopharmacology, vol. 217, no. 1, pp. 25–37, 2011.
Rationale: The effects of D-cycloserine (DCS) in animal models of anxiety disorders and addiction indicate a role for N-methyl D-aspartate (NMDA) receptors in extinction learning. Exposure/response prevention treatments for anxiety disorders in humans are enhanced by DCS, suggesting a promising co-therapy regime, mediated by NMDA receptors. Exposure/response prevention may also be effective in problematic drinkers, and DCS might enhance habituation to cues in these individuals. Since heavy drinkers show ostensible conditioned responses to alcohol cues, habituation following exposure/response prevention should be evident in these drinkers, with DCS enhancing this effect. Objectives: The objective of this study is to investigate the effect of DCS on exposure/response prevention in heavy drinkers. Methods: In a randomised, double-blind, placebo-controlled study, heavy social drinkers recruited from the community received either DCS (125 mg; n=19) or placebo (n=17) 1 h prior to each of two sessions of exposure/response prevention. Cue reactivity and attentional bias were assessed during these two sessions and at a third follow-up session. Between-session drinking behaviour was recorded. Results: Robust cue reactivity and attentional bias to alcohol cues were evident, as expected of heavy drinkers. Within- and between-session habituation of cue reactivity, as well as a reduction in attentional bias to alcohol cues over time, was found. However, there was no evidence of greater habituation in the DCS group. Subtle stimulant effects (increased subjective contentedness and euphoria) which were unrelated to exposure/response prevention were found following DCS. Conclusions: DCS does not appear to enhance habituation of alcohol cue reactivity in heavy non-dependent drinkers. Its utility in enhancing treatments based on exposure/response prevention in dependent drinkers or drug users remains open.
Zoï Kapoula; Qing Yang; Norman Sabbah; Marine Vernet
In: Frontiers in Human Neuroscience, vol. 5, pp. 114, 2011.
Gap and overlap tasks are widely used to promote automatic versus controlled saccades. This study examines the hypothesis that the right posterior parietal cortex (PPC) is differently involved in the two tasks. Twelve healthy students participated in the experiment. We used double-pulse transcranial magnetic stimulation (dTMS) on the right PPC, the first pulse delivered at target onset and the second 65 or 80 ms later. Each subject performed several blocks of the gap or overlap task with or without dTMS. Eye movements were recorded with an EyeLink device. The results show an increase in saccade latency after dTMS of the right PPC for both tasks but for different time windows (0-80 ms for the gap task, 0-65 ms for the overlap task). Moreover, for rightward saccades the coefficient of variation of latency increased in the gap task but decreased in the overlap task. Finally, in the gap task and for leftward saccades only, dTMS at 0-80 ms decreased the amplitude and the speed of saccades. Although the study is preliminary and needs further detailed investigation, the results support the hypothesis that the right PPC is involved differently in the initiation of saccades for the two tasks: in the gap task the PPC controls saccade triggering, while in the overlap task it could be a relay to the frontal eye fields, which are known to control voluntary saccades, e.g., memory-guided and perhaps the controlled saccades in the overlap task. The results have theoretical and clinical significance, as gap-overlap tasks are easy to perform even in advanced age and in patients with neurodegenerative diseases.
Kai Kaspar; Peter König
In: Journal of Vision, vol. 11, no. 13, pp. 1–29, 2011.
Studies on bottom-up mechanisms in human overt attention support the significance of basic image features for fixation behavior on visual scenes. In this context, a decisive question has been neglected so far: How stable is the impact of basic image features on overt attention across repeated image observation? To answer this question, two eye-tracking studies were conducted in which 79 subjects were repeatedly exposed to several types of visual scenes differing in gist and complexity. Upon repeated presentations, viewing behavior changed significantly. Subjects neither performed independent scanning eye movements nor scrutinized complementary image regions but tended to view largely overlapping image regions, but this overlap significantly decreased over time. Importantly, subjects did not uncouple their scanning pathways substantially from basic image features. In contrast, the effect of image type on feature–fixation correlations was much bigger than the effect of memory-mediated scene familiarity. Moreover, feature–fixation correlations were moderated by actual saccade length, and this phenomenon remained constant across repeated viewings. We also demonstrated that this saccade length effect was not an exclusive within-subject phenomenon. We conclude that the present results bridge a substantial gap in attention research and are important for future research and modeling processes of human overt attention. Additionally, we advise considering interindividual differences in viewing behavior.
Kai Kaspar; Peter König
In: PLoS ONE, vol. 6, no. 7, pp. e21719, 2011.
The present study investigated the dynamics of the attention focus during observation of different categories of complex scenes, with simultaneous consideration of individuals' memory and motivational state. We repeatedly presented four types of complex visual scenes in a pseudo-randomized order and recorded eye movements. Subjects were divided into groups according to their motivational disposition in terms of action orientation and individual rating of scene interest. Statistical analysis of eye-tracking data revealed that the attention focus successively became locally expressed by increasing fixation duration; decreasing saccade length, saccade frequency, and single subject's fixation distribution over images; and increasing inter-subject variance of fixation distributions. The validity of these results was supported by verbal reports. This general tendency was weaker for the group of subjects who rated the image set as interesting as compared to the other group. Additionally, effects were partly mediated by subjects' motivational disposition. Finally, we found a generally strong impact of image type on eye movement parameters. We conclude that motivational tendencies linked to personality as well as individual preferences significantly affected viewing behaviour. Hence, it is important and fruitful to consider inter-individual differences on the level of motivation and personality traits within investigations of attention processes. We demonstrate that future studies on memory's impact on overt attention have to deal appropriately with several aspects that had been out of the research focus until now.
Gert Kootstra; Bart de Boer; Lambert R. B. Schomaker
In: Cognitive Computation, vol. 3, pp. 223–240, 2011.
Most bottom-up models that predict human eye fixations are based on contrast features. The saliency model of Itti, Koch and Niebur is an example of such contrast-saliency models. Although the model has been successfully compared to human eye fixations, we show that it lacks preciseness in the prediction of fixations on mirror-symmetrical forms. The contrast model gives high response at the borders, whereas human observers consistently look at the symmetrical center of these forms. We propose a saliency model that predicts eye fixations using local mirror symmetry. To test the model, we performed an eye-tracking experiment with participants viewing complex photographic images and compared the data with our symmetry model and the contrast model. The results show that our symmetry model predicts human eye fixations significantly better on a wide variety of images including many that are not selected for their symmetrical content. Moreover, our results show that especially early fixations are on highly symmetrical areas of the images. We conclude that symmetry is a strong predictor of human eye fixations and that it can be used as a predictor of the order of fixation.
In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 893–905, 2011.
Hierarchical graphs (e.g. file system browsers, family trees) represent objects (e.g. files, folders) as graph nodes, and relations (subfolder relations) between them as lines. In three experiments, participants viewed such graphs and carried out tasks that either required search for two target nodes (Experiment 1A), reasoning about their relation (Experiment 1B), or both (Experiment 2). We recorded eye movements and used the number of fixations in different phases to identify distinct stages of comprehension. Search in graphs proceeded like search in standard visual search tasks and was mostly unaffected by graph properties. Reasoning typically occurred in a separate stage at the end of comprehension and was affected by intersecting graph lines. The alignment of nodes, together with linguistic factors, may also affect comprehension. Overall, there was good evidence to suggest that participants read graphs in a sequential manner, and that this is an economical approach to comprehension.
Sid Kouider; Vincent Berthet; Nathan Faivre
Preference is biased by crowded facial expressions Journal Article
In: Psychological Science, vol. 22, no. 2, pp. 184–189, 2011.
Crowding occurs when nearby flankers impede the identification of a peripheral stimulus. Here, we studied whether crowded features containing inaccessible emotional information can nevertheless affect preference judgments. We relied on gaze-contingent crowding, a novel method allowing for constant perceptual unawareness through eye-tracking control, and we found that crowded facial expressions can bias evaluative judgments of neutral pictographs. Furthermore, this emotional bias was effective not only for static images of faces, but also for videos displaying dynamic facial expressions. In addition to showing an alternative approach for probing nonconscious cognition, this study reveals that crowded information, instead of being fully suppressed, can have important influences on decisions.
Michael J. Koval; Stephen G. Lomber; Stefan Everling
In: Journal of Neuroscience, vol. 31, no. 23, pp. 8659–8668, 2011.
The cognitive control of action requires both the suppression of automatic responses to sudden stimuli and the generation of behavior specified by abstract instructions. Though patient, functional imaging and neurophysiological studies have implicated the dorsolateral prefrontal cortex (dlPFC) in these abilities, the mechanism by which the dlPFC exerts this control remains unknown. Here we examined the functional interaction of the dlPFC with the saccade circuitry by deactivating area 46 of the dlPFC and measuring its effects on the activity of single superior colliculus neurons in monkeys performing a cognitive saccade task. Deactivation of the dlPFC reduced preparatory activity and increased stimulus-related activity in these neurons. These changes in neural activity were accompanied by marked decreases in task performance as evidenced by longer reaction times and more task errors. The results suggest that the dlPFC participates in the cognitive control of gaze by suppressing stimulus-evoked automatic saccade programs.
Lianne C. Krab; Arja Goede-Bolder; Femke K. Aarsen; Henriëtte A. Moll; Chris I. De Zeeuw; Ype Elgersma; Josef N. Geest
Motor learning in children with Neurofibromatosis Type I Journal Article
In: Cerebellum, vol. 10, no. 1, pp. 14–21, 2011.
The aim of this study was to quantify the frequently observed problems in motor control in Neurofibromatosis type 1 (NF1) using three tasks on motor performance and motor learning. A group of 70 children with NF1 was compared to age-matched controls. As expected, NF1 children showed substantial problems in visuo-motor integration (Beery VMI). Prism-induced hand movement adaptation seemed to be mildly affected. However, no significant impairments in the accuracy of simple eye or hand movements were observed. Also, saccadic eye movement adaptation, a cerebellum dependent task, appeared normal. These results suggest that the motor problems of children with NF1 in daily life are unlikely to originate solely from impairments in motor learning. Our findings, therefore, do not support a general dysfunction of the cerebellum in children with NF1.
Stefanie E. Kuchinsky; Kathryn Bock; David E. Irwin
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 3, pp. 748–756, 2011.
To describe a scene, speakers must map visual information to a linguistic plan. Eye movements capture features of this linkage in a tendency for speakers to fixate referents just before they are mentioned. The current experiment examined whether and how this pattern changes when speakers create atypical mappings. Eye movements were monitored as participants told the time from analog clocks. Half of the participants did this in the usual manner. For the other participants, the denotations of the clock hands were reversed, making the big hand the hour and the little hand the minute. Eye movements revealed that it was not the visual features or configuration of the hands that determined gaze patterns, but rather top-down control from upcoming referring expressions. Differences in eye-voice spans further suggested a process in which scene elements are relationally structured before a linguistic plan is executed. This provides evidence for structural rather than lexical incrementality in planning and supports a "seeing-for-saying" hypothesis in which the visual system is harnessed to the linguistic demands of an upcoming utterance.
Gustav Kuhn; Lauren Tewson; Lea Morpurgo; Susannah F. Freebody; Anna S. Musil; Susan R. Leekam
In: Quarterly Journal of Experimental Psychology, vol. 64, no. 10, pp. 1919–1929, 2011.
We investigated developmental differences in oculomotor control between 10-year-old children and adults using a central interference task. In this task, the colour of a fixation point instructed participants to saccade either to the left or to the right. These saccade directions were either congruent or incongruent with two types of distractor cue: either the direction of eye gaze of a centrally presented schematic face, or the direction of arrows. Children had greater difficulties inhibiting the distractor cues than did adults, which revealed itself in longer saccade latencies for saccades that were incongruent with the distractor cues as well as more errors on these incongruent trials than on congruent trials. Counter to our prediction, in terms of saccade latencies, both children and adults had greater difficulties inhibiting the arrow than the eye gaze distractors.
Gustav Kuhn; Jason Tipples
In: Psychonomic Bulletin & Review, vol. 18, no. 1, pp. 89–95, 2011.
An oculomotor visual search task was used to investigate how participants follow the gaze of a nonpredictive, task-irrelevant distractor face, and how this gaze following is influenced by the emotional expression (fearful vs. happy) as well as by participants' goal. Previous research has suggested that fearful expressions should result in stronger cueing effects than happy faces. Our results demonstrated that the degree to which the emotional expression influenced this gaze following varied as a function of the search target. When searching for a threatening target, participants were more likely to look in the direction of eye gaze on a fearful compared to a happy face. However, when searching for a pleasant target, this stronger cueing effect for fearful faces disappeared. Therefore, gaze following is influenced by contextual factors such as the emotional expression, as well as by the participant's goal.
Marcus L. Johnson; Matthew W. Lowder; Peter C. Gordon
In: Journal of Experimental Psychology: General, vol. 140, no. 4, pp. 707–724, 2011.
In 2 experiments, the authors used an eye tracking while reading methodology to examine how different configurations of common noun phrases versus unusual noun phrases (NPs) influenced the difference in processing difficulty between sentences containing object- and subject-extracted relative clauses. Results showed that processing difficulty was reduced when the head NP was unusual relative to the embedded NP, as manipulated by lexical frequency. When both NPs were common or both were unusual, results showed strong effects of both commonness and sentence structure, but no interaction. In contrast, when 1 NP was common and the other was unusual, results showed the critical interaction. These results provide evidence for a sentence-composition effect analogous to the list-composition effect that has been well documented in memory research, in which the pattern of recall for common versus unusual items is different, depending on whether items are studied in a pure or mixed list context. This work represents an important step in integrating the list-memory and sentence-processing literatures and provides additional support for the usefulness of studying complex sentence processing from the perspective of memory-based models.
Jacob Jolij; H. Steven Scholte; Simon Gaal; Timothy L. Hodgson; Victor A. F. Lamme
In: Journal of Cognitive Neuroscience, vol. 23, no. 12, pp. 3734–3745, 2011.
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Clare N. Jonas; Alisdair J. G. Taylor; Samuel B. Hutton; Peter H. Weiss; Jamie Ward
In: Journal of Neuropsychology, vol. 5, no. 2, pp. 302–322, 2011.
Visuo-spatial representations of the alphabet (so-called 'alphabet forms') may be as common as other types of sequence–space synaesthesia, but little is known about them or the way they relate to implicit spatial associations in the general population. In the first study, we describe the characteristics of a large sample of alphabet forms visualized by synaesthetes. They most often run from left to right and have salient features (e.g., bends, breaks) at particular points in the sequence that correspond to chunks in the 'Alphabet Song' and at the alphabet mid-point. The Alphabet Song chunking suggests that the visuo-spatial characteristics are derived, at least in part, from those of the verbal sequence learned earlier in life. However, these synaesthetes are no faster at locating points in the sequence (e.g., what comes before/after letter X?) than controls. They tend to be more spatially consistent (measured by eye tracking) and letters can act as attentional cues to left/right space in synaesthetes with alphabet forms (measured by saccades), but not in non-synaesthetes. This attentional cueing suggests dissociation between numbers (which reliably act as attentional cues in synaesthetes and non-synaesthetes) and letters (which act as attentional cues in synaesthetes only).
Donatas Jonikaitis; Heiner Deubel
In: Psychological Science, vol. 22, no. 3, pp. 339–347, 2011.
When reaching for objects, people frequently look where they reach. This raises the question of whether the targets for the eye and hand in concurrent eye and hand movements are selected by a unitary attentional system or by independent mechanisms. We used the deployment of visual attention as an index of the selection of movement targets and asked observers to reach and look to either the same location or separate locations. Results show that during the preparation of coordinated movements, attention is allocated in parallel to the targets of a saccade and a reaching movement. Attentional allocations for the two movements interact synergistically when both are directed to a common goal. Delaying the eye movement delays the attentional shift to the saccade target while leaving attentional deployment to the reach target unaffected. Our findings demonstrate that attentional resources are allocated independently to the targets of eye and hand movements and suggest that the goals for these effectors are selected by separate attentional mechanisms.
Barbara J. Juhasz; Rachel N. Berkowitz
In: Language and Cognitive Processes, vol. 26, no. 4-6, pp. 653–682, 2011.
Three experiments examined the influence of first lexeme morphological family size on English compound word recognition. Concatenated compound words whose first lexemes were from large morphological families were responded to faster in word naming and lexical decision than compounds from small morphological families. In addition, an eye movement experiment showed that gaze durations were shorter on compounds from large morphological families during sentence reading. This was mainly due to more refixations on compounds from small morphological families. Post hoc analyses and re-analysis of past studies suggested that compounds with a larger number of higher frequency family members (HFFM) are read more slowly than compounds with fewer HFFM. Thus, while morphological family size is generally facilitative, the presence of HFFM has an inhibitory effect on eye movement behaviour. The time-course of these effects is discussed.
Barbara J. Juhasz; Margaret M. Gullick; Leah W. Shesler
In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–14, 2011.
Words that are rated as acquired earlier in life receive shorter fixation durations than later acquired words, even when word frequency is adequately controlled (Juhasz & Rayner, 2003; 2006). Some theories posit that age-of-acquisition (AoA) affects the semantic representation of words (e.g., Steyvers & Tenenbaum, 2005), while others suggest that AoA should have an influence at multiple levels in the mental lexicon (e.g. Ellis & Lambon Ralph, 2000). In past studies, early and late AoA words have differed from each other in orthography, phonology, and meaning, making it difficult to localize the influence of AoA. Two experiments are reported which examined the locus of AoA effects in reading. Both experiments used balanced ambiguous words which have two equally-frequent meanings acquired at different times (e.g. pot, tick). In Experiment 1, sentence context supporting either the early- or late-acquired meaning was presented prior to the ambiguous word; in Experiment 2, disambiguating context was presented after the ambiguous word. When prior context disambiguated the ambiguous word, meaning AoA influenced the processing of the target word. However, when disambiguating sentence context followed the ambiguous word, meaning frequency was the more important variable and no effect of meaning AoA was observed. These results, when combined with the past results of Juhasz and Rayner (2003; 2006) suggest that AoA influences access to multiple levels of representation in the mental lexicon. The results also have implications for theories of lexical ambiguity resolution, as they suggest that variables other than meaning frequency and context can influence resolution of noun-noun ambiguities.
Johanna K. Kaakinen; Jukka Hyönä; Minna Viljanen
In: Quarterly Journal of Experimental Psychology, vol. 64, no. 7, pp. 1372–1387, 2011.
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.
David J. Kelly; Rachael R. Jack; Sébastien Miellet; Emanuele De Luca; Kay Foreman; Roberto Caldara
In: Frontiers in Psychology, vol. 2, pp. 95, 2011.
Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and facial expressions of emotion categorization) differs across cultural groups. Currently, many of the differences reported in previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed "Eastern" eye movement strategies, while approximately 25% of participants displayed "Western" strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that "culture" alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required.
David J. Kelly; Shaoying Liu; Helen Rodger; Sébastien Miellet; Liezhong Ge; Roberto Caldara
Developing cultural differences in face processing Journal Article
In: Developmental Science, vol. 14, no. 5, pp. 1176–1184, 2011.
Perception and eye movements are affected by culture. Adults from Eastern societies (e.g. China) display a disposition to process information holistically, whereas individuals from Western societies (e.g. Britain) process information analytically. Recently, this pattern of cultural differences has been extended to face processing. Adults from Eastern cultures fixate centrally towards the nose when learning and recognizing faces, whereas adults from Western societies spread fixations across the eye and mouth regions. Although light has been shed on how adults can fixate different areas yet achieve comparable recognition accuracy, the reason why such divergent strategies exist is less certain. Although some argue that culture shapes strategies across development, little direct evidence exists to support this claim. Additionally, it has long been claimed that face recognition in early childhood is largely reliant upon external rather than internal face features, yet recent studies have challenged this theory. To address these issues, we tested children aged 7-12 years of age from the UK and China with an old/new face recognition paradigm while simultaneously recording their eye movements. Both populations displayed patterns of fixations that were consistent with adults from their respective cultural groups, which 'strengthened' across development as qualified by a pattern classifier analysis. Altogether, these observations suggest that cultural forces may indeed be responsible for shaping eye movements from early childhood. Furthermore, fixations made by both cultural groups almost exclusively landed on internal face regions, suggesting that these features, and not external features, are universally used to achieve face recognition in childhood.
Daniel P. Kennedy; Ralph Adolphs
In: Neuropsychologia, vol. 49, no. 4, pp. 589–595, 2011.
SM is a patient with complete bilateral amygdala lesions who fails to fixate the eyes in faces and is consequently impaired in recognizing fear (Adolphs et al., 2005). Here we first replicated earlier findings in SM of reduced gaze to the eyes when seen in whole faces. Examination of the time course of fixations revealed that SM's reduced eye contact is particularly pronounced in the first fixation to the face, and less abnormal in subsequent fixations. In a second set of experiments, we used a gaze-contingent presentation of faces with real-time eye tracking, wherein only a small region of the face is made visible at the center of gaze. In essence, viewers explore the face by moving a small searchlight over the face with their gaze. Under such viewing conditions, SM's fixations to the eye region of faces became entirely normalized. We suggest that this effect arises from the absence of bottom-up effects due to the facial features, allowing gaze location to be driven entirely by top-down control. Together with SM's failure to fixate the eyes in whole faces primarily at the very first saccade, the findings suggest that the saliency of the eyes normally attracts our gaze in an amygdala-dependent manner. Impaired eye gaze is also a prominent feature of several psychiatric illnesses in which the amygdala has been hypothesized to be dysfunctional, and our findings and experimental manipulation may hold promise for interventions in such populations, including autism and fragile X syndrome.
Aarlenne Zein Khan; Joo-Hyun Song; Robert M. McPeek
In: Journal of Vision, vol. 11, no. 1, pp. 1–14, 2011.
Prior to the onset of a saccade or a reach, attention is directed to the goal of the upcoming movement. However, it remains unknown whether attentional resources are shared across effectors for simultaneous eye and hand movements. Using a 4-AFC shape discrimination task, we investigated attentional allocation during the planning of a saccade alone, reach alone, or combined saccade and reach to one of five peripheral locations. Target discrimination was better when the probe appeared at the goal of the impending movement than when it appeared elsewhere. However, discrimination performance at the movement goal was not better for combined eye-hand movements compared to either effector alone, suggesting a shared limited attentional resource rather than separate pools of effector-specific attention. To test which effector dominates in guiding attention, we then separated eye and hand movement goals in two conditions: (1) cued reach/fixed saccade--subjects made saccades to the same peripheral location throughout the block, while the reach goal was cued and (2) cued saccade/fixed reach--subjects made reaches to the same location, while the saccade goal was cued. For both conditions, discrimination performance was consistently better at the eye goal than the hand goal. This indicates that shared attentional resources are guided predominantly by the eye during the planning of eye and hand movements.
Manizeh Khan; Meredyth Daneman
In: Journal of Psycholinguistic Research, vol. 40, no. 5, pp. 351–366, 2011.
This study investigated whether readers are more likely to assign a male referent to man-suffix terms (e.g. chairman) than to gender-neutral alternatives (e.g., chairperson) during reading, and whether this bias differs as a function of age. Younger and older adults' eye movements were monitored while reading passages containing phrases such as "The chairman/chairperson familiarized herself with …". On-line eye fixation data provided strong evidence that man-suffix words were more likely to evoke the expectation of a male referent in both age groups. Younger readers demonstrated inflated processing times when first encountering herself after chairman relative to chairperson, and they tended to make more regressive fixations to chairman. Older readers did not show the effect when initially encountering herself, but they spent disproportionately longer looking back to chairman and herself. The study provides empirical support for copy-editing policies that mandate the use of explicitly gender-neutral suffix terms in place of man-suffix terms.
M. M. Kibbe; Eileen Kowler
In: Journal of Vision, vol. 11, no. 3, pp. 1–21, 2011.
Limitations of working memory force a reliance on motor exploration to retrieve forgotten features of the visual array. A category search task was devised to study tradeoffs between exploration and memory in the face of significant cognitive and motor demands. The task required search through arrays of hidden, multi-featured objects to find three belonging to the same category. Location contents were revealed briefly by either (1) a mouse click, or (2) a saccadic eye movement with or without delays between saccade offset and object appearance. As the complexity of the category rule increased, search favored exploration, with more visits and revisits needed to find the set. As motor costs increased (mouse-click search or oculomotor search with delays) search favored reliance on memory. Application of the model of J. Epelboim and P. Suppes (2001) to the revisits produced an estimate of immediate memory span (M) of about 4-6 objects. Variation in estimates of M across category rules suggested that search was also driven by strategies of transforming the category rule into concrete perceptual hypotheses. The results show that tradeoffs between memory and exploration in a cognitively demanding task are determined by continual and effective monitoring of perceptual load, cognitive demand, decision strategies and motor effort.
Tim C. Kietzmann; Stephan Geuter; Peter König
In: PLoS ONE, vol. 6, no. 7, pp. e22614, 2011.
Next-generation sequencing (NGS) technologies provide a revolutionary tool with numerous applications in transcriptome studies. The power of NGS technologies to address diverse biological questions has already been proved in many studies. One of the most important applications of NGS is the sequencing and characterization of transcriptome of a non-model species using RNA-seq. This application of NGS technologies can be used to dissect the complete expressed gene content of an organism. In this article, I illustrate the use of NGS technologies in transcriptome characterization of a non-model species taking example of chickpea from our recent studies.
Michael Zehetleitner; Michael Hegenloh; Hermann J. Müller
In: Journal of Vision, vol. 11, no. 1, pp. 24–24, 2011.
Visual salience maps are assumed to mediate target selection decisions in a motor-unspecific manner; accordingly, modulations of salience influence yes/no target detection or left/right localization responses in manual key-press search tasks, as well as ocular or skeletal movements to the target. Although widely accepted, this core assumption is based on little psychophysical evidence. At least four modulations of salience are known to influence the speed of visual search for feature singletons: (i) feature contrast, (ii) cross-trial dimension sequence, (iii) semantic pre-cueing of the target dimension, and (iv) dimensional target redundancy. If salience also guides manual pointing movements, their initiation latencies (and durations) should be affected by the same four manipulations of salience. Four experiments, each examining one of these manipulations, revealed this to be the case. Thus, these effects are seen independently of the motor response required to signal the perceptual decision (e.g., directed manual pointing as well as simple yes/no detection responses). This supports the notion of a motor-unspecific salience map, which guides covert attention as well as overt eye and hand movements.
Yang Zhang; Ming Zhang
In: Vision Research, vol. 51, no. 1, pp. 147–153, 2011.
Although spatial working memory has been shown to play a central role in manual IOR (Castel, Pratt, & Craik, 2003), it is so far unclear whether spatial working memory is involved in saccadic IOR. The present study sought to address this question by using a dual task paradigm, in which the participants performed an IOR task while keeping a set of locations in spatial working memory. While manual IOR was eliminated, saccadic IOR was not affected by spatial working memory load. These findings suggest that saccadic IOR does not rely on spatial working memory to process inhibitory tagging.
Qing Yang; Zoï Kapoula
In: PLoS ONE, vol. 6, no. 5, pp. e20322, 2011.
BACKGROUND: The initiation of memory guided saccades is known to be controlled by the frontal eye field (FEF). Recent physiological studies showed the existence of an area close to FEF that also controls vergence initiation and execution. This study explores the effect of transcranial magnetic stimulation (TMS) over FEF on the control of memory-guided saccade-vergence eye movements. METHODOLOGY/PRINCIPAL FINDINGS: Subjects had to make an eye movement in the dark towards a target flashed 1 sec earlier (memory delay); the location of the target relative to fixation point was such as to require either a vergence along the median plane, or a saccade, or a saccade with vergence; trials were interleaved. Single pulse TMS was applied on the left or right FEF; it was delivered at 100 ms after the end of memory delay, i.e. extinction of fixation LED that was the "go" signal. Twelve healthy subjects participated in the study. TMS of left or right FEF prolonged the latency of all types of eye movements; the increase varied from 21 to 56 ms and was particularly strong for the divergence movements. This indicates that FEF is involved in the initiation of all types of memory guided movement in the 3D space. TMS of the FEF also altered the accuracy but only for leftward saccades combined with either convergence or divergence; intrasaccadic vergence also increased after TMS of the FEF. CONCLUSIONS/SIGNIFICANCE: The results suggest anisotropy in the quality of space memory and are discussed in the context of other known perceptual motor anisotropies.
Shun-Nan Yang; Yu-Chi Tai; Hannu Laukkanen; James E. Sheedy
In: Vision Research, vol. 51, no. 21-22, pp. 2273–2281, 2011.
Transverse chromatic aberration (TCA) smears the retinal image of peripheral stimuli. We previously found that TCA significantly reduces the ability to recognize letters presented in the near fovea by degrading image quality and exacerbating the crowding effect from adjacent letters. The present study examined whether TCA has a significant effect on near foveal and peripheral word identification, and whether within-word orthographic facilitation interacts with the TCA effect to affect word identification. Subjects were briefly presented a 6- to 7-letter word of high or low frequency in each trial. Target words were generated with weak or strong horizontal color fringe to attenuate the TCA in the right periphery and exacerbate it in the left. The center of the target word was 1°, 2°, 4°, and 6° to the left or right of a fixation point. Subjects' eye position was monitored with an eye-tracker to ensure proper fixation before target presentation. They were required to report the identity of the target word as quickly and accurately as possible. Results show a significant effect of color fringe on the latency and accuracy of word recognition, indicating an existing TCA effect. The observed TCA effect was more salient in the right periphery and was more strongly modulated by word frequency there. Individuals' subjective preference for color-fringed text was correlated with the TCA effect in the near periphery. Our results suggest that TCA significantly affects peripheral word identification, especially when the word is located in the right periphery. Contextual facilitation such as word frequency interacts with TCA to influence the accuracy and latency of word recognition.
Victoria Yanulevskaya; Jan Bernard C. Marsman; Frans W. Cornelissen; Jan Mark Geusebroek
An image statistics-based model for fixation prediction Journal Article
In: Cognitive Computation, vol. 3, no. 1, pp. 94–104, 2011.
The problem of predicting where people look at, or equivalently salient region detection, has been related to the statistics of several types of low-level image features. Among these features, contrast and edge information seem to have the highest correlation with the fixation locations. The contrast distribution of natural images can be adequately characterized using a two-parameter Weibull distribution. This distribution catches the structure of local contrast and edge frequency in a highly meaningful way. We exploit these observations and investigate whether the parameters of the Weibull distribution constitute a simple model for predicting where people fixate when viewing natural images. Using a set of images with associated eye movements, we assess the joint distribution of the Weibull parameters at fixated and non-fixated regions. Then, we build a simple classifier based on the log-likelihood ratio between these two joint distributions. Our results show that as few as two values per image region are already enough to achieve a performance comparable with the state-of-the-art in bottom-up saliency prediction.
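The classification scheme the abstract describes — characterizing each image region by the two parameters of a Weibull fit to its local contrast, then scoring regions by the log-likelihood ratio between the parameter distributions at fixated versus non-fixated locations — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the gradient-magnitude contrast measure, the Gaussian model of the joint parameter distribution in log space, and all function and class names here are assumptions.

```python
import numpy as np
from scipy import stats

def weibull_params(patch):
    """Fit a two-parameter Weibull (shape, scale; location fixed at 0)
    to the local contrast values of an image patch, here approximated
    by gradient magnitudes. Illustrative feature extraction only."""
    gy, gx = np.gradient(patch.astype(float))
    contrast = np.hypot(gx, gy).ravel()
    contrast = contrast[contrast > 1e-6]  # Weibull support is positive
    shape, _, scale = stats.weibull_min.fit(contrast, floc=0)
    return shape, scale

class WeibullSaliencyClassifier:
    """Scores a region by the log-likelihood ratio of its (shape, scale)
    pair under the fixated vs. non-fixated parameter distributions.
    Each joint distribution is modelled as a 2-D Gaussian in log-parameter
    space (an assumed simplification of the paper's density estimate)."""

    def fit(self, fixated, nonfixated):
        # fixated / nonfixated: (n, 2) arrays of (shape, scale) pairs
        lf, ln = np.log(fixated), np.log(nonfixated)
        self.fix = stats.multivariate_normal(lf.mean(axis=0), np.cov(lf.T))
        self.non = stats.multivariate_normal(ln.mean(axis=0), np.cov(ln.T))
        return self

    def llr(self, params):
        # Positive log-likelihood ratio -> region predicted as fixated
        p = np.log(np.atleast_2d(params))
        return self.fix.logpdf(p) - self.non.logpdf(p)
```

On synthetic, well-separated parameter clusters the classifier assigns positive scores to fixated-like regions and negative scores to non-fixated-like ones; on real data the two joint distributions overlap, and the ratio provides a graded saliency score per region.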
Bo Yao; Christoph Scheepers
In: Cognition, vol. 121, no. 3, pp. 447–453, 2011.
In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former.
Jian-Gao Yao; Xin Gao; Hong-Mei Yan; Chao-Yi Li
Field of attention for instantaneous object recognition Journal Article
In: PLoS ONE, vol. 6, no. 1, pp. e16343, 2011.
Instantaneous object discrimination and categorization are fundamental cognitive capacities performed with the guidance of visual attention. Visual attention enables selection of a salient object within a limited area of the visual field, which we refer to as the "field of attention" (FA). Though there is some evidence concerning the spatial extent of object recognition, the following questions remain unanswered: (a) how large is the FA for rapid object categorization, (b) how is the accuracy of attention distributed over the FA, and (c) how fast can complex objects be categorized when presented against backgrounds formed by natural scenes.
Eiling Yee; Stacy Huffstetler; Sharon L. Thompson-Schill
In: Journal of Experimental Psychology: General, vol. 140, no. 3, pp. 348–363, 2011.
Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function follows form. We used eye movements to explore whether activating one object's concept leads to the activation of others that share perceptual (shape) or abstract (function) features. Participants viewed 4-picture displays and clicked on the picture corresponding to a heard word. In critical trials, the conceptual representation of 1 of the objects in the display was similar in shape or function (i.e., its purpose) to the heard word. Importantly, this similarity was not apparent in the visual depictions (e.g., for the target Frisbee, the shape-related object was a triangular slice of pizza, a shape that a Frisbee cannot take); preferential fixations on the related object were therefore attributable to overlap of the conceptual representations on the relevant features. We observed relatedness effects for both shape and function, but shape effects occurred earlier than function effects. We discuss the implications of these findings for current accounts of the representation of semantic memory.
Li-Hao Yeh; Ana I. Schwartz; Aaron L. Baule
In: Reading Psychology, vol. 32, no. 6, pp. 495–519, 2011.
Previous studies have demonstrated the efficacy of the Text Structure Strategy for improving text recall. The strategy emphasizes the identification of text structure for encoding and recalling information. Traditionally, the efficacy of this strategy has been measured through free recall. The present study examined whether recall and eye-movement patterns of second language English readers would benefit from training on the strategy. Participants' free recall and eye-movement patterns were measured before and after training. There was a significant increase in recall at posttest and a change in eye-movement patterns, reflecting additional processing time of phrases and words signaling the text structure.
Serap Yiǧit-Elliott; John Palmer; Cathleen M. Moore
In: Psychological Science, vol. 22, no. 6, pp. 771–780, 2011.
Sensory information must be processed selectively in order to represent the world and guide behavior. How does such selection occur? Here we consider two alternative classes of selection mechanisms: In blocking, unattended stimuli are blocked entirely from access to downstream processes, and in attenuation, unattended stimuli are reduced in strength but if strong enough can still access downstream processes. Existing evidence as to whether blocking or attenuation is a more accurate model of human performance is mixed. Capitalizing on a general distinction between blocking and attenuation—blocking cannot be overcome by strong stimuli, whereas attenuation can—we measured how attention interacted with the strength of stimuli in two spatial selection paradigms, spatial filtering and spatial monitoring. The evidence was consistent with blocking for the filtering paradigm and with attenuation for the monitoring paradigm. This approach provides a general measure of the fate of unattended stimuli.
Shlomit Yuval-Greenberg; Leon Y. Deouell
In: Brain Topography, vol. 24, no. 1, pp. 30–39, 2011.
We previously showed that the transient broadband induced gamma-band response in EEG (iGBRtb) appearing around 200-300 ms following a visual stimulus reflects the contraction of extra-ocular muscles involved in the execution of saccades, rather than neural oscillations. Several previous studies reported induced gamma-band responses also following auditory stimulation. It is still an open question whether, similarly to visual paradigms, such auditory paradigms are also sensitive to the saccadic confound. In the current study we address this question using simultaneous eye-tracking and EEG recordings during an auditory oddball paradigm. Subjects were instructed to respond to a rare target defined by sound source location, while fixating on a central screen. Results show that, similar to what was found in visual paradigms, saccadic rate displayed typical temporal dynamics including a post-stimulus decrease followed by an increase. This increase was more moderate, had a longer latency, and was less consistent across subjects than was found in the visual case. Crucially, the temporal dynamics of the induced gamma response were similar to those of saccadic-rate modulation. This suggests that the auditory induced gamma-band responses recorded on the scalp may also be affected by saccadic muscle activity.
C. Yu-Wai-Man; K. Petheram; A. W. Davidson; T. Williams; P. G. Griffiths
In: Neuro-Ophthalmology, vol. 35, no. 1, pp. 38–39, 2011.
A case is described of motor neurone disease presenting with an ocular motor disorder characterised by saccadic intrusions, impaired horizontal and vertical saccades, and apraxia of eyelid opening. The occurrence of eye movement abnormalities in motor neurone disease is discussed.
Ben D. B. Willmore; James A. Mazer; Jack L. Gallant
Sparse coding in striate and extrastriate visual cortex Journal Article
In: Journal of Neurophysiology, vol. 105, no. 6, pp. 2907–2919, 2011.
Theoretical studies of mammalian cortex argue that efficient neural codes should be sparse. However, theoretical and experimental studies have used different definitions of the term "sparse" leading to three assumptions about the nature of sparse codes. First, codes that have high lifetime sparseness require few action potentials. Second, lifetime-sparse codes are also population-sparse. Third, neural codes are optimized to maximize lifetime sparseness. Here, we examine these assumptions in detail and test their validity in primate visual cortex. We show that lifetime and population sparseness are not necessarily correlated and that a code may have high lifetime sparseness regardless of how many action potentials it uses. We measure lifetime sparseness during presentation of natural images in three areas of macaque visual cortex, V1, V2, and V4. We find that lifetime sparseness does not increase across the visual hierarchy. This suggests that the neural code is not simply optimized to maximize lifetime sparseness. We also find that firing rates during a challenging visual task are higher than theoretical values based on metabolic limits and that responses in V1, V2, and V4 are well-described by exponential distributions. These findings are consistent with the hypothesis that neurons are optimized to maximize information transmission subject to metabolic constraints on mean firing rate.
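The distinction the abstract draws between lifetime sparseness (one neuron across stimuli) and population sparseness (one stimulus across neurons) is commonly quantified with the Treves-Rolls measure, normalized to [0, 1] as in Vinje and Gallant (2000). The sketch below assumes that convention and a neurons × stimuli firing-rate matrix; it is illustrative, not the paper's analysis code.

```python
import numpy as np

def sparseness(r):
    """Normalized Treves-Rolls sparseness of a nonnegative response vector:
    0 for a uniform response, 1 for a response concentrated on one entry."""
    r = np.asarray(r, dtype=float)
    n = r.size
    a = r.mean() ** 2 / np.mean(r ** 2)  # "activity ratio"
    return (1.0 - a) / (1.0 - 1.0 / n)

# responses: rows = neurons, columns = stimuli (e.g., natural images)
def lifetime_sparseness(responses):
    return np.array([sparseness(row) for row in responses])

def population_sparseness(responses):
    return np.array([sparseness(col) for col in responses.T])
```

Because the same formula is applied along different axes of the matrix, a code can score high on one measure and low on the other, which is why the two kinds of sparseness need not be correlated, as the abstract argues.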
Sara A. Winges; John F. Soechting
In: Experimental Brain Research, vol. 211, no. 1, pp. 27–36, 2011.
It is well known that prediction is used to overcome processing delays within the motor system and ocular control is no exception. Motion extrapolation is one mechanism that can be used to overcome the visual processing delay. Expectations based on previous experience or cognitive cues are also capable of overcoming this delay. The present experiment was designed to examine how smooth pursuit is altered by cognitive information about the time and/or direction of an upcoming change in target direction. Subjects visually tracked a cursor as it moved at a constant velocity on a computer screen. The target initially moved from left to right and then abruptly reversed horizontal direction and traveled along one of seven possible oblique paths. In half of the trials, a cue was present throughout the trial to signal the position (as well as the time), and/or the direction of the upcoming change. Whenever a position cue (which will be referred to as a timing cue throughout the paper) was present, there were clear anticipatory adjustments to the horizontal velocity component of smooth pursuit. In the presence of a timing cue, a directional cue also led to anticipatory adjustments in the vertical velocity, and hence the direction of smooth pursuit. However, without the timing cue, a directional cue alone produced no anticipation. Thus, in this task, a cognitive spatial cue about the new direction could not be used unless it was made explicit in the time domain.
In: Applied Psycholinguistics, vol. 32, no. 4, pp. 739–759, 2011.
Four eye movement experiments investigated whether readers use parafoveal input to gain information about the phonological or orthographic forms of consonants, vowels, and tones in word recognition when reading Thai silently. Target words were presented in sentences preceded by parafoveal previews in which consonant, vowel, or tone information was manipulated. Previews of homophonous consonants (Experiment 1) and concordant vowels (Experiment 2) did not substantially facilitate processing of the target word, whereas the identical previews did. Hence, orthography appears to play the prominent role in early word recognition for consonants and vowels. Incorrect tone marker previews (Experiment 3) substantially retarded the subsequent processing of the target word, indicating that lexical tone plays an important role in early word recognition. Vowels in VOP (Experiment 4) did not facilitate processing, which points to vowel position being a significant factor. Primarily, orthographic codes of consonants and vowels (HOP) in conjunction with tone information are assembled from parafoveal input and used for early lexical access.
Andi K. Winterboer; Martin I. Tietze; Maria K. Wolters; Johanna D. Moore
In: Computer Speech and Language, vol. 25, no. 2, pp. 175–191, 2011.
A common task for spoken dialog systems (SDS) is to help users select a suitable option (e.g., a flight, hotel, or restaurant) from the set of options available. As the number of options increases, the system must have strategies for generating summaries that enable the user to browse the option space efficiently and successfully. In the user-model based summarize and refine approach (UMSR, Demberg and Moore, 2006), options are clustered to maximize utility with respect to a user model, and linguistic devices such as discourse cues and adverbials are used to highlight the trade-offs among the presented items. In a Wizard-of-Oz experiment, we show that the UMSR approach leads to improvements in task success, efficiency, and user satisfaction compared to an approach that clusters the available options to maximize coverage of the domain (Polifroni et al., 2003). In both a laboratory experiment and a web-based experimental paradigm employing the Amazon Mechanical Turk platform, we show that the discourse cues in UMSR summaries help users compare and choose between options, even though they do not improve verbatim recall. This effect was observed for both written and spoken stimuli.
C. Witzel; Karl R. Gegenfurtner
Is there a lateralized category effect for color? Journal Article
In: Journal of Vision, vol. 11, no. 12, pp. 16–16, 2011.
According to the lateralized category effect for color, the influence of color category borders on color perception in fast reaction time tasks is significantly stronger in the right visual field than in the left. This finding has directly related behavioral category effects to the hemispheric lateralization of language. Multiple succeeding articles have built on these findings. We ran ten different versions of the two original experiments with a total of 230 naive observers. We carefully controlled the rendering of the stimulus colors and determined the genuine color categories with an appropriate naming method. Congruent with the classical pattern of a category effect, reaction times in the visual search task were lower when the two colors to be discriminated belonged to different color categories than when they belonged to the same category. However, these effects were not lateralized: They appeared to the same extent in both visual fields.
Lynsey Wolter; Kristen Skovbroten Gorman; Michael K. Tanenhaus
In: Journal of Memory and Language, vol. 65, no. 3, pp. 299–317, 2011.
Listeners expect that a definite noun phrase with a pre-nominal scalar adjective (e.g., the big ...) will refer to an entity that is part of a set of objects contrasting on the scalar dimension, e.g., size (Sedivy, Tanenhaus, Chambers, & Carlson, 1999). Two visual world experiments demonstrate that uttering a referring expression with a scalar adjective makes all members of the relevant contrast set more salient in the discourse model, facilitating subsequent reference to other members of that contrast set. Moreover, this discourse effect is caused primarily by linguistic mention of a scalar adjective and not by the listener's prior visual or perceptual experience. These experiments demonstrate that language processing is sensitive to which information was introduced by linguistic mention, and that the visual world paradigm can be used to tease apart the separate contributions of visual and linguistic information to reference resolution.
Jason H. Wong; Matthew S. Peterson
In: Attention, Perception, and Psychophysics, vol. 73, no. 6, pp. 1768–1779, 2011.
Recent evidence has been found for a source of task-irrelevant oculomotor capture (defined as occurring when a salient event draws the eyes away from a primary task) that originates from working memory. An object memorized for a nonsearch task can capture the eyes during search. Here, an experiment was conducted that generated interactions between the presence of a memorized object (a colored disk) and the abrupt onset of a new object during visual search. The goal was to compare memory-driven oculomotor capture to oculomotor capture caused by an abrupt onset. This has implications for saccade programming theories, which have little to say about saccades that are influenced by object working memory. Results showed that memorized objects capture the eyes at nearly the same rate as abrupt onsets. When the abrupt onset and a memorized color coincide in the same object, this combination leads to even greater oculomotor capture. Finally, latencies support the competitive integration model: Shorter saccade latencies were found when the memorized color combined with the onset captured the eyes, as compared to either color or onset only. Longer latencies were also found when the color and onset occurred in the same display but were spatially separated.
Z. V. J. Woodhead; S. L. E. Brownsett; N. S. Dhanjal; C. Beckmann; Richard J. S. Wise
The visual word form system in context Journal Article
In: Journal of Neuroscience, vol. 31, no. 1, pp. 193–199, 2011.
According to the “modular” hypothesis, reading is a serial feedforward process, with part of left ventral occipitotemporal cortex the earliest component tuned to familiar orthographic stimuli. Beyond this region, the model predicts no response to arrays of false font in reading-related neural pathways. An alternative “connectionist” hypothesis proposes that reading depends on interactions between feedforward projections from visual cortex and feedback projections from phonological and semantic systems, with no visual component exclusive to orthographic stimuli. This is compatible with automatic processing of false font throughout visual and heteromodal sensory pathways that support reading, in which responses to words may be greater than, but not exclusive of, responses to false font. This functional imaging study investigated these alternative hypotheses by using narrative texts and equivalent arrays of false font and varying the hemifield of presentation using rapid serial visual presentation. The “null” baseline comprised a decision on visually presented numbers. Preferential activity for narratives relative to false font, insensitive to hemifield of presentation, was distributed along the ventral left temporal lobe and along the extent of both superior temporal sulci. Throughout this system, activity during the false font conditions was significantly greater than during the number task, with activity specific to the number task confined to the intraparietal sulci. Therefore, both words and false font are extensively processed along the same temporal neocortical pathways, separate from the more dorsal pathways that process numbers. These results are incompatible with a serial, feedforward model of reading.
Jessica M. Wright; Adam P. Morris; Bart Krekelberg
Weighted integration of visual position information Journal Article
In: Journal of Vision, vol. 11, no. 14, pp. 11–11, 2011.
The ability to localize visual objects is a fundamental component of human behavior and requires the integration of position information from object components. The retinal eccentricity of a stimulus and the locus of spatial attention can affect object localization, but it is unclear whether these factors alter the global localization of the object, the localization of object components, or both. We used psychophysical methods in humans to quantify behavioral responses in a centroid estimation task. Subjects located the centroid of briefly presented random dot patterns (RDPs). A peripheral cue was used to bias attention toward one side of the display. We found that although subjects were able to localize centroid positions reliably, they typically had a bias toward the fovea and a shift toward the locus of attention. We compared quantitative models that explain these effects either as biased global localization of the RDPs or as anisotropic integration of weighted dot component positions. A model that allowed retinal eccentricity and spatial attention to alter the weights assigned to individual dot positions best explained subjects' performance. These results show that global position perception depends on both the retinal eccentricity of stimulus components and their positions relative to the current locus of attention.
Minnan Xu-Wilson; Jing Tian; Reza Shadmehr; David S. Zee
In: Journal of Neuroscience, vol. 31, no. 32, pp. 11537–11546, 2011.
When we applied a single pulse of transcranial magnetic stimulation (TMS) to any part of the human head during a saccadic eye movement, the ongoing eye velocity was reduced as early as 45 ms after the TMS, and lasted ∼32 ms. The perturbation to the saccade trajectory was not due to a mechanical effect of the lid on the eye (e.g., from blinks). When the saccade involved coordinated movements of both the eyes and the lids, e.g., in vertical saccades, TMS produced a synchronized inhibition of the motor commands to both eye and lid muscles. The TMS-induced perturbation of the eye trajectory did not show habituation with repetition, and was present in both pro-saccades and anti-saccades. Despite the perturbation, the eye trajectory was corrected within the same saccade with compensatory motor commands that guided the eyes to the target. This within-saccade correction did not rely on visual input, suggesting that the brain monitored the oculomotor commands as the saccade unfolded, maintained a real-time estimate of the position of the eyes, and corrected for the perturbation. TMS disrupted saccades regardless of the location of the coil on the head, suggesting that the coil discharge engages a nonhabituating startle-like reflex system. This system affects ongoing motor commands upstream of the oculomotor neurons, possibly at the level of the superior colliculus or omnipause neurons. Therefore, a TMS pulse centrally perturbs saccadic motor commands, which are monitored possibly via efference copy and are corrected via internal feedback.
Huihui Zhou; Robert Desimone
In: Neuron, vol. 70, no. 6, pp. 1205–1217, 2011.
When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target.
Eckart Zimmermann; David C. Burr; M. Concetta Morrone
In: Current Biology, vol. 21, no. 16, pp. 1380–1384, 2011.
Saccadic adaptation is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals.
Marc Zirnsak; R. G. K. Gerhards; Roozbeh Kiani; Markus Lappe; Fred H. Hamker
In: Journal of Neuroscience, vol. 31, no. 49, pp. 17887–17891, 2011.
As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007).
Hamed Zivari Adab; Rufin Vogels
In: Current Biology, vol. 21, no. 19, pp. 1661–1666, 2011.
Practice improves the performance in visual tasks, but mechanisms underlying this adult brain plasticity are unclear. Single-cell studies reported no, weak, or moderate [3, 4] perceptual learning-related changes in macaque visual areas V1 and V4, whereas none were found in the middle temporal area (MT). These conflicting results and modeling of human (e.g., [6, 7]) and monkey data suggested that changes in the readout of visual cortical signals underlie perceptual learning, rather than changes in these signals. In the V4 learning studies, monkeys discriminated small differences in orientation, whereas in the MT study, the animals discriminated opponent motion directions. Analogous to the latter study, we trained monkeys to discriminate static orthogonal orientations masked by noise. V4 neurons showed robust increases in their capacity to discriminate the trained orientations during the course of the training. This effect was observed during discrimination and passive fixation but specifically for the trained orientations. The improvement in neural discrimination was due to decreased response variability and an increase of the difference between the mean responses for the two trained orientations. These findings demonstrate that perceptual learning in a coarse discrimination task indeed can change the response properties of a cortical sensory area.
Agnieszka Szarkowska; Izabela Krejtz; Zuzanna Klyszejko; Anna Wieczorek
In: American Annals of the Deaf, vol. 156, no. 4, pp. 363–378, 2011.
One of the most frequently recurring themes in captioning is whether captions should be edited or verbatim. The authors report on the results of an eye-tracking study of captioning for deaf and hard of hearing viewers reading different types of captions. By examining eye movement patterns when these viewers were watching clips with verbatim, standard, and edited captions, the authors tested whether the three different caption styles were read differently by the study participants (N = 40): 9 deaf, 21 hard of hearing, and 10 hearing individuals. Interesting interaction effects for the proportion of dwell time and fixation count were observed. In terms of group differences, deaf participants differed from the other two groups only in the case of verbatim captions. The results are discussed with reference to classical reading studies, audiovisual translation, and a new concept of viewing speed.
Martin Szinte; Patrick Cavanagh
In: Journal of Vision, vol. 11, no. 2, pp. 1–20, 2011.
While participants made 10° horizontal saccades, two dots were presented, one before and one after the saccade. Each dot was presented for 400 ms, the first turned off about 100 ms before, while the second turned on about 100 ms after the saccade. The two dots were separated vertically by 3°, but because of the intervening eye movement, they were also separated horizontally on the retina by an additional 10°. Participants nevertheless reported that the perceived motion was much more vertical than horizontal, suggesting that the trans-saccadic displacement was corrected, at least to some extent, for the retinal displacement caused by the eye movement. The corrections were not exact, however, showing significant biases that corresponded to about 5% of the saccade amplitude. The perceived motion between the probes was tested at 9 different locations and the biases, the deviations from accurate correction, varied significantly across locations. Two control experiments for judgments of position and of verticality of motion without eye movement confirmed that these biases are specific to the correction for the saccade. The local variations in the correction for saccades are consistent with physiological "remapping" proposals for space constancy that individually correct only a few attended targets but are not consistent with global mechanisms that predict the same correction at all locations.
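The geometry reported in this abstract can be checked with a few lines of arithmetic. The sketch below uses only values stated in the text (a 10° horizontal saccade, a 3° vertical dot separation, and residual biases of about 5% of saccade amplitude); the variable names are illustrative.

```python
import math

SACCADE_AMP = 10.0  # deg, horizontal saccade amplitude
DOT_SEP = 3.0       # deg, vertical separation of the two dots on the screen

# On the retina the saccade adds a 10 deg horizontal offset between the dots,
# so uncorrected (purely retinal) motion would be far from vertical:
retinal_angle = math.degrees(math.atan2(DOT_SEP, SACCADE_AMP))

# A full correction predicts vertical apparent motion (90 deg). The reported
# residual bias of ~5% of saccade amplitude leaves an uncorrected horizontal
# component of ~0.5 deg, tilting the percept slightly away from vertical:
residual_horizontal = 0.05 * SACCADE_AMP
biased_angle = math.degrees(math.atan2(DOT_SEP, residual_horizontal))
```

The uncorrected motion would be tilted roughly 17° above horizontal, while the observed percept sits near 80° (i.e., close to vertical), which is the sense in which the correction is substantial but not exact.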
Bernard Marius 't Hart; Tilman Gerrit Jakob Abresch; Wolfgang Einhäuser
In: PLoS ONE, vol. 6, no. 10, pp. e25373, 2011.
The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or a toast's roast patterns. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. Arguably the most widespread algorithm for such applications (the "Viola-Jones" algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections ("real faces"), false positives ("illusory faces") and correctly rejected locations ("non faces"). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance, but does not change the pattern of results. When replacing the eye movement by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm, when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features as those encoded in the early visual system.
Zheng Tai; Richard W. Hertle; Richard A. Bilonick; Dongsheng Yang
A new algorithm for automated nystagmus acuity function analysis Journal Article
In: British Journal of Ophthalmology, vol. 95, no. 6, pp. 832–836, 2011.
Aims: We developed a new data analysis algorithm called the automated nystagmus acuity function (ANAF) to automatically assess nystagmus acuity function. We compared results from the ANAF with those of the well-known expanded nystagmus acuity function (NAFX). Methods: Using the ANAF and NAFX, we analysed 60 segments of nystagmus data collected with a video-based eye tracking system (EyeLink 1000) from 30 patients with infantile nystagmus or fusion maldevelopment nystagmus. The ANAF algorithm used the best-foveation positions (not true foveation positions) and all data points in each nystagmus cycle to calculate a nystagmus acuity function. Results: The ANAF automatically produced a nystagmus acuity function in a few seconds because manual identification of foveation eye positions is not required. A structural equation model was used to compare the ANAF and NAFX. Both ANAF and NAFX have similar measurement imprecision and relatively little bias. The estimated bias was not statistically significant for either method or for replicates. Conclusions: We conclude that the ANAF is a valid and efficient algorithm for determining a nystagmus acuity function.
Kohske Takahashi; Haruaki Fukuda; Hanako Ikeda; Hirokazu Doi; Katsumi Watanabe; Kazuhiro Ueda; Kazuyuki Shinohara
In: Journal of Vision, vol. 11, no. 14, pp. 1–13, 2011.
We can easily recognize human movements from very limited visual information (biological motion perception). The present study investigated how the upper and lower body areas contribute to direction discrimination of a point-light (PL) walker. Observers judged the direction that the PL walker was facing. The walker performed either normal walking or hakobi, a walking style used in traditional Japanese performing arts, in which the amount of local motion of the extremities is much smaller than in normal walking. Either the upper, lower, or full body of the PL walker was presented. Discrimination performance was found to be better for the lower body than for the upper body. We also found that discrimination performance for the lower body was affected by walking style and/or the amount of local motion signals. Additional eye movement analyses indicated that the observers initially inspected the region corresponding to the upper body, and then the gaze shifted toward the lower body. This held true even when the upper body was absent. We conjectured that the upper body served to localize the PL walker and the lower body to discriminate walking direction. We concluded that the upper and lower bodies play different roles in direction discrimination of a PL walker.
Luminita Tarita-Nistor; Michael H. Brent; Martin J. Steinbach; Esther G. González
In: Investigative Ophthalmology & Visual Science, vol. 52, no. 3, pp. 1887–1893, 2011.
PURPOSE: The authors examined the fixation stability of patients with age-related macular degeneration (AMD) and large interocular acuity differences, testing them in monocular and binocular viewing conditions. The relationship between fixation stability and visual performance during monocular and binocular viewing was also studied. METHODS: Twenty patients with AMD participated. Their monocular and binocular distance acuities were measured with the ETDRS charts. Fixation stability of the better and worse eyes was recorded monocularly with the MP-1 microperimeter (Nidek Technologies Srl., Vigonza, PD, Italy) and binocularly with an EyeLink eye tracker (SR Research Ltd., Mississauga, Ontario, Canada). Additional recordings of monocular fixation were obtained with the EyeLink in viewing conditions in which one eye viewed the target while the fellow eye was covered by an infrared filter so that it could not see the target. RESULTS: Fixation stability of the better eye did not change across viewing conditions. Fixation stability of the worse eye was 84% to 100% better in the binocular condition than in monocular conditions. Fixation stability of the worse eye was significantly larger (P < 0.05) than that of the better eye when recorded monocularly with the MP-1 microperimeter. This difference was dramatically reduced in the binocular condition but remained marginally significant (95% confidence interval, -0.351 to -0.006). For the better eye, there was a moderate relationship between fixation stability and visual acuity, both monocular and binocular, in all conditions in which this eye viewed the target. CONCLUSIONS: Fixational ocular motor control and visual acuity are driven by the better-seeing eye when patients with AMD and large interocular acuity differences perform the tasks binocularly.
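Fixation stability in studies of this kind is commonly quantified as a bivariate contour ellipse area (BCEA): the area of the ellipse that contains a given proportion of gaze samples, assuming bivariate normality. The sketch below illustrates this standard metric only; the exact computation and contour proportion used in this particular study are not specified here.

```python
import math

def bcea(x, y, p=0.68):
    """Bivariate contour ellipse area: the area (e.g., deg^2) of the
    ellipse containing proportion p of fixation samples, assuming the
    horizontal (x) and vertical (y) gaze positions are bivariate normal."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    # Pearson correlation between horizontal and vertical positions
    rho = sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)
    # Chi-square quantile with 2 degrees of freedom for coverage p
    k = -2.0 * math.log(1.0 - p)
    return math.pi * k * sx * sy * math.sqrt(1.0 - rho ** 2)
```

A smaller BCEA indicates more stable fixation, which is why the worse eye's "larger" fixation stability value above reflects less stable fixation.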
Alisdair J. G. Taylor; Samuel B. Hutton
Error awareness and antisaccade performance Journal Article
In: Experimental Brain Research, vol. 213, no. 1, pp. 27–34, 2011.
In the antisaccade task, healthy participants often make errors by saccading towards the sudden-onset target, despite instructions to saccade to the mirror-image location. One interesting and relatively unexplored feature of antisaccade performance is that participants are typically unaware of a large proportion of the errors they make. Across two experiments, we explored the extent to which error awareness is altered by manipulations known to affect antisaccade error rate. In experiment 1, participants performed the antisaccade task under standard instructions, instructions to respond as quickly as possible, or instructions to delay responding. Delay instructions significantly reduced antisaccade error rate compared to the other instructions, but this reduction was driven by a decrease only in the number of errors that participants were aware of; the number of errors of which participants were unaware remained constant across instruction conditions. In experiment 2, participants performed antisaccades alone, or concurrently with one of two distractor tasks: spatial tapping and random number generation. Both dual-task conditions increased the number of errors of which participants were aware, but again, unaware error rates remained unchanged. These results are discussed in the light of recent models of antisaccade performance and the role of conscious awareness in error correction.
Masahiko Terao; Ikuya Murakami
In: Journal of Vision, vol. 11, no. 6, pp. 1–12, 2011.
Motion perception is compromised at equiluminance. Because previous investigations have been carried out primarily under fixation conditions, it remains unknown whether and how equiluminant color motion comes into play in the velocity compensation for retinal image motion due to smooth pursuit eye movements. We measured the retinal image velocity required to reach subjective stationarity for a horizontally drifting sinusoidal grating in the presence of horizontal smooth pursuit. The grating was defined by luminance or chromatic modulation. The subjective stationarity of color motion was shifted toward environmental stationarity: compared with the subjective stationarity of luminance motion, that of color motion was farther from retinal stationarity, indicating that a slowing of color motion occurred before this factor contributed to the process by which retinal motion was integrated with an internal estimate of eye velocity during pursuit. The gain in the estimate of eye velocity per se was unchanged irrespective of whether the stimulus was defined by luminance or by color. Indeed, the subjective reduction in the speed of color motion during fixation was accounted for by the same amount of deterioration in speed. From these results, we conclude that the motion deterioration at equiluminance takes place prior to the velocity comparison.
Katharine N. Thakkar; Jeffrey D. Schall; Leanne Boucher; Gordon D. Logan; Sohee Park
In: Biological Psychiatry, vol. 69, no. 1, pp. 55–62, 2011.
Background: Cognitive control deficits are pervasive in individuals with schizophrenia (SZ) and are reliable predictors of functional outcome, but the specificity of these deficits and their underlying neural mechanisms have not been fully elucidated. The objective of the present study was to determine the nature of response inhibition and response monitoring deficits in SZ and their relationship to symptoms and social and occupational functioning with a behavioral paradigm that provides a translational approach to investigating cognitive control. Methods: Seventeen patients with SZ and 16 demographically matched healthy control subjects participated in a saccadic countermanding task. Performance on this task is approximated as a race between movement generation and inhibition processes; this race model provides an estimate of the time needed to cancel a planned movement. Response monitoring can be assessed by reaction time adjustments on the basis of trial history. Results: Saccadic reaction time was normal, but patients required more time to inhibit a planned saccade. The latency of the inhibitory process was associated with the severity of negative symptoms and poorer occupational functioning. Both groups slowed down significantly after correctly cancelled and erroneously noncancelled stop signal trials, but patients slowed down more than control subjects after correctly inhibited saccades. Conclusions: These results suggest that SZ is associated with a difficulty in inhibiting planned movements and an inflated response adjustment effect after inhibiting a saccade. Furthermore, behavioral results are consistent with potential abnormalities in frontal and supplementary eye fields in patients with SZ.
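In countermanding studies, the race model's estimate of the time needed to cancel a planned movement (the stop-signal reaction time, SSRT) is often obtained with the integration method. The sketch below shows that general method under simplifying assumptions (a fixed mean stop-signal delay and a complete go-RT distribution); the study's exact estimation procedure may differ.

```python
def ssrt_integration(go_rts, mean_ssd, p_respond):
    """Estimate stop-signal reaction time with the integration method.

    go_rts: reaction times on go trials (ms)
    mean_ssd: mean stop-signal delay (ms)
    p_respond: proportion of stop trials with a failed inhibition

    The RT at the quantile of the go distribution matching the failure
    rate marks the finishing time of the go process that the stop
    process just beats; subtracting the mean delay yields SSRT.
    """
    rts = sorted(go_rts)
    idx = min(int(round(p_respond * len(rts))), len(rts) - 1)
    return rts[idx] - mean_ssd
```

For example, with go RTs uniformly spread between 300 and 500 ms, a 50% failure rate, and a 150 ms mean delay, the estimate is the median go RT (400 ms) minus 150 ms, i.e., an SSRT of 250 ms.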
Fabian Schnier; Markus Lappe
In: Journal of Neurophysiology, vol. 106, no. 3, pp. 1399–1410, 2011.
Saccadic adaptation is a mechanism to increase or decrease the amplitude gain of subsequent saccades, if a saccade is not on target. Recent research has shown that the mechanism of gain increasing, or outward adaptation, and the mechanism of gain decreasing, or inward adaptation, rely on partly different processes. We investigate how outward and inward adaptation of reactive saccades transfer to other types of saccades, namely scanning, overlap, memory-guided, and gap saccades. Previous research has shown that inward adaptation of reactive saccades transfers only partially to these other saccade types, suggesting differences in the control mechanisms between these saccade categories. We show that outward adaptation transfers more strongly to scanning and overlap saccades than inward adaptation does, and that the strength of transfer depends on the duration for which the saccade target is visible before saccade onset. Furthermore, we show that this transfer is mainly driven by an increase in saccade duration, which is apparent for all saccade categories. Inward adaptation, in contrast, is accompanied by a decrease in duration and in peak velocity, but only the peak velocity decrease transfers from reactive saccades to other saccade categories, i.e., saccadic duration remains constant or even increases for test saccades of the other categories. Our results, therefore, show that duration and peak velocity are independent parameters of saccadic adaptation and that they are differently involved in the transfer of adaptation between saccade categories. Furthermore, our results add evidence that inward and outward adaptation are different processes.
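Adaptation magnitude and its transfer between saccade categories are conventionally expressed in terms of saccadic gain. The helper functions below are an illustrative sketch of those conventional definitions; the function names and the percent-transfer convention are assumptions, not the authors' code.

```python
def gain(amplitude, target_step):
    """Saccadic gain: landing amplitude relative to target eccentricity.
    A gain below 1.0 means the saccade fell short of the target."""
    return amplitude / target_step

def transfer(test_gain_change, adapted_gain_change):
    """Percent transfer: the gain change observed in a test saccade
    category relative to the gain change induced in the adapted
    (here, reactive) category."""
    return 100.0 * test_gain_change / adapted_gain_change
```

For instance, a 9 deg saccade to a 10 deg target step has a gain of 0.9; if adaptation changed reactive-saccade gain by 0.10 but scanning-saccade gain by only 0.05, transfer to scanning saccades would be 50%.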
Alexander C. Schütz
In: Journal of Vision, vol. 11, no. 14, pp. 1–19, 2011.
When two overlapping, transparent surfaces move in different directions, there is ambiguity with respect to the depth ordering of the surfaces. Little is known about the surface features that are used to resolve this ambiguity. Here, we investigated the influence of different surface features on the perceived depth order and the direction of smooth pursuit eye movements. Surfaces containing more dots, moving opposite to an adapted direction, moving at a slower speed, or moving in the same direction as the eyes were more likely to be seen in the back. Smooth pursuit eye movements showed an initial preference for surfaces containing more dots, moving in a non-adapted direction, moving at a faster speed, and being composed of larger dots. After 300 to 500 ms, smooth pursuit eye movements adjusted to perception and followed the surface whose direction had to be indicated. The differences between perceived depth order and initial pursuit preferences and the slow adjustment of pursuit indicate that perceived depth order is not determined solely by the eye movements. The common effect of dot number and motion adaptation suggests that global motion strength can induce a bias to perceive the stronger motion in the back.