All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024) are listed below. You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking research grouped by area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2011 |
Mark B. Neider; Gregory J. Zelinsky Cutting through the clutter: Searching for targets in evolving realistic scenes Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 1–16, 2011. @article{Neider2011a, We evaluated the use of visual clutter as a surrogate measure of set size effects in visual search by comparing the effects of subjective clutter (determined by independent raters) and objective clutter (as quantified by edge count and feature congestion) using "evolving" scenes, ones that varied incrementally in clutter while maintaining their semantic continuity. Observers searched for a target building in rural, suburban, and urban city scenes created using the game SimCity. Stimuli were 30 screenshots obtained for each scene type as the city evolved over time. Reaction times and search guidance (measured by scan path ratio) were fastest/strongest for sparsely cluttered rural scenes, slower/weaker for more cluttered suburban scenes, and slowest/weakest for highly cluttered urban scenes. Subjective within-city clutter estimates also increased as each city matured and correlated highly with RT and search guidance. However, multiple regression modeling revealed that adding objective estimates failed to better predict search performance over the subjective estimates alone. This suggests that within-city clutter may not be explained exclusively by low-level feature congestion; conceptual congestion (e.g., the number of different types of buildings in a scene), part of the subjective clutter measure, may also be important in determining the effects of clutter on search. |
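For readers unfamiliar with the scan path ratio used above as a measure of search guidance: it is commonly computed as the total length of the eye-movement path divided by the straight-line distance from the starting fixation to the target, so values near 1 indicate strongly guided search. The formula below is this standard definition, stated for orientation only; it is not necessarily the exact implementation used by Neider and Zelinsky.

\[ \text{scan path ratio} = \frac{\sum_{i=1}^{n-1} \lVert f_{i+1} - f_i \rVert}{\lVert f_{\text{target}} - f_{1} \rVert} \]

Here f_1, ..., f_n are the fixation locations in temporal order and f_target is the target location.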
Nhung X. Nguyen; Andrea Stockum; Gesa A. Hahn; Susanne Trauzettel-Klosinski Training to improve reading speed in patients with juvenile macular dystrophy: A randomized study comparing two training methods Journal Article In: Acta Ophthalmologica, vol. 89, no. 1, pp. 82–88, 2011. @article{Nguyen2011, Purpose: In this study, we examined the clinical application of two training methods for optimizing reading ability in patients with juvenile macular dystrophy with established eccentric preferred retinal locus and optimal use of low-vision aids. Method: This randomized study included 36 patients with juvenile macular dystrophy (35 with Stargardt's disease and one with Best's disease). All patients have been using individually optimized low-vision aids. After careful ophthalmological examination, patients were randomized into two groups: Group 1: Training to read during rapid serial visual presentation (RSVP) with elimination of eye movements as far as possible (n = 20); Group 2: Training to optimize reading eye movements (SM, sensomotoric training) (n = 16). Only patients with magnification requirement up to sixfold were included in the study. Training was performed for 4 weeks with an intensity of ½ hr per day and 5 days a week. Reading speed during page reading was measured before and after training. Eye movements during silent reading were recorded before and after training using a video eye tracker in 11 patients (five patients of the SM and six of the RSVP training group) and using an infrared reflection system in five patients (three patients from the SM and two patients of the RSVP training group). Results: Age, visual acuity and magnification requirement did not differ significantly between the two groups. The median reading speed was 83 words per minute (wpm) (interquartile range 74–105 wpm) in the RSVP training group and 102 wpm (interquartile range 63–126 wpm) in the SM group before training, and increased significantly to 104 wpm (interquartile range 81–124 wpm) and 122 wpm (interquartile range 102–137 wpm; p = 0.01 and 0.006, respectively) after training, i.e. patients with RSVP training increased their reading speed by a median of 21 wpm, while it was 20 wpm in the SM group. There were individual patients who benefited strongly from the training. Eye movement recordings before and after training showed that in the RSVP group, increasing reading speed correlated with decreasing fixation duration (r = -0.75). |
Jianguang Ni; Huihui Jiang; Yixiang Jin; Nanhui Chen; Jianhong Wang; Zhengbo Wang; Yuejia Luo; Yuanye Ma; Xintian Hu Dissociable modulation of overt visual attention in valence and arousal revealed by topology of scan path Journal Article In: PLoS ONE, vol. 6, no. 4, pp. e18262, 2011. @article{Ni2011, Emotional stimuli have evolutionary significance for the survival of organisms; therefore, they are attention-grabbing and are processed preferentially. The neural underpinnings of two principal emotional dimensions in affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) that were graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns to stimuli with different affective ratings. Valence salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal salient stimuli (with neutral valence) elicited local scanning, dense attention allocation and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to the valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. Hence, visual attention seems to be modulated by mechanisms that are separate for valence and arousal. |
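As an illustration of the graph-based scan path metric described in the Ni et al. abstract above, the sketch below builds a weighted undirected path graph from an ordered list of fixations and returns its algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian, i.e. the Fiedler value). The inverse-saccade-amplitude edge weighting is an assumption made for illustration only; it is not taken from the paper.

# Minimal sketch of an algebraic-connectivity metric for a scan path,
# modeled as a weighted undirected path graph (consecutive fixations linked).
# The inverse-saccade-amplitude edge weight is an illustrative assumption.
import numpy as np

def algebraic_connectivity(fixations):
    """fixations: sequence of (x, y) fixation coordinates in temporal order."""
    fixations = np.asarray(fixations, dtype=float)
    n = len(fixations)
    W = np.zeros((n, n))
    for i in range(n - 1):
        amplitude = np.linalg.norm(fixations[i + 1] - fixations[i])
        w = 1.0 / (amplitude + 1e-9)      # shorter saccade -> stronger edge (assumed)
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
    return np.linalg.eigvalsh(L)[1]       # second-smallest eigenvalue (Fiedler value)

# Example: a four-fixation scan path (pixel coordinates)
print(algebraic_connectivity([(0, 0), (120, 0), (120, 90), (40, 60)]))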
Robert Niebergall; Paul S. Khayat; Stefan Treue; Julio C. Martinez-Trujillo Multifocal attention filters targets from distracters within and beyond primate MT neurons' receptive field boundaries Journal Article In: Neuron, vol. 72, no. 6, pp. 1067–1079, 2011. @article{Niebergall2011, Visual attention has been classically described as a spotlight that enhances the processing of a behaviorally relevant object. However, in many situations, humans and animals must simultaneously attend to several relevant objects separated by distracters. To account for this ability, various models of attention have been proposed including splitting of the attentional spotlight into multiple foci, zooming of the spotlight over a region of space, and switching of the spotlight among objects. We investigated this controversial issue by recording neuronal activity in visual area MT of two macaques while they attended to two translating objects that circumvented a third distracter object located inside the neurons' receptive field. We found that when the attended objects passed through or nearby the receptive field, neuronal responses to the distracter were either decreased or remained unaltered. These results demonstrate that attention can split into multiple spotlights corresponding to relevant objects while filtering out interspersed distracters. |
Robert Niebergall; Paul S. Khayat; Stefan Treue; Julio C. Martinez-Trujillo Expansion of MT neurons excitatory receptive fields during covert attentive tracking Journal Article In: Journal of Neuroscience, vol. 31, no. 43, pp. 15499–15510, 2011. @article{Niebergall2011a, Primates can attentively track moving objects while keeping gaze stationary. The neural mechanisms underlying this ability are poorly understood. We investigated this issue by recording responses of neurons in area MT of two rhesus monkeys while they performed two different tasks. During the Attend-Fixation task, two moving random dot patterns (RDPs) translated across a screen at the same speed and in the same direction while the animals directed gaze to a fixation spot and detected a change in its luminance. During the Tracking task, the animals kept gaze on the fixation spot and attentively tracked the two RDPs to report a change in the local speed of one of the patterns' dots. In both conditions, neuronal responses progressively increased as the RDPs entered the neurons' receptive field (RF), peaked when they reached its center, and decreased as they translated away. This response profile was well described by a Gaussian function with its center of gravity indicating the RF center and its flanks the RF excitatory borders. During Tracking, responses were increased relative to Attend-Fixation, causing the Gaussian profiles to expand. Such increases were proportionally larger in the RF periphery than at its center, and were accompanied by a decrease in the trial-to-trial response variability (Fano factor) relative to Attend-Fixation. These changes resulted in an increase in the neurons' performance at detecting targets at longer distances from the RF center. Our results show that attentive tracking dynamically changes MT neurons' RF profiles, ultimately improving the neurons' ability to encode the tracked stimulus features. |
Tanja C. W. Nijboer; Gabriela Satris; Stefan Van der Stigchel The influence of synesthesia on eye movements: No synesthetic pop-out in an oculomotor target selection task Journal Article In: Consciousness and Cognition, vol. 20, no. 4, pp. 1193–1200, 2011. @article{Nijboer2011, Recent research on grapheme-colour synesthesia has focused on whether visual attention is necessary to induce a synesthetic percept. The current study investigated the influence of synesthesia on overt visual attention during an oculomotor target selection task. Chromatic and achromatic stimuli were presented with one target among distractors (e.g. a '2' (target) among multiple '5's (distractors)). Participants executed an eye movement to the target. Synesthetes and controls showed a comparable target selection performance across conditions and a 'pop-out effect' was only seen in the chromatic condition. As a pop-out effect was absent for the synesthetes in the achromatic condition, a synesthetic element appears not to elicit a synesthetic colour, even when it is the target. The synesthetic percepts are not pre-attentively available to distinguish the synesthetic target from synesthetic distractors when elements are presented in the periphery. Synesthesia appears to require full recognition to bind form and colour. |
David J. Acunzo; John M. Henderson No emotional "Pop-out" effect in natural scene viewing Journal Article In: Emotion, vol. 11, no. 5, pp. 1134–1143, 2011. @article{Acunzo2011, It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided. |
Carlos Aguilar; Eric Castet Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies Journal Article In: Vision Research, vol. 51, no. 9, pp. 997–1012, 2011. @article{Aguilar2011, Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculo-motor patterns, we show that such a minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays. |
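To make the two problems identified by Aguilar and Castet concrete, the sketch below shows one possible set of gaze-contingent rules: the simulated scotoma is frozen whenever the gaze sample is invalid (blinks) and is re-anchored only during stable fixation, so post-saccadic slow eye movements do not drag it around. This is an illustrative sketch only, not the authors' algorithm; the sample format and velocity threshold are assumptions.

# Illustrative sketch only: robust gaze-contingent placement of a simulated scotoma.
# samples: list of (x, y, velocity_deg_per_s, valid) tuples; the threshold is assumed.
def scotoma_positions(samples, drift_velocity=5.0):
    positions, scotoma = [], None
    for x, y, vel, valid in samples:
        if valid and (scotoma is None or vel < drift_velocity):
            # Re-anchor only on valid, low-velocity samples: blinks (valid=False)
            # and post-saccadic slow drift (vel >= drift_velocity) leave the mask in place.
            scotoma = (x, y)
        positions.append(scotoma)
    return positions

# Example: the mask stays put through a blink, a saccade, and slow post-saccadic drift,
# then follows gaze once fixation stabilizes.
print(scotoma_positions([(0, 0, 1, True), (0, 0, 0, False),
                         (5, 0, 400, True), (5.2, 0, 8, True), (5.3, 0, 1, True)]))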
Mehrnoosh Ahmadi; Mitra Judi; Anahita Khorrami; Javad Mahmoudi-Gharaei; Mehdi Tehrani-Doost Initial orientation of attention towards emotional faces in children with attention deficit hyperactivity disorder Journal Article In: Iranian Journal of Psychiatry, vol. 6, no. 3, pp. 87–91, 2011. @article{Ahmadi2011, OBJECTIVE: Early recognition of negative emotions is considered to be of vital importance. It seems that children with attention deficit hyperactivity disorder have some difficulties recognizing facial emotional expressions, especially negative ones. This study investigated the preference of children with attention deficit hyperactivity disorder for negative (angry, sad) facial expressions compared to normal children. METHOD: Participants were 35 drug-naive boys with ADHD, aged between 6 and 11 years, and 31 matched healthy children. Visual orientation data were recorded while participants viewed face pairs (negative-neutral pairs) shown for 3000 ms. The number of first fixations made to each expression was considered as an index of initial orientation. RESULTS: Group comparisons revealed no difference between the attention deficit hyperactivity disorder group and their matched healthy counterparts in initial orientation of attention. A tendency towards negative emotions was found within the normal group, while no difference was observed between initial allocation of attention toward negative and neutral expressions in children with ADHD. CONCLUSION: Children with attention deficit hyperactivity disorder do not have a significant preference for negative facial expressions. In contrast, normal children have a significant preference for negative facial emotions rather than neutral faces. |
Ian C. Fiebelkorn; John J. Foxe; John S. Butler; Manuel R. Mercier; Adam C. Snyder; Sophie Molholm Ready, set, reset: Stimulus-locked periodicity in behavioral performance demonstrates the consequences of cross-sensory phase reset Journal Article In: Journal of Neuroscience, vol. 31, no. 27, pp. 9971–9981, 2011. @article{Fiebelkorn2011, The simultaneous presentation of a stimulus in one sensory modality often enhances target detection in another sensory modality, but the neural mechanisms that govern these effects are still under investigation. Here, we test a hypothesis proposed in the neurophysiological literature: that auditory facilitation of visual-target detection operates through cross-sensory phase reset of ongoing neural oscillations (Lakatos et al., 2009). To date, measurement limitations have prevented this potentially powerful neural mechanism from being directly linked with its predicted behavioral consequences. The present experiment uses a psychophysical approach in humans to demonstrate, for the first time, stimulus-locked periodicity in visual-target detection, following a temporally informative sound. Our data further demonstrate that periodicity in behavioral performance is strongly influenced by the probability of audiovisual co-occurrence. We argue that fluctuations in visual-target detection result from cross-sensory phase reset, both at the moment it occurs and persisting for seconds thereafter. The precise frequency at which this periodicity operates remains to be determined through a method that allows for a higher sampling rate. |
Katja Fiehler; Immo Schütz; Denise Y. P. Henriques Gaze-centered spatial updating of reach targets across different memory delays Journal Article In: Vision Research, vol. 51, no. 8, pp. 890–897, 2011. @article{Fiehler2011, Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we want to test whether reach targets are updated relative to gaze following different time delays. Reaching endpoints systematically varied as a function of gaze relative to target irrespective of whether the action was executed immediately or after a delay of 5 s, 8 s or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present. |
Ruth Filik; Emma Barber Inner speech during silent reading reflects the reader's regional accent Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e25782, 2011. @article{Filik2011, While reading silently, we often have the subjective experience of inner speech. However, there is currently little evidence regarding whether this inner voice resembles our own voice while we are speaking out loud. To investigate this issue, we compared reading behaviour of Northern and Southern English participants who have differing pronunciations for words like 'glass', in which the vowel duration is short in a Northern accent and long in a Southern accent. Participants' eye movements were monitored while they silently read limericks in which the end words of the first two lines (e.g., glass/class) would be pronounced differently by Northern and Southern participants. The final word of the limerick (e.g., mass/sparse) then either did or did not rhyme, depending on the reader's accent. Results showed disruption to eye movement behaviour when the final word did not rhyme, determined by the reader's accent, suggesting that inner speech resembles our own voice. |
C. D. Fiorillo Transient activation of midbrain dopamine neurons by reward risk Journal Article In: Neuroscience, vol. 197, pp. 162–171, 2011. @article{Fiorillo2011, Dopamine neurons of the ventral midbrain are activated transiently following stimuli that predict future reward. This response has been shown to signal the expected value of future reward, and there is strong evidence that it drives positive reinforcement of stimuli and actions associated with reward in accord with reinforcement learning models. Behavior is also influenced by reward uncertainty, or risk, but it is not known whether the transient response of dopamine neurons is sensitive to reward risk. To investigate this, monkeys were trained to associate distinct visual stimuli with certain or uncertain volumes of juice of nearly the same expected value. In a choice task, monkeys preferred the stimulus predicting an uncertain (risky) reward outcome. In a Pavlovian task, in which the neuronal responses to each stimulus could be measured in isolation, it was found that dopamine neurons were more strongly activated by the stimulus associated with reward risk. Given extensive evidence that dopamine drives reinforcement, these results strongly suggest that dopamine neurons can reinforce risk-seeking behavior (gambling), at least under certain conditions. Risk-seeking behavior has the virtue of promoting exploration and learning, and these results support the hypothesis that dopamine neurons represent the value of exploration. |
Gemma Fitzsimmons; Denis Drieghe The influence of number of syllables on word skipping during reading Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 4, pp. 736–741, 2011. @article{Fitzsimmons2011, In an eye-tracking experiment, participants read sentences containing a monosyllabic (e.g., grain) or a disyllabic (e.g., cargo) five-letter word. Monosyllabic target words were skipped more often than disyllabic target words, indicating that syllabic structure was extracted from the parafovea early enough to influence the decision of saccade target selection. Fixation times on the target word when it was fixated did not show an influence of number of syllables, demonstrating that number of syllables differentially impacts skipping rates and fixation durations during reading. |
Heather Flowe An exploration of visual behaviour in eyewitness identification tests Journal Article In: Applied Cognitive Psychology, vol. 25, no. 2, pp. 244–254, 2011. @article{Flowe2011, The contribution of internal (eyes, nose and mouth) and external (hair-line, cheek and jaw-line) features across eyewitness identification tests was examined using eye tracking. In Experiment 1, participants studied faces and were tested with lineups, either simultaneous (test faces presented in an array) or sequential (test faces presented one at a time). In Experiment 2, the recognition of previously studied faces was tested in a showup (a suspect face alone was presented). Results indicated that foils were analysed for a shorter period of time in the simultaneous compared to the sequential condition, whereas a positively identified face was analysed for a comparable period of time across lineup procedures. In simultaneous lineups and showups, a greater proportion of time was spent analysing internal features of the test faces compared to sequential lineups. Different decision processes across eyewitness identification tests are inferred based on the results. |
Heather Flowe; Garrison W. Cottrell An examination of simultaneous lineup identification decision processes using eye tracking Journal Article In: Applied Cognitive Psychology, vol. 25, pp. 443–451, 2011. @article{Flowe2011a, Decision processes in simultaneous lineups (an array of faces in which a 'suspect' face is displayed along with foil faces) were examined using eye tracking to capture the length and number of times that individual faces were visually analysed. The similarity of the lineup target face relative to the study face was manipulated, and face dwell times on the first visit and on return visits to the individual lineup faces were measured. On first visits, positively identified faces were examined for a longer duration compared to faces that were not identified. When no face was identified from the lineup, the suspect was visited for a longer duration compared to a foil face. On return visits, incorrectly identified faces were examined for a longer duration and visited more often compared to correctly identified faces. The results indicate that lineup decisions can be predicted by face dwell time and the number of visits made to faces. |
Angélica Pérez Fornos; Jörg Sommerhalder; Marco Pelizzone Reading with a simulated 60-channel implant Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 57, 2011. @article{Fornos2011, First generation retinal prostheses containing 50-60 electrodes are currently in clinical trials. The purpose of this study was to evaluate the theoretical upper limit (best possible) reading performance attainable with a state-of-the-art 60-channel retinal implant and to find the optimum viewing conditions for the task. Four normal volunteers performed full-page text reading tasks with a low-resolution, 60-pixel viewing window that was stabilized in the central visual field. Two parameters were systematically varied: (1) spatial resolution (image magnification) and (2) the orientation of the rectangular viewing window. Performance was measured in terms of reading accuracy (% of correctly read words) and reading rates (words/min). Maximum reading performances were reached at spatial resolutions between 3.6 and 6 pixels/char. Performance declined outside this range for all subjects. In optimum viewing conditions (4.5 pixels/char), subjects achieved almost perfect reading accuracy and mean reading rates of 26 words/min for the vertical viewing window and of 34 words/min for the horizontal viewing window. These results suggest that, theoretically, some reading abilities can be restored with actual state-of-the-art retinal implant prototypes if "image magnification" is within an "optimum range." Future retinal implants providing higher pixel resolutions, thus allowing for a wider visual span might allow faster reading rates. |
Jelmer P. De Vries; Ignace T. C. Hooge; Marco A. Wiering; Frans A. J. Verstraten Saccadic selection and crowding in visual search: Stronger lateral masking leads to shorter search times Journal Article In: Experimental Brain Research, vol. 211, no. 1, pp. 119–131, 2011. @article{DeVries2011, We investigated the role of crowding in saccadic selection during visual search. To guide eye movements, often information from the visual periphery is used. Crowding is known to deteriorate the quality of peripheral information. In four search experiments, we studied the role of crowding, by accompanying individual search elements by flankers. Varying the difference between target and flankers allowed us to manipulate crowding strength throughout the stimulus. We found that eye movements are biased toward areas with little crowding for conditions where a target could be discriminated peripherally. Interestingly, for conditions in which the target could not be discriminated peripherally, this bias reversed to areas with strong crowding. This led to shorter search times for a target presented in areas with stronger crowding, compared to a target presented in areas with less crowding. These findings suggest a dual role for crowding in visual search. The presence of flankers similar to the target deteriorates the quality of the peripheral target signal but can also attract eye movements, as more potential targets are present over the area. |
Jelmer P. De Vries; Ignace T. C. Hooge; Marco A. Wiering; Frans A. J. Verstraten How longer saccade latencies lead to a competition for salience Journal Article In: Psychological Science, vol. 22, no. 7, pp. 916–923, 2011. @article{Vries2011, It has been suggested that independent bottom-up and top-down processes govern saccadic selection. However, recent findings are hard to explain in such terms. We hypothesized that differences in visual-processing time can explain these findings, and we tested this using search displays containing two deviating elements, one requiring a short processing time and one requiring a long processing time. Following short saccade latencies, the deviation requiring less processing time was selected most frequently. This bias disappeared following long saccade latencies. Our results suggest that an element that attracts eye movements following short saccade latencies does so because it is the only element processed at that time. The temporal constraints of processing visual information therefore seem to be a determining factor in saccadic selection. Thus, relative saliency is a time-dependent phenomenon. |
Louis F. Dell'Osso; Richard W. Hertle; R. John Leigh; Jonathan B. Jacobs; Susan King; Stacia Yaniglos Effects of topical brinzolamide on infantile nystagmus syndrome waveforms: Eyedrops for nystagmus Journal Article In: Journal of Neuro-Ophthalmology, vol. 31, no. 3, pp. 228–233, 2011. @article{DellOsso2011, BACKGROUND: Recent advances in infantile nystagmus syndrome (INS) surgery have uncovered the therapeutic importance of proprioception. In this report, we test the hypothesis that the topical carbonic anhydrase inhibitor (CAI) brinzolamide (Azopt) has beneficial effects on measures of nystagmus foveation quality in a subject with INS. METHODS: Eye movement data were taken, using a high-speed digital video recording system, before and after 3 days of the application of topical brinzolamide 3 times daily in each eye. Nystagmus waveforms were analyzed by applying the eXpanded Nystagmus Acuity Function (NAFX) at different gaze angles and determining the longest foveation domain (LFD) and compared to previously published data from the same subject after the use of a systemic CAI, contact lenses, and convergence and to other subjects before and after eye muscle surgery for INS. RESULTS: Topical brinzolamide improved foveation by both a 51.9% increase in the peak value of the NAFX function (from 0.395 to 0.600) and a 50% broadening of the NAFX vs Gaze Angle curve (the LFD increased from 20 degrees to 30 degrees). The improvements in NAFX after topical brinzolamide were equivalent to systemic acetazolamide or eye muscle surgery and were intermediate between those of soft contact lenses or convergence. Topical brinzolamide and contact lenses had equivalent LFD improvements and were less effective than convergence. CONCLUSIONS: In this subject with INS, topical brinzolamide resulted in improved-foveation INS waveforms over a broadened range of gaze angles. Its therapeutic effects were equivalent to systemic CAI. Although a prospective clinical trial is needed to prove efficacy or effectiveness in other subjects, an eyedrops-based therapy for INS may emerge as a viable addition to optical, surgical, behavioral, and systemic drug therapies. |
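The percentages quoted in the Dell'Osso et al. abstract above are simple relative changes and can be checked directly from the reported values:

\[ \frac{0.600 - 0.395}{0.395} \approx 0.519 \;\;(51.9\%\ \text{increase in peak NAFX}), \qquad \frac{30^\circ - 20^\circ}{20^\circ} = 0.50 \;\;(50\%\ \text{broadening of the LFD}) \]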
Stefan Van der Stigchel; Puck Imants; K. Richard Ridderinkhof Positive affect increases cognitive control in the antisaccade task Journal Article In: Brain and Cognition, vol. 75, no. 2, pp. 177–181, 2011. @article{Stigchel2011, To delineate the modulatory effects of induced positive affect on cognitive control, the current study investigated whether positive affect increases the ability to suppress a reflexive saccade in the antisaccade task. Results of the antisaccade task showed that participants made fewer erroneous prosaccades in the condition in which a positive mood was induced compared to the neutral condition (i.e. in which no emotional mood was induced). This improvement of oculomotor inhibition was restricted to saccades with an express latency. These results are in line with the idea that enhanced performance in the positive affect condition could be caused by increased dopaminergic neurotransmission in the brain. |
Loni Desanghere; J. J. Marotta "Graspability" of objects affects gaze patterns during perception and action tasks Journal Article In: Experimental Brain Research, vol. 212, no. 2, pp. 177–187, 2011. @article{Desanghere2011, When grasping an object, our gaze marks key positions to which the fingertips are directed. In contrast, eye fixations during perceptual tasks are typically concentrated on an object's centre of mass (COM). However, previous studies have typically required subjects to either grasp the object at predetermined sites or just look at computer-generated shapes "as a whole". In the current study, we investigated gaze fixations during a reaching and grasping task to symmetrical objects and compared these fixations with those made during a perceptual size estimation task using real (Experiment 1) and computer-generated objects (Experiment 2). Our results demonstrated similar gaze patterns in both perception and action to real objects. Participants first fixated a location towards the top edge of the object, consistent with index finger location during grasping, followed by a subsequent fixation towards the object's COM. In contrast, during the perceptual task to computer-generated objects, an opposite pattern in fixation locations was observed, where first fixations were closer to the COM, followed by a subsequent fixation towards the top edge. Even though differential fixation patterns were observed between studies, the area in which these fixations occurred, between the centre of the object and top edge, was the same in all tasks. These results demonstrate for the first time consistencies in fixation locations across both perception and action tasks, particularly when the same type of information (e.g. object size) is important for the completion of both tasks, with fixation locations increasing relative to the object's COM with increases in block height. |
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold Attentional guidance during visual search among patients with schizophrenia Journal Article In: Schizophrenia Research, vol. 131, no. 1-3, pp. 224–230, 2011. @article{Elahipanah2011, The current study investigated visual guidance and saccadic selectivity during visual search among patients with schizophrenia (SCZ). Data from a previous study (Elahipanah, A., Christensen, B.K., & Reingold, E.M., 2008. Visual selective attention among persons with schizophrenia: The distractor ratio effect. Schizophr. Res. 105, 61-67.) suggested that visual guidance for the less frequent distractors in a conjunction search display (i.e., the distractor ratio effect) is intact among SCZ patients. The current study investigated the distractor ratio effect among SCZ patients when: 1) search is more demanding, and 2) search involves motion perception. In addition, eye tracking was employed to directly study saccadic selectivity for the different types of distractors. Twenty-eight SCZ patients receiving a single antipsychotic medication and 26 healthy control participants performed two conjunction search tasks: a within-dimension (i.e., colour × colour) search task; and a cross-dimension (i.e., motion × colour) search task. In each task the relative frequency of distractors was manipulated across 5 levels. Despite slower search times, patients' eye movement data indicated unimpaired visual guidance in both tasks. However, in the motion × colour conjunction search task, patients displayed disproportionate difficulty detecting the moving target when the majority of distractors were also moving. Results demonstrate that bottom-up attentional guidance is unimpaired among patients with SCZ; however, patients' impairment in motion discrimination impedes their ability to detect a moving target against noisy backgrounds. |
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold What can eye movements tell us about Symbol Digit substitution by patients with schizophrenia? Journal Article In: Schizophrenia Research, vol. 127, no. 1-3, pp. 137–143, 2011. @article{Elahipanah2011a, Substitution tests are sensitive to cognitive impairment and reliably discriminate patients with schizophrenia from healthy individuals better than most other neuropsychological instruments. However, due to their multifaceted nature, substitution test scores cannot pinpoint the specific cognitive deficits that lead to poor performance. The current study investigated eye movements during performance on a substitution test in order to better understand what aspect of substitution test performance underlies schizophrenia-related impairment. Twenty-five patients with schizophrenia and 25 healthy individuals performed a computerized version of the Symbol Digit Modalities Test while their eye movements were monitored. As expected, patients achieved lower overall performance scores. Moreover, analysis of participants' eye movements revealed that patients spent more time searching for the target symbol every time they visited the key area. Patients also made more visits to the key area for each response that they made. Regression analysis suggested that patients' impaired performance on substitution tasks is primarily related to a less efficient visual search and, secondarily, to impaired memory. |
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold Controlling the spotlight of attention: Visual span size and flexibility in schizophrenia Journal Article In: Neuropsychologia, vol. 49, no. 12, pp. 3370–3376, 2011. @article{Elahipanah2011b, The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. |
Jessica J. Ellis; Mackenzie G. Glaholt; Eyal M. Reingold Eye movements reveal solution knowledge prior to insight Journal Article In: Consciousness and Cognition, vol. 20, no. 3, pp. 768–776, 2011. @article{Ellis2011, In two experiments, participants solved anagram problems while their eye movements were monitored. Each problem consisted of a circular array of five letters: a scrambled four-letter solution word containing three consonants and one vowel, and an additional randomly-placed distractor consonant. Viewing times on the distractor consonant compared to the solution consonants provided an online measure of knowledge of the solution. Viewing times on the distractor consonant and the solution consonants were indistinguishable early in the trial. In contrast, several seconds prior to the response, viewing times on the distractor consonant decreased in a gradual manner compared to viewing times on the solution consonants. Importantly, this pattern was obtained across both trials in which participants reported the subjective experience of insight and trials in which they did not. These findings are consistent with the availability of partial knowledge of the solution prior to such information being accessible to subjective phenomenal awareness. |
Jan Drewes; Julia Trommershäuser; Karl R. Gegenfurtner Parallel visual search and rapid animal detection in natural scenes Journal Article In: Journal of Vision, vol. 11, no. 2, pp. 1–21, 2011. @article{Drewes2011, Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Recent studies found human response times to be as fast as 120 ms in a dual-presentation (2-AFC) setup (H. Kirchner & S. J. Thorpe, 2005). In most previous experiments, pairs of randomly chosen images were presented, frequently from very different contexts (e.g., a zebra in Africa vs. the New York Skyline). Here, we tested the effect of background size and contiguity on human performance by using a new, contiguous background image set. Individual images contained a single animal surrounded by a large, animal-free image area. The image could be positioned and cropped in such a manner that the animal could occur in one of eight evenly spaced positions on an imaginary circle (radius 10-deg visual angle). In the first (8-Choice) experiment, all eight positions were used, whereas in the second (2-Choice) and third (2-Image) experiments, the animals were only presented on the two positions to the left and right of the screen center. In the third experiment, additional rectangular frames were used to mimic the conditions of earlier studies. Average latencies on successful trials differed only slightly between conditions, indicating that the number of possible animal locations within the display does not affect decision latency. Detailed analysis of saccade targets revealed a preference toward both the head and the center of gravity of the target animal, affecting hit ratio, latency, and the number of saccades required to reach the target. These results illustrate that rapid animal detection operates scene-wide and is fast and efficient even when the animals are embedded in their natural backgrounds. |
Peter J. Etchells; Christopher P. Benton; Casimir J. H. Ludwig; Iain D. Gilchrist Testing a simplified method for measuring velocity integration in saccades using a manipulation of target contrast Journal Article In: Frontiers in Psychology, vol. 2, pp. 115, 2011. @article{Etchells2011, A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low when compared to high contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs. |
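The "weighted combination" analysis mentioned in the Etchells et al. abstract can be written schematically as below; the notation is ours, not the authors', and is included only to make the logic explicit. The target velocity implied by the saccade landing position is modeled as a mixture of the pre- and post-step velocities, and plotting the weight against the interval between the velocity step and the saccade traces out the temporal velocity integration window.

\[ \hat{v}(t) = w(t)\, v_{\text{pre}} + \bigl(1 - w(t)\bigr)\, v_{\text{post}}, \qquad 0 \le w(t) \le 1 \]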
William S. Evans; David Caplan; Gloria Waters Effects of concurrent arithmetical and syntactic complexity on self-paced reaction times and eye fixations Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1203–1211, 2011. @article{Evans2011, Two dual-task experiments (replications of Experiments 1 and 2 in Fedorenko, Gibson, & Rohde, Journal of Memory and Language, 56, 246-269 2007) were conducted to determine whether syntactic and arithmetical operations share working memory resources. Subjects read object- or subject-extracted relative clause sentences phrase by phrase in a self-paced task while simultaneously adding or subtracting numbers. Experiment 2 measured eye fixations as well as self-paced reaction times. In both experiments, there were main effects of syntax and of mathematical operation on self-paced reading times, but no interaction of the two. In the Experiment 2 eye-tracking results, there were main effects of syntax on first-pass reading time and total reading time and an interaction between syntax and math in total reading time on the noun phrase within the relative clause. The findings point to differences in the ways individuals process sentences under these dual-task conditions, as compared with viewing sentences during "normal" reading conditions, and do not support the view that arithmetical and syntactic integration operations share a working memory system. |
Nathan Faivre; Sid Kouider Increased sensory evidence reverses nonconscious priming during crowding Journal Article In: Journal of Vision, vol. 11, no. 13, pp. 1–13, 2011. @article{Faivre2011, Sensory adaptation reflects the fact that the responsiveness of a perceptual system changes after the processing of a specific stimulus. Two manifestations of this property have been used in order to infer the mechanisms underlying vision: priming, in which the processing of a target is facilitated by prior exposure to a related adaptor, and habituation, in which this processing is hurt by overexposure to an adaptor. In the present study, we investigated the link between priming and habituation by measuring how sensory evidence (short vs. long adaptor exposure) and perceptual awareness (discriminable vs. undiscriminable adaptor stimulus) affects the adaptive response on a related target. Relying on gaze-contingent crowding, we manipulated independently adaptor discriminability and adaptor duration and inferred sensory adaptation from reaction times on the discrimination of a subsequent oriented target. When adaptor orientation was undiscriminable, we found that increasing its duration reversed priming into habituation. When adaptor orientation was discriminable, priming effects were larger after short exposure, but increasing adaptor duration led to a decrease of priming instead of a reverse into habituation. We discuss our results as reflecting changes in the temporal dynamics of angular orientation processing, depending on the mechanisms associated with perceptual awareness and attentional amplification. |
Nathan Faivre; Sid Kouider Multi-feature objects elicit nonconscious priming despite crowding Journal Article In: Journal of Vision, vol. 11, no. 3, pp. 1–10, 2011. @article{Faivre2011a, The conscious representation we build from the visual environment appears jumbled in the periphery, reflecting a phenomenon known as crowding. Yet, it remains possible that object-level representations (i.e., resulting from the binding of the stimulus' different features) are preserved even if they are not consciously accessible. With a paradigm involving gaze-contingent substitution, which allows us to ensure the constant absence of peripheral stimulus discrimination, we show that, despite their jumbled appearance, multi-feature crowded objects, such as faces and directional symbols, are encoded in a nonconscious manner and can influence subsequent behavior. Furthermore, we show that the encoding of complex crowded contents is modulated by attention in the absence of consciousness. These results, in addition to bringing new insights concerning the fate of crowded information, illustrate the potential of the Gaze-Contingent Crowding (GCC) approach for probing nonconscious cognition. |
Joost Felius; Valeria L. N. Fu; Eileen E. Birch; Richard W. Hertle; Reed M. Jost; Vidhya Subramanian Quantifying nystagmus in infants and young children: Relation between foveation and visual acuity deficit Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 12, pp. 8724–8731, 2011. @article{Felius2011, PURPOSE. Nystagmus eye movement data from infants and young children are often not suitable for advanced quantitative analysis. A method was developed to capture useful information from noisy data and validate the technique by showing meaningful relationships with visual functioning. METHODS. Horizontal eye movements from patients (age 5 months–8 years) with idiopathic infantile nystagmus syndrome (INS) were used to develop a quantitative outcome measure that allowed for head and body movement during the recording. The validity of this outcome was assessed by evaluating its relation to visual acuity deficit in 130 subjects, its relation to actual fixation as assessed under simultaneous fundus imaging, its correlation with the established expanded nystagmus acuity function (NAFX), and its test–retest variability. RESULTS. The nystagmus optimal fixation function (NOFF) was defined as the logit transform of the fraction of data points meeting position and velocity criteria within a moving window. A decreasing exponential relationship was found between visual acuity deficit and the NOFF, yielding a 0.75 logMAR deficit for the poorest NOFF and diminishing deficits with improving foveation. As much as 96% of the points identified as foveation events fell within 0.25° of the actual target. Good correlation (r = 0.96) was found between NOFF and NAFX. Test–retest variability was 0.49 logit units. CONCLUSIONS. The NOFF is a feasible method to quantify noisy nystagmus eye movement data. Its validation makes it a promising outcome measure for the progression and treatment of nystagmus during early childhood. |
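Written out, the NOFF described in the Felius et al. abstract is the logit of the fraction p of eye-position samples within a moving window that satisfy the position and velocity (foveation) criteria, using the standard logit form:

\[ \mathrm{NOFF} = \operatorname{logit}(p) = \ln\!\left(\frac{p}{1-p}\right) \]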
Joost C. Dessing; J. Douglas Crawford; W. Pieter Medendorp Spatial updating across saccades during manual interception Journal Article In: Journal of Vision, vol. 11, no. 10, pp. 1–18, 2011. @article{Dessing2011, We studied the effect of intervening saccades on the manual interception of a moving target. Previous studies suggest that stationary reach goals are coded and updated across saccades in gaze-centered coordinates, but whether this generalizes to interception is unknown. Subjects (n = 9) reached to manually intercept a moving target after it was rendered invisible. Subjects either fixated throughout the trial or made a saccade before reaching (both fixation points were in the range of -10° to 10°). Consistent with previous findings and our control experiment with stationary targets, the interception errors depended on the direction of the remembered moving goal relative to the new eye position, as if the target is coded and updated across the saccade in gaze-centered coordinates. However, our results were also more variable in that the interception errors for more than half of our subjects also depended on the goal direction relative to the initial gaze direction. This suggests that the feedforward transformations for interception differ from those for stationary targets. Our analyses show that the interception errors reflect a combination of biases in the (gaze-centered) representation of target motion and in the transformation of goal information into body-centered coordinates for action. |
Leandro Luigi Di Stasi; Adoración Antolí; José J. Cañas Main sequence: An index for detecting mental workload variation in complex tasks Journal Article In: Applied Ergonomics, vol. 42, no. 6, pp. 807–813, 2011. @article{DiStasi2011a, The primary aim of this study was to validate the saccadic main sequence, in particular the peak velocity [PV], as an alternative psychophysiological measure of Mental Workload [MW]. Taking Wickens' multiple resource model as the theoretical framework of reference, an experiment was conducted using the Firechief® microworld. MW was manipulated by changing the task complexity (between groups) and the amount of training (within groups). There were significant effects on PV from both factors. These results provide additional empirical support for the sensitivity of PV to discriminate MW variation on visual-dynamic complex tasks. These findings and other recent results on PV could provide important information for the development of a new vigilance screening tool for the prevention of accidents in several fields of applied ergonomics. |
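For context on the "main sequence" used by Di Stasi et al.: the main sequence is the stereotyped relationship between saccade amplitude and peak velocity (and duration). A commonly used parameterization of the amplitude–peak-velocity relation is the saturating exponential below; this general form is given for orientation only and is not necessarily the model fitted in the paper.

\[ \mathrm{PV}(A) = V_{\max}\left(1 - e^{-A/C}\right) \]

Here A is saccade amplitude, V_max the asymptotic peak velocity, and C a constant governing how quickly peak velocity saturates.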
Leandro Luigi Di Stasi; Adoración Antolí; Miguel Gea; José J. Cañas A neuroergonomic approach to evaluating mental workload in hypermedia interactions Journal Article In: International Journal of Industrial Ergonomics, vol. 41, no. 3, pp. 298–304, 2011. @article{DiStasi2011b, Neuroergonomics could provide on-line methods for measuring mental effort while the operator interacts with hypermedia. We present an experimental study in which 28 participants interacted with a modified version of an existing Spanish e-commerce website in two searching tasks (Goal oriented shopping and Experiential shopping) that demand different amounts of cognitive resources. Mental workload was evaluated multidimensionally, using subjective rating, an interaction index, and eye-related indices. Eye movements and pupil diameter were recorded. The results showed visual scanning behaviour coincided with subjective test scores and performance data in showing a higher information processing load in Goal oriented shopping. However, pupil diameter was able to detect only the variation in user activation during the interaction task, a finding that replicates previous results on the validity of pupil size as an index of arousal. We conclude that a neuroergonomics approach could be a useful method for detecting variations in operators' attentional states. Relevance to industry: These results could provide important information for the development of a new attentional screening tool for the prevention of accidents in several application domains. |
Leandro Luigi Di Stasi; D. Contreras; Antonio Cándido; José J. Cañas; A. Catena Behavioral and eye-movement measures to track improvements in driving skills of vulnerable road users: First-time motorcycle riders Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 14, no. 1, pp. 26–35, 2011. @article{DiStasi2011, Motorcyclist deaths and injuries follow the trend in sales rather than in growth in the number of motorcycles, suggesting that fatalities are related to the lack of driver experience with recently purchased motorcycles. The aim of the present investigation was to assess the effects of experience and training in hazard perception. We compared first-time riders (people who are not yet riders/drivers) before and after training in six different riding scenarios to expert motorcycle riders. Thirty-three participants took part in the experiment. Volunteers rode a moped in a fixed-base virtual environment and were presented with a number of preset risky events. We used a multidimensional methodology, including behavioral, subjective and eye-movements data. The results revealed differences between experts and first-time riders, as well as the effect of training on the novice group. As expected, training led to an improvement in the riding skills of first-time riders, reducing the number of accidents, improving their capacity to adapt their speed to the situation, reducing trajectory-corrective movements, and changing their pattern of gaze exploration. We identified several behavioral and eye-related measures that are sensitive to both long-term experience and training in motorcycle riders. These findings will be useful for the design of on-line monitoring systems to evaluate changes in risk behavior and of programs for preventing and controlling risk behavior and improving situation awareness for novice riders, with the ultimate aim of reducing road-user mortality. |
Alan F. Dixson; Barnaby J. Dixson Venus figurines of the European Paleolithic: Symbols of fertility or attractiveness? Journal Article In: Journal of Anthropology, vol. 2011, pp. 1–11, 2011. @article{Dixson2011, The earliest known representations of the human female form are the European Paleolithic “Venus figurines,” ranging in age from 23,000 to 25,000 years. We asked participants to rate images of Paleolithic figurines for their attractiveness, age grouping and reproductive status. Attractiveness was positively correlated with measures of the waist-to-hip ratio (WHR) of figurines, consistent with the “sexually attractive symbolism” hypothesis. However, most figurines had high WHRs (>1.0) and received low attractiveness scores. Participants rated most figurines as representing middle-aged or young adult women, rather than being adolescent or older (postmenopausal). While some were considered to represent pregnant women, consistent with the “fertility symbol” hypothesis, most were judged as being non-pregnant. Some figurines depict obese, large-breasted women, who are in their mature reproductive years and usually regarded as being of lower attractiveness. At the time these figurines were made, Europe was in the grip of a severe ice age. Obesity and survival into middle age after multiple pregnancies may have been rare in the European Upper Paleolithic. We suggest that depictions of corpulent, middle-aged females were not “Venuses” in any conventional sense. They may, instead, have symbolized the hope for survival and longevity, within well-nourished and reproductively successful communities. |
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson Eye tracking of men's preferences for female breast size and areola pigmentation Journal Article In: Archives of Sexual Behavior, vol. 40, no. 1, pp. 51–58, 2011. @article{Dixson2011a, Sexual selection via male mate choice has often been implicated in the evolution of permanently enlarged breasts in women. While questionnaire studies have shown that men find female breasts visually attractive, there is very little information about how they make such visual judgments. In this study, we used eye-tracking technology to test two hypotheses: (1) that larger breasts should receive the greatest number of visual fixations and longest dwell times, as well as being rated as most attractive; (2) that lightly pigmented areolae, indicative of youth and nubility, should receive most visual attention and be rated as most attractive. Results showed that men rated images with medium-sized or large breasts as significantly more attractive than small breasts. Images with dark and medium areolar pigmentation were rated as more attractive than images with light areolae. However, variations in breast size had no significant effect on eye-tracking measures (initial visual fixations, number of fixations, and dwell times). The majority of initial fixations during eye-tracking tests were on the areolae. However, areolar pigmentation did not affect measures of visual attention. While these results demonstrate that cues indicative of female sexual maturity (large breasts and dark areolae) are more attractive to men, patterns of eye movements did not differ based on breast size or areolar pigmentation. We conclude that areolar pigmentation, as well as breast size, plays a significant role in men's judgments of female attractiveness. However, fine-grained measures of men's visual attention to these morphological traits do not correlate, in a simplistic way, with their attractiveness judgments. |
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson Eye-tracking of men's preferences for waist-to-hip ratio and breast size of women Journal Article In: Archives of Sexual Behavior, vol. 40, no. 1, pp. 43–50, 2011. @article{Dixson2011b, Studies of human physical traits and mate preferences often use questionnaires asking participants to rate the attractiveness of images. Female waist-to-hip ratio (WHR), breast size, and facial appearance have all been implicated in assessments by men of female attractiveness. However, very little is known about how men make fine-grained visual assessments of such images. We used eye-tracking techniques to measure the numbers of visual fixations, dwell times, and initial fixations made by men who viewed front-posed photographs of the same woman, computer-morphed so as to differ in her WHR (0.7 or 0.9) and breast size (small, medium, or large). Men also rated these images for attractiveness. Results showed that the initial visual fixation (occurring within 200 ms from the start of each 5 s test) involved either the breasts or the waist. Both these body areas received more first fixations than the face or the lower body (pubic area and legs). Men looked more often and for longer at the breasts, irrespective of the WHR of the images. However, men rated images with an hourglass shape and a slim waist (0.7 WHR) as most attractive, irrespective of breast size. These results provide quantitative data on eye movements that occur during male judgments of the attractiveness of female images, and indicate that assessments of the female hourglass figure probably occur very rapidly. |
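The two Dixson et al. entries above report the same basic area-of-interest (AOI) measures: initial fixations, numbers of fixations, and dwell times per body region. Purely as an illustration of how such measures are typically derived from an exported fixation list, and not as the analysis code used in these papers, here is a minimal Python sketch; the fixation format, the rectangular AOIs, and all labels and numbers are assumptions.

```python
# Minimal sketch (not from any of the cited papers): computing first-fixation,
# fixation-count, and dwell-time measures per rectangular area of interest (AOI)
# from a chronological list of fixations. All values below are illustrative.

from collections import defaultdict

# Each fixation: (x, y, duration_ms), in chronological order.
fixations = [(312, 140, 220), (305, 152, 180), (420, 388, 260), (310, 150, 200)]

# AOIs as (x_min, y_min, x_max, y_max); labels are purely illustrative.
aois = {
    "face":  (250, 80, 380, 220),
    "waist": (280, 330, 460, 430),
}

def aoi_of(x, y):
    """Return the label of the AOI containing (x, y), or None."""
    for label, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None

n_fix = defaultdict(int)      # number of fixations per AOI
dwell = defaultdict(float)    # summed fixation duration per AOI (ms)
first_fix_aoi = None          # AOI of the first fixation that lands in any AOI

for x, y, dur in fixations:
    label = aoi_of(x, y)
    if label is None:
        continue
    if first_fix_aoi is None:
        first_fix_aoi = label
    n_fix[label] += 1
    dwell[label] += dur

print(first_fix_aoi, dict(n_fix), dict(dwell))
```

In practice, such per-trial measures would be aggregated across trials and participants before statistical comparisons like those reported above.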
Isabel Dombrowe; Mieke Donk; Christian N. L. Olivers The costs of switching attentional sets Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 8, pp. 2481–2488, 2011. @article{Dombrowe2011, People prioritize those aspects of the visual environment that match their attentional set. In the present study, we investigated whether switching from one attentional set to another is associated with a cost. We asked observers to sequentially saccade toward two color-defined targets, one on the left side of the display, the other on the right, each among a set of heterogeneously colored distractors. The targets were of the same color (no attentional set switch required) or of different colors (switch of attentional sets necessary), with each color consistently tied to a side, to allow observers to maximally prepare for the switch. We found that saccades were less accurate and slower in the switch condition than in the no-switch condition. Furthermore, whenever one of the distractors had the color associated with the other attentional set, a substantial proportion of saccades did not end on the target, but on this distractor. A time course analysis revealed that this distractor preference turned into a target preference after about 250-300 ms, suggesting that this is the time required to switch attentional sets. |
Mieke Donk; Wieske Zoest No control in orientation search: The effects of instruction on oculomotor selection in visual search Journal Article In: Vision Research, vol. 51, no. 19, pp. 2156–2166, 2011. @article{Donk2011, The present study aimed to investigate whether people can selectively use salience information in search for a target. Observers were presented with a display consisting of multiple homogeneously oriented background lines and two orientation singletons. The orientation singletons differed in salience, where salience was defined by their orientation contrast relative to the background lines. Observers had the task to make a speeded eye movement towards a target, which was either the most or the least salient element of the two orientation singletons. The specific orientation of the target was either constant or variable over a block of trials such that observers had varying knowledge concerning the target identity. The results demonstrated that instruction - whether people were instructed to move to the most or the least salient item - only minimally affected the results. Short-latency eye movements were completely salience driven; here it did not matter whether people were searching for the most or least salient element. Long-latency eye movements were marginally affected by instruction, in particular when observers knew the target identity. These results suggest that even though people use salience information in oculomotor selection, they cannot use this information in a goal-driven manner. The results are discussed in terms of current models on visual selection. |
Tom Foulsham; Rana Alan; Alan Kingstone Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 7, pp. 2008–2025, 2011. @article{Foulsham2011b, Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes. |
Tom Foulsham; Jason J. S. Barton; Alan Kingstone; Richard Dewhurst; Geoffrey Underwood Modeling eye movements in visual agnosia with a saliency map approach: Bottom-up guidance or top-down strategy? Journal Article In: Neural Networks, vol. 24, no. 6, pp. 665–677, 2011. @article{Foulsham2011, Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantifies the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of our human visual system, play a vital role in eye movement control, whether we know what we are looking at or not. |
Tom Foulsham; Robert Teszka; Alan Kingstone Saccade control in natural images is shaped by the information visible at fixation: Evidence from asymmetric gaze-contingent windows Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 1, pp. 266–283, 2011. @article{Foulsham2011c, When people view images, their saccades are predominantly horizontal and show a positively skewed distribution of amplitudes. How are these patterns affected by the information close to fixation and the features in the periphery? We recorded saccades while observers encoded a set of scenes with a gaze-contingent window at fixation: Features inside a rectangular (Experiment 1) or elliptical (Experiment 2) window were intact; peripheral background was masked completely or blurred. When the window was asymmetric, with more information preserved either horizontally or vertically, saccades tended to follow the information within the window, rather than exploring unseen regions, which runs counter to the idea that saccades function to maximize information gain on each fixation. Window shape also affected fixation and amplitude distributions, but horizontal windows had less of an impact. The findings suggest that saccades follow the features currently being processed and that normal vision samples these features from a horizontally elongated region. |
Tom Foulsham; Geoffrey Underwood If visual saliency predicts search, then why? Evidence from normal and gaze-contingent search tasks in natural scenes Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 48–63, 2011. @article{Foulsham2011a, The Itti and Koch (Vision Research, 40: 1489–1506, 2000) saliency map model has inspired a wealth of research testing the claim that bottom-up saliency determines the placement of eye fixations in natural scenes. Although saliency seems to correlate with (although not necessarily cause) fixation in free-viewing or encoding tasks, it has been suggested that visual saliency can be overridden in a search task, with saccades being planned on the basis of target features, rather than being captured by saliency. Here, we find that target regions of a scene that are salient according to this model are found quicker than control regions (Experiment 1). However, this does not seem to be altered by filtering features in the periphery using a gaze-contingent display (Experiment 2), and a deeper analysis of the eye movements made suggests that the saliency effect is instead due to the meaning of the scene regions. Experiment 3 supports this interpretation, showing that scene inversion reduces the saliency effect. These results suggest that saliency effects on search may have nothing to do with bottom-up saccade guidance. |
Tom Foulsham; Esther Walker; Alan Kingstone The where, what and when of gaze allocation in the lab and the natural environment Journal Article In: Vision Research, vol. 51, no. 17, pp. 1920–1931, 2011. @article{Foulsham2011d, How do people distribute their visual attention in the natural environment? We and our colleagues have usually addressed this question by showing pictures, photographs or videos of natural scenes under controlled conditions and recording participants' eye movements as they view them. In the present study, we investigated whether people distribute their gaze in the same way when they are immersed and moving in the world compared to when they view video clips taken from the perspective of a walker. Participants wore a mobile eye tracker while walking to buy a coffee, a trip that required a short walk outdoors through the university campus. They subsequently watched first-person videos of the walk in the lab. Our results focused on where people directed their eyes and their head, what objects were gazed at and when attention-grabbing items were selected. Eye movements were more centralised in the real world, and locations around the horizon were selected with head movements. Other pedestrians, the path, and objects in the distance were looked at often in both the lab and the real world. However, there were some subtle differences in how and when these items were selected. For example, pedestrians close to the walker were fixated more often when viewed on video than in the real world. These results provide a crucial test of the relationship between real behaviour and eye movements measured in the lab. |
Jeremy Freeman; G. J. Brouwer; David J. Heeger; Elisha P. Merriam Orientation decoding depends on maps, not columns Journal Article In: Journal of Neuroscience, vol. 31, no. 13, pp. 4792–4804, 2011. @article{Freeman2011a, The representation of orientation in primary visual cortex (V1) has been examined at a fine spatial scale corresponding to the columnar architecture. We present functional magnetic resonance imaging (fMRI) measurements providing evidence for a topographic map of orientation preference in human V1 at a much coarser scale, in register with the angular-position component of the retinotopic map of V1. This coarse-scale orientation map provides a parsimonious explanation for why multivariate pattern analysis methods succeed in decoding stimulus orientation from fMRI measurements, challenging the widely held assumption that decoding results reflect sampling of spatial irregularities in the fine-scale columnar architecture. Decoding stimulus attributes and cognitive states from fMRI measurements has proven useful for a number of applications, but our results demonstrate that the interpretation cannot assume decoding reflects or exploits columnar organization. |
Jeremy Freeman; Eero P. Simoncelli Metamers of the ventral stream Journal Article In: Nature Neuroscience, vol. 14, no. 9, pp. 1195–1204, 2011. @article{Freeman2011, The human capacity to recognize complex visual patterns emerges in a sequence of brain areas known as the ventral stream, beginning with primary visual cortex (V1). We developed a population model for mid-ventral processing, in which nonlinear combinations of V1 responses are averaged in receptive fields that grow with eccentricity. To test the model, we generated novel forms of visual metamers, stimuli that differ physically but look the same. We developed a behavioral protocol that uses metameric stimuli to estimate the receptive field sizes in which the model features are represented. Because receptive field sizes change along the ventral stream, our behavioral results can identify the visual area corresponding to the representation. Measurements in human observers implicate visual area V2, providing a new functional account of neurons in this area. The model also explains deficits of peripheral vision known as crowding, and provides a quantitative framework for assessing the capabilities and limitations of everyday vision. |
Hans Peter Frey; Kerstin Wirz; Verena Willenbockel; Torsten Betz; Cornell Schreiber; Tom Troscianko; Peter König Beyond correlation: Do color features influence attention in rainforest? Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 36, 2011. @article{Frey2011a, Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red-green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red-green color-contrast. The effects of blue-yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red-green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red-green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion. |
Jared Frey; Dario L. Ringach Binocular eye movements evoked by self-induced motion parallax Journal Article In: Journal of Neuroscience, vol. 31, no. 47, pp. 17069–17073, 2011. @article{Frey2011, Perception often triggers actions, but actions may sometimes be necessary to evoke percepts. This is most evident in the recovery of depth by self-induced motion parallax. Here we show that depth information derived from one's movement through a stationary environment evokes binocular eye movements consistent with the perception of three-dimensional shape. Human subjects stood in front of a display and viewed a simulated random-dot sphere presented monocularly or binocularly. Eye movements were recorded by a head-mounted eye tracker, while head movements were monitored by a motion capture system. The display was continuously updated to simulate the perspective projection of a stationary, transparent random dot sphere viewed from the subject's vantage point. Observers were asked to keep their gaze on a red target dot on the surface of the sphere as they moved relative to the display. The movement of the target dot simulated jumps in depth between the front and back surfaces of the sphere along the line of sight. We found the subjects' eyes converged and diverged concomitantly with changes in the perceived depth of the target. Surprisingly, even under binocular viewing conditions, when binocular disparity signals conflict with depth information from motion parallax, transient vergence responses were observed. These results provide the first demonstration that self-induced motion parallax is sufficient to drive vergence eye movements under both monocular and binocular viewing conditions. |
Teresa C. Frohman; Scott L. Davis; Elliot M. Frohman Modeling the mechanisms of Uhthoff's phenomenon in MS patients with internuclear ophthalmoparesis Journal Article In: Annals of the New York Academy of Sciences, vol. 1233, no. 1, pp. 313–319, 2011. @article{Frohman2011, Internuclear ophthalmoparesis (INO) is the most common saccadic eye movement disorder observed in patients with multiple sclerosis (MS). It is characterized by slowing of the adducting eye during horizontal saccades, and most commonly results from a demyelinating lesion in the medial longitudinal fasciculus (MLF) within the midline tegmentum of the pons (ventral to the fourth ventricle) or midbrain (ventral to the cerebral aqueduct). Recent research has demonstrated that adduction velocity in MS-related INO, as measured by infrared eye movement recording techniques, is further reduced by a systematic increase in core body temperature (utilizing tube-lined water infusion suits in conjunction with an ingestible temperature probe and transabdominal telemetry) and reversed to baseline with active cooling. These results suggest that INO may represent a model syndrome by which we can carefully study the Uhthoff's phenomenon and objectively test therapeutic agents for its prevention. |
Isabella Fuchs; Ulrich Ansorge; Christoph Redies; Helmut Leder Salience in paintings: Bottom-up influences on eye fixations Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 25–36, 2011. @article{Fuchs2011, In the current study, we investigated whether visual salience attracts attention in a bottom-up manner. We presented abstract and depictive paintings as well as photographs to naïve participants in free-viewing (Experiment 1) and target-search (Experiment 2) tasks. Image salience was computed in terms of local feature contrasts in color, luminance, and orientation. Based on the theories of stimulus-driven salience effects on attention and fixations, we expected salience effects in all conditions and a characteristic short-lived temporal profile of the salience-driven effect on fixations. Our results confirmed the predictions. Results are discussed in terms of their potential implications. |
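The Fuchs et al. study above, like several other entries in this list, computes image salience from local feature contrasts in color, luminance, and orientation. Below is a minimal, single-feature sketch of that center-surround idea for luminance only, assuming standard NumPy/SciPy; it is not the Itti–Koch implementation or the model used in the study, which additionally combines color and orientation channels across multiple scales.

```python
# Minimal sketch of a single-feature "local contrast" map in the spirit of
# saliency models such as Itti & Koch (2000). This is NOT the model used in the
# studies above; it only illustrates the center-surround idea for luminance.

import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_contrast_map(image_rgb, center_sigma=2.0, surround_sigma=16.0):
    """Center-surround contrast of the luminance channel, normalized to [0, 1]."""
    # Rough luminance from RGB (these weights are a common convention, an assumption here).
    lum = (0.299 * image_rgb[..., 0] +
           0.587 * image_rgb[..., 1] +
           0.114 * image_rgb[..., 2]).astype(float)
    center = gaussian_filter(lum, center_sigma)
    surround = gaussian_filter(lum, surround_sigma)
    contrast = np.abs(center - surround)
    rng = contrast.max() - contrast.min()
    return (contrast - contrast.min()) / rng if rng > 0 else contrast

# Toy usage with a random "image"; a real study would load a photograph or painting.
img = np.random.randint(0, 256, size=(240, 320, 3))
salience = luminance_contrast_map(img)
print(salience.shape, float(salience.max()))
```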
Shai Gabay; Yoni Pertzov; Avishai Henik Orienting of attention, pupil size, and the norepinephrine system Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 1, pp. 123–129, 2011. @article{Gabay2011, This research examined a novel suggestion regarding the involvement of the locus coeruleus–norepinephrine (LC–NE) system in orienting reflexive (exogenous) attention. A common procedure for studying exogenous orienting of attention is Posner's cuing task. Importantly, one can manipulate the required level of target processing by changing task requirements, which, in turn, can elicit a different time course of inhibition of return (IOR). An easy task (responding to target location) produces earlier onset IOR, whereas a demanding task (responding to target identity) produces later onset IOR. Aston-Jones and Cohen (Annual Review of Neuroscience, 28, 403–450, 2005) presented a theory suggesting two different modes of LC activity: tonic and phasic. Accordingly, we suggest that in the more demanding task, the LC–NE system is activated in phasic mode, and in the easier task, it is activated in tonic mode. This, in turn, influences the appearance of IOR. We examined this suggestion by measuring participants' pupil size, which has been demonstrated to correlate with the LC–NE system, while they performed cuing tasks. We found a response-locked phasic dilation of the pupil in the discrimination task, as compared with the localization task, which may reflect different firing modes of the LC–NE system during the two tasks. We also demonstrated a correlation between pupil size at the time of cue presentation and magnitude of IOR. |
Benjamin Gagl; Stefan Hawelka; Florian Hutzler Systematic influence of gaze position on pupil size measurement: Analysis and correction Journal Article In: Behavior Research Methods, vol. 43, no. 4, pp. 1171–1181, 2011. @article{Gagl2011, Cognitive effort is reflected in pupil dilation, but the assessment of pupil size is potentially susceptible to changes in gaze position. This study exemplarily used sentence reading as a stand-in for paradigms that assess pupil size in tasks during which changes in gaze position are unavoidable. The influence of gaze position on pupil size was first investigated by an artificial eye model with a fixed pupil size. Despite its fixed pupil size, the systematic measurements of the artificial eye model revealed substantial gaze-position-dependent changes in the measured pupil size. We evaluated two functions and showed that they can accurately capture and correct the gaze-dependent measurement error of pupil size recorded during a sentence-reading and an effortless z-string-scanning task. Implications for previous studies are discussed, and recommendations for future studies are provided. |
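The Gagl et al. entry above corrects pupil-size recordings for a systematic, gaze-position-dependent measurement error characterized with an artificial eye of fixed pupil size. The following is a minimal sketch of that general idea, assuming a quadratic error function of horizontal gaze position and a simple subtractive correction; the paper's actual correction functions may differ, and all numbers are invented.

```python
# Minimal sketch of a gaze-position correction for pupil size, in the spirit of
# the artificial-eye approach described above. The quadratic form and the
# subtraction-based correction are assumptions for illustration only.

import numpy as np

# Artificial-eye calibration: horizontal gaze position (px) and measured pupil
# size (arbitrary units) recorded while the true pupil size was constant.
cal_x = np.array([100, 300, 500, 700, 900, 1100], dtype=float)
cal_pupil = np.array([980, 1005, 1020, 1018, 1000, 975], dtype=float)

# Fit a quadratic describing the measurement error as a function of gaze position.
coeffs = np.polyfit(cal_x, cal_pupil, deg=2)
reference = cal_pupil.mean()  # size the system "should" report everywhere

def correct_pupil(measured, gaze_x):
    """Remove the gaze-position-dependent component from measured pupil size."""
    bias = np.polyval(coeffs, gaze_x) - reference
    return measured - bias

# Usage on hypothetical reading data: pupil samples with their gaze positions.
gaze_x = np.array([120.0, 480.0, 930.0])
pupil = np.array([1012.0, 1051.0, 1003.0])
print(correct_pupil(pupil, gaze_x))
```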
Tamara L. Watson; B. Krekelberg An equivalent noise investigation of saccadic suppression Journal Article In: Journal of Neuroscience, vol. 31, no. 17, pp. 6535–6541, 2011. @article{Watson2011, Visual stimuli presented just before or during an eye movement are more difficult to detect than those same visual stimuli presented during fixation. This laboratory phenomenon, behavioral saccadic suppression, is thought to underlie the everyday experience of not perceiving the motion created by our own eye movements (saccadic omission). At the neural level, many cortical and subcortical areas respond differently to perisaccadic visual stimuli than to stimuli presented during fixation. Those neural response changes, however, are complex and the link to the behavioral phenomena of reduced detectability remains tentative. We used a well-established model of human visual detection performance to provide a quantitative description of behavioral saccadic suppression and thereby allow a more focused search for its neural mechanisms. We used an equivalent noise method to distinguish between three mechanisms that could underlie saccadic suppression. The first hypothesized mechanism reduces the gain of the visual system, the second increases internal noise levels in a stimulus-dependent manner, and the third increases stimulus uncertainty. All three mechanisms predict that perisaccadic stimuli should be more difficult to detect, but each mechanism predicts a unique pattern of detectability as a function of the amount of external noise. Our experimental finding was that saccades increased detection thresholds at low external noise, but had little influence on thresholds at high levels of external noise. A formal analysis of these data in the equivalent noise framework showed that the most parsimonious mechanism underlying saccadic suppression is a stimulus-independent reduction in response gain. |
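The Watson and Krekelberg entry above frames saccadic suppression in the equivalent noise framework, in which detection thresholds are measured at several external noise levels and the pattern of threshold elevation constrains the underlying mechanism. A minimal sketch of fitting one common formulation, threshold² = (N_eq + N_ext) / efficiency, follows; the data are invented, and the mapping from fitted parameters to the gain, noise, and uncertainty accounts compared in the paper involves further modeling not shown here.

```python
# Minimal sketch (illustrative data, not from the paper): fitting the standard
# linear-amplifier equivalent-noise relation to detection thresholds measured
# at several external noise levels, separately for fixation and perisaccadic
# conditions.

import numpy as np
from scipy.optimize import curve_fit

def threshold(n_ext, n_eq, efficiency):
    """Contrast threshold as a function of external noise variance."""
    return np.sqrt((n_eq + n_ext) / efficiency)

# External noise variance and hypothetical thresholds in the two conditions.
n_ext = np.array([0.0, 0.01, 0.02, 0.05, 0.1])
thr_fixation = np.array([0.05, 0.07, 0.09, 0.13, 0.18])
thr_saccade = np.array([0.09, 0.10, 0.11, 0.14, 0.18])

p0 = [0.01, 1.0]
bounds = (0, np.inf)
(fix_neq, fix_eff), _ = curve_fit(threshold, n_ext, thr_fixation, p0=p0, bounds=bounds)
(sac_neq, sac_eff), _ = curve_fit(threshold, n_ext, thr_saccade, p0=p0, bounds=bounds)

# Comparing the fitted equivalent noise and efficiency between conditions is how
# the framework separates noise-like from efficiency-like changes; the abstract
# reports threshold elevation mainly at low external noise.
print(f"fixation: N_eq={fix_neq:.4f}, efficiency={fix_eff:.2f}")
print(f"saccade:  N_eq={sac_neq:.4f}, efficiency={sac_eff:.2f}")
```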
Matthew David Weaver; Johan Lauwereyns Attentional capture and hold: the oculomotor correlates of the change detection advantage for faces Journal Article In: Psychological Research, vol. 75, no. 1, pp. 10–23, 2011. @article{Weaver2011, The present study investigated the influence of semantic information on overt attention. Semantic influence on attentional capture and hold mechanisms was explored by measuring oculomotor correlates of the reaction time (RT) and accuracy advantage for faces in the change detection task. We also examined whether the face advantage was due to mandatory processing of faces or an idiosyncratic strategy by participants, by manipulating preknowledge of the object category in which to expect a change. An RT and accuracy advantage was found for detecting changes in faces compared to other objects of less social and biological significance, in the form of greater attentional capture and hold. The faster attentional capture by faces appeared to overcompensate for the longer hold, to produce faster and more accurate manual responses. Preknowledge did not eliminate the face advantage, suggesting that faces receive mandatory processing when competing for attention with stimuli of less sociobiological salience. |
Matthew David Weaver; Johan Lauwereyns; Jan Theeuwes The effect of semantic information on saccade trajectory deviations Journal Article In: Vision Research, vol. 51, no. 10, pp. 1124–1128, 2011. @article{Weaver2011a, In recent years, many studies have explored the conditions in which irrelevant visual distractors affect saccade trajectories. These previous studies mainly focused on the low-level stimulus characteristics and how they affect the magnitude of curvature. The present study explored the possible effect of high-level semantic information on saccade curvature. Semantic saliency was manipulated by presenting irrelevant peripheral taboo versus neutral cue words in a spatial cuing paradigm that allowed for the measurement of trajectory deviations. Findings showed larger saccade trajectory deviations away from taboo (versus neutral) cue words when making a saccade towards another location. This indicates that, due to their high semantic saliency, more inhibition had to be applied to taboo cue locations to effectively suppress them as competing saccade targets. |
Alice K. Welham; Andy J. Wills Unitization, similarity, and overt attention in categorization and exposure Journal Article In: Memory & Cognition, vol. 39, no. 8, pp. 1518–1533, 2011. @article{Welham2011, Unitization, the creation of new stimulus features by the fusion of preexisting features, is one of the hypothesized processes of perceptual learning (Goldstone Annual Review of Psychology, 49:585-612, 1998). Some argue that unitization occurs to the extent that it is required for successful task performance (e.g., Shiffrin & Lightfoot, 1997), while others argue that unitization is largely independent of functionality (e.g., McLaren & Mackintosh Animal Learning & Behavior, 30:177-200, 2000). Across three experiments, employing supervised category learning and unsupervised exposure, we investigated three predictions of the McLaren and Mackintosh (Animal Learning & Behavior, 30:177-200, 2000) model: (1) Unitization is accompanied by an initial increase in the subjective similarity of stimuli sharing a unitized component; (2) unitization of a configuration occurs through exposure to its components, even when the task does not require it; (3) as unitization approaches completion, salience of the unitized component may be reduced. Our data supported these predictions. We also found that unitization is associated with increases in overt attention to the unitized component, as measured through eye tracking. |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Karin Mogg; Brendan P. Bradley; Anita Jansen Can(not) take my eyes off it: Attention bias for food in overweight participants Journal Article In: Health Psychology, vol. 30, no. 5, pp. 561–569, 2011. @article{Werthmann2011, Objective: The aim of the current study was to investigate attention biases for food cues, craving, and overeating in overweight and healthy-weight participants. Specifically, it was tested whether attention allocation processes toward high-fat foods differ between overweight and normal weight individuals and whether selective attention biases for food cues are related to craving and food intake. Method: Eye movements were recorded as a direct index of attention allocation in a sample of 22 overweight/obese and 29 healthy-weight female students during a visual probe task with food pictures. In addition, self-reported craving and actual food intake during a bogus "taste-test" were assessed. Results: Overweight participants showed an approach-avoidance pattern of attention allocation toward high-fat food. Overweight participants directed their first gaze more often toward food pictures than healthy-weight individuals, but subsequently showed reduced maintenance of attention on these pictures. For overweight participants, craving was related to initial orientation toward food. Moreover, overweight participants consumed significantly more snack food than healthy-weight participants. Conclusion: Results emphasize the importance of identifying different attention bias components in overweight individuals with regard to craving and subsequent overeating. |
Gregory L. West; Naseem Al-Aidroos; Josh Susskind; Jay Pratt Emotion and action: The effect of fear on saccadic performance Journal Article In: Experimental Brain Research, vol. 209, no. 1, pp. 153–158, 2011. @article{West2011, According to evolutionary accounts, emotions originated to prepare an organism for action (Darwin 1872; Frijda 1986). To investigate this putative relationship between emotion and action, we examined the effect of an emotional stimulus on oculomotor actions controlled by the superior colliculus (SC), which has connections with subcortical structures involved in the perceptual prioritization of emotion, such as the amygdala through the pulvinar. The pulvinar connects the amygdala to cells in the SC responsible for the speed of saccade execution, while not affecting the spatial component of the saccade. We tested the effect of emotion on both temporal and spatial signatures of oculomotor functioning using a gap-distractor paradigm. Changes in spatial programming were examined through saccadic curvature in response to a remote distractor stimulus, while changes in temporal execution were examined using a fixation gap manipulation. We show that following the presentation of a task-irrelevant fearful face, the temporal but not the spatial component of the saccade generation system was affected. |
Sarah J. White; Tessa Warren; Erik D. Reichle Parafoveal preview during reading: Effects of sentence position Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1221–1238, 2011. @article{White2011, Two experiments examined parafoveal preview for words located in the middle of sentences and at sentence boundaries. Parafoveal processing was shown to occur for words at sentence-initial, mid-sentence, and sentence-final positions. Both Experiments 1 and 2 showed reduced effects of preview on regressions out for sentence-initial words. In addition, Experiment 2 showed reduced preview effects on first-pass reading times for sentence-initial words. These effects of sentence position on preview could result from either reduced parafoveal processing for sentence-initial words or other processes specific to word reading at sentence boundaries. In addition to the effects of preview, the experiments also demonstrate variability in the effects of sentence wrap-up on different reading measures, indicating that the presence and time course of wrap-up effects may be modulated by text-specific factors. We also report simulations of Experiment 2 using version 10 of E-Z Reader (Reichle, Warren, & McConnell, 2009), designed to explore the possible mechanisms underlying parafoveal preview at sentence boundaries. |
Melissa L. -H. Võ; John M. Henderson Object-scene inconsistencies do not capture gaze: evidence from the flash-preview moving-window paradigm Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 6, pp. 1742–1753, 2011. @article{Vo2011, In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene. |
Ben D. B. Willmore; James A. Mazer; Jack L. Gallant Sparse coding in striate and extrastriate visual cortex Journal Article In: Journal of Neurophysiology, vol. 105, no. 6, pp. 2907–2919, 2011. @article{Willmore2011, Theoretical studies of mammalian cortex argue that efficient neural codes should be sparse. However, theoretical and experimental studies have used different definitions of the term "sparse" leading to three assumptions about the nature of sparse codes. First, codes that have high lifetime sparseness require few action potentials. Second, lifetime-sparse codes are also population-sparse. Third, neural codes are optimized to maximize lifetime sparseness. Here, we examine these assumptions in detail and test their validity in primate visual cortex. We show that lifetime and population sparseness are not necessarily correlated and that a code may have high lifetime sparseness regardless of how many action potentials it uses. We measure lifetime sparseness during presentation of natural images in three areas of macaque visual cortex, V1, V2, and V4. We find that lifetime sparseness does not increase across the visual hierarchy. This suggests that the neural code is not simply optimized to maximize lifetime sparseness. We also find that firing rates during a challenging visual task are higher than theoretical values based on metabolic limits and that responses in V1, V2, and V4 are well-described by exponential distributions. These findings are consistent with the hypothesis that neurons are optimized to maximize information transmission subject to metabolic constraints on mean firing rate. |
Sara A. Winges; John F. Soechting Spatial and temporal aspects of cognitive influences on smooth pursuit Journal Article In: Experimental Brain Research, vol. 211, no. 1, pp. 27–36, 2011. @article{Winges2011, It is well known that prediction is used to overcome processing delays within the motor system and ocular control is no exception. Motion extrapolation is one mechanism that can be used to overcome the visual processing delay. Expectations based on previous experience or cognitive cues are also capable of overcoming this delay. The present experiment was designed to examine how smooth pursuit is altered by cognitive information about the time and/or direction of an upcoming change in target direction. Subjects visually tracked a cursor as it moved at a constant velocity on a computer screen. The target initially moved from left to right and then abruptly reversed horizontal direction and traveled along one of seven possible oblique paths. In half of the trials, a cue was present throughout the trial to signal the position (as well as the time), and/or the direction of the upcoming change. Whenever a position cue (which will be referred to as a timing cue throughout the paper) was present, there were clear anticipatory adjustments to the horizontal velocity component of smooth pursuit. In the presence of a timing cue, a directional cue also led to anticipatory adjustments in the vertical velocity, and hence the direction of smooth pursuit. However, without the timing cue, a directional cue alone produced no anticipation. Thus, in this task, a cognitive spatial cue about the new direction could not be used unless it was made explicit in the time domain. |
Heather Winskel Orthographic and phonological parafoveal processing of consonants, vowels, and tones when reading Thai Journal Article In: Applied Psycholinguistics, vol. 32, no. 4, pp. 739–759, 2011. @article{Winskel2011, Four eye movement experiments investigated whether readers use parafoveal input to gain information about the phonological or orthographic forms of consonants, vowels, and tones in word recognition when reading Thai silently. Target words were presented in sentences preceded by parafoveal previews in which consonant, vowel, or tone information was manipulated. Previews of homophonous consonants (Experiment 1) and concordant vowels (Experiment 2) did not substantially facilitate processing of the target word, whereas the identical previews did. Hence, orthography appears to be playing the prominent role in early word recognition for consonants and vowels. Incorrect tone marker previews (Experiment 3) substantially retarded the subsequent processing of the target word, indicating that lexical tone plays an important role in early word recognition. Vowels in VOP (Experiment 4) did not facilitate processing, which points to vowel position being a significant factor. Primarily, orthographic codes of consonants and vowels (HOP) in conjunction with tone information are assembled from parafoveal input and used for early lexical access. |
Andi K. Winterboer; Martin I. Tietze; Maria K. Wolters; Johanna D. Moore The user model-based summarize and refine approach improves information presentation in spoken dialog systems Journal Article In: Computer Speech and Language, vol. 25, no. 2, pp. 175–191, 2011. @article{Winterboer2011, A common task for spoken dialog systems (SDS) is to help users select a suitable option (e.g., flight, hotel, and restaurant) from the set of options available. As the number of options increases, the system must have strategies for generating summaries that enable the user to browse the option space efficiently and successfully. In the user-model based summarize and refine approach (UMSR, Demberg and Moore, 2006), options are clustered to maximize utility with respect to a user model, and linguistic devices such as discourse cues and adverbials are used to highlight the trade-offs among the presented items. In a Wizard-of-Oz experiment, we show that the UMSR approach leads to improvements in task success, efficiency, and user satisfaction compared to an approach that clusters the available options to maximize coverage of the domain (Polifroni et al., 2003). In both a laboratory experiment and a web-based experimental paradigm employing the Amazon Mechanical Turk platform, we show that the discourse cues in UMSR summaries help users compare different options and choose between options, even though they do not improve verbatim recall. This effect was observed for both written and spoken stimuli. |
C. Witzel; Karl R. Gegenfurtner Is there a lateralized category effect for color? Journal Article In: Journal of Vision, vol. 11, no. 12, pp. 16–16, 2011. @article{Witzel2011, According to the lateralized category effect for color, the influence of color category borders on color perception in fast reaction time tasks is significantly stronger in the right visual field than in the left. This finding has directly related behavioral category effects to the hemispheric lateralization of language. Multiple succeeding articles have built on these findings. We ran ten different versions of the two original experiments with overall 230 naive observers. We carefully controlled the rendering of the stimulus colors and determined the genuine color categories with an appropriate naming method. Congruent with the classical pattern of a category effect, reaction times in the visual search task were lower when the two colors to be discriminated belonged to different color categories than when they belonged to the same category. However, these effects were not lateralized: They appeared to the same extent in both visual fields. |
Lise Van der Haegen; Marc Brysbaert The mechanisms underlying the interhemispheric integration of information in foveal word recognition: Evidence for transcortical inhibition Journal Article In: Brain and Language, vol. 118, no. 3, pp. 81–89, 2011. @article{VanderHaegen2011, Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex in two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: A bilateral projection of visual input in foveal vision, or interhemispheric inhibition and integration as proposed by the SERIOL model of visual word recognition. Experiment 1 made use of pairs of words and nonwords with a length of four letters each. Participants had to name the word and ignore the nonword. The visual field in which the word was presented and the distance between the word and the nonword were manipulated. The results showed that the typical right visual field advantage was observed only when the word and the nonword were clearly separated. When the distance between them became smaller, the right visual field advantage turned into a left visual field advantage, in line with the interhemispheric inhibition mechanism postulated by the SERIOL model. Experiment 2, using 5-letter stimuli, confirmed that this result was not due to the eccentricity of the word relative to the fixation location but to the distance between the word and the nonword. |
Lise Van der Haegen; Qing Cai; Ruth Seurinck; Marc Brysbaert Further fMRI validation of the visual half field technique as an indicator of language laterality: A large-group analysis Journal Article In: Neuropsychologia, vol. 49, no. 10, pp. 2879–2888, 2011. @article{VanderHaegen2011a, The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to replace the invasive Wada-test. We evaluated the usefulness of behavioral visual half field (VHF) tasks for screening a large sample of healthy left-handers. Laterality indices (LIs) calculated on the basis of the latencies in a word and picture naming VHF task were compared to the brain activity measured in a silent word generation task in fMRI (pars opercularis/BA44 and pars triangularis/BA45). Results confirmed the usefulness of the VHF-tasks as a screening device. None of the left-handed participants with clear right visual field (RVF) advantages in the picture and word naming task showed right hemisphere dominance in the scanner. In contrast, 16/20 participants with a left visual field (LVF) advantage in both word and picture naming turned out to have atypical right brain dominance. Results were less clear for participants who failed to show clear VHF asymmetries (below 20 ms RVF advantage and below 60 ms LVF advantage) or who had inconsistent asymmetries in picture and word naming. These results indicate that the behavioral tasks can mainly provide useful information about the direction of speech dominance when both VHF differences clearly point in the same direction. |
Stefan Van der Stigchel; Jelmer P. De Vries; R. Bethlehem; Jan Theeuwes A global effect of capture saccades Journal Article In: Experimental Brain Research, vol. 210, no. 1, pp. 57–65, 2011. @article{VanderStigchel2011, When two target elements are presented in close proximity, the endpoint of a saccade is generally positioned at an intermediate location ('global effect'). Here, we investigated whether the global effect also occurs for eye movements executed to distracting elements. To this end, we adapted the oculomotor capture paradigm such that on a subset of trials, two distractors were presented. When the two distractors were closely aligned, erroneous eye movements were initiated to a location in between the two distractors. Even though to a lesser extent, this effect was also present when the two distractors were presented further apart. In a second experiment, we investigated the global effect for eye movements in the presence of two targets. A strong global effect was observed when two targets were presented closely aligned, while this effect was absent when the targets were further apart. This study shows that there is a global effect when saccades are captured by distractors. This 'capture global' effect is different from the traditional global effect that occurs when two targets are presented because the global effect of capture saccades also occurs for remote elements. The spatial dynamics of this global effect will be explained in terms of the population coding theory. |
Julie A. Van Dyke; Brian McElree Cue-dependent interference in comprehension Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 247–263, 2011. @article{VanDyke2011, The role of interference as a primary determinant of forgetting in memory has long been accepted; however, its role as a contributor to poor comprehension is just beginning to be understood. The current paper reports two studies, in which speed-accuracy tradeoff and eye-tracking methodologies were used with the same materials to provide converging evidence for the role of syntactic and semantic cues as mediators of both proactive interference (PI) and retroactive interference (RI) during comprehension. Consistent with previous work (e.g., Van Dyke & Lewis, 2003), we found that syntactic constraints at the retrieval site are among the cues that drive retrieval in comprehension, and that these constraints effectively limit interference from potential distractors with semantic/pragmatic properties in common with the target constituent. The data are discussed in terms of a cue-overload account, in which interference both arises from and is mediated through a direct-access retrieval mechanism that utilizes a linear, weighted cue-combinatoric scheme. |
Wieske Zoest; Amelia R. Hunt Saccadic eye movements and perceptual judgments reveal a shared visual representation that is increasingly accurate over time Journal Article In: Vision Research, vol. 51, no. 1, pp. 111–119, 2011. @article{Zoest2011, Although there is evidence to suggest visual illusions affect perceptual judgments more than actions, many studies have failed to detect task-dependent dissociations. In two experiments we attempt to resolve the contradiction by exploring the time-course of visual illusion effects on both saccadic eye movements and perceptual judgments, using the Judd illusion. The results showed that, regardless of whether a saccadic response or a perceptual judgment was made, the illusory bias was larger when responses were based on less information, that is, when saccadic latencies were short, or display duration was brief. The time-course of the effect was similar for both the saccadic responses and perceptual judgments, suggesting that both modes may be driven by a shared visual representation. Changes in the strength of the illusion over time also highlight the importance of controlling for the latency of different response systems when evaluating possible dissociations between them. |
James M. G. Tsui; Christopher C. Pack Contrast sensitivity of MT receptive field centers and surrounds Journal Article In: Journal of Neurophysiology, vol. 106, no. 4, pp. 1888–1900, 2011. @article{Tsui2011, Neurons throughout the visual system have receptive fields with both excitatory and suppressive components. The latter are responsible for a phenomenon known as surround suppression, in which responses decrease as a stimulus is extended beyond a certain size. Previous work has shown that surround suppression in the primary visual cortex depends strongly on stimulus contrast. Such complex center-surround interactions are thought to relate to a variety of functions, although little is known about how they affect responses in the extrastriate visual cortex. We have therefore examined the interaction of center and surround in the middle temporal (MT) area of the macaque (Macaca mulatta) extrastriate cortex by recording neuronal responses to stimuli of different sizes and contrasts. Our findings indicate that surround suppression in MT is highly contrast dependent, with the strongest suppression emerging unexpectedly at intermediate stimulus contrasts. These results can be explained by a simple model that takes into account the nonlinear contrast sensitivity of the neurons that provide input to MT. The model also provides a qualitative link to previous reports of a topographic organization of area MT based on clusters of neurons with differing surround suppression strength. We show that this organization can be detected in the gamma-band local field potentials (LFPs) and that the model parameters can predict the contrast sensitivity of these LFP responses. Overall our results show that surround suppression in area MT is far more common than previously suspected, highlighting the potential functional importance of the accumulation of nonlinearities along the dorsal visual pathway. |
Geoffrey Underwood; Katherine Humphrey; Editha M. Loon Decisions about objects in real-world scenes are influenced by visual saliency before and during their inspection Journal Article In: Vision Research, vol. 51, no. 18, pp. 2031–2038, 2011. @article{Underwood2011, Evidence from eye-tracking experiments has provided mixed support for saliency map models of inspection, with the task set for the viewer accounting for some of the discrepancies between predictions and observations. In the present experiment viewers inspected pictures of road scenes with the task being to decide whether or not they would enter a highway from a junction. Road safety observations have concluded that highly visible road users are less likely to be involved in crashes, suggesting that saliency is important in real-world tasks. The saliency of a critical vehicle was varied in the present task, as was the type of vehicle and the preferred vehicle of the viewer. Decisions were influenced by saliency, with more risky decisions when low saliency motorcycles were present. Given that the vehicles were invariably inspected, this may relate to the high incidence of "looked-but-failed-to-see" crashes involving motorcycles and to prevalence effects in visual search. Eye-tracking measures indicated effects of saliency on the fixation preceding inspection of the critical vehicle (as well as effects on inspection of the vehicle itself), suggesting that high saliency can attract an early fixation. These results have implications for recommendations about the conspicuity of vulnerable road users. |
Gurmit Uppal; Mary P. Feely; Michael D. Crossland; Luke Membrey; John Lee; Lyndon Cruz; Gary S. Rubin Assessment of reading behavior with an infrared eye tracker after 360° macular translocation for age-related macular degeneration Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 9, pp. 6486–6496, 2011. @article{Uppal2011, Purpose. Macular translocation (MT360) is complex surgery used to restore reading in exudative age-related macular degeneration (AMD). MT360 involves retinal rotation and subsequent oculomotor globe counterrotation and is not without significant surgical risk. This study attempts to gauge the optimal potential of MT360 in restoring reading ability and describe the quality and extent of recovery. Methods. The six best outcomes were examined from a consecutive series of 23 MT360 cases. Reading behavior and fixation characteristics were examined with an infrared eye tracker. Results were compared to age-matched normal subjects and patients with untreated exudative and nonexudative AMD. Retinal sensitivity was examined with microperimetry to establish threshold visual function. Results. MT360 produced significant improvements in visual function over untreated disease and approximated normal function for reading speed and fixation quality. Relative to the comparative groups, eye tracking revealed the MT360 cohort generated a greater number of horizontal and vertical saccades, of longer latency and reduced velocity. In contrast, saccadic behavior when reading (forward and regressive saccades) closely matched normal function. Microperimetry revealed a reduction in the central scotoma with three patients recovering normal foveal sensitivity. Conclusions. Near normal reading function is recovered despite profound surgical disruption to the anatomy (retinal/oculomotor). MT360 restores foveal function sufficient to produce a single stable locus of fixation, with marked reduction of the central scotoma. Despite the limitations on saccadic function, the quality of reading saccadic behavior is maintained with good reading ability. Oculomotor surgery appears not to limit reading ability, and the results of retinal surgery approximate normal macular function. |
Sarah Uzzaman; Steve Joordens The eyes know what you are thinking: Eye movements as an objective measure of mind wandering Journal Article In: Consciousness and Cognition, vol. 20, no. 4, pp. 1882–1886, 2011. @article{Uzzaman2011, Paralleling the recent work by Reichle, Reineberg, and Schooler (2010), we explore the use of eye movements as an objective measure of mind wandering while participants performed a reading task. Participants were placed in a self-classified probe-caught mind wandering paradigm while their eye movements were recorded. They were randomly probed every 2–3 min and were required to indicate whether their mind had been wandering. The results show that eye movements were generally less complex when participants reported mind wandering episodes, with both duration and frequency of within-word regressions, for example, becoming significantly reduced. This is consistent with the theoretical claim that the cognitive processes that normally influence eye movements to enhance semantic processing during reading exert less control during mind wandering episodes. |
Seppo Vainio; Raymond Bertram; Anneli Pajunen; Jukka Hyönä Processing modifier-head agreement in long Finnish words: Evidence from eye movements Journal Article In: Acta Linguistica Hungarica, vol. 58, no. 1, pp. 134–156, 2011. @article{Vainio2011, The present study investigates whether processing of an inflected Finnish noun is facilitated when preceded by a modifier in the same case ending. In Finnish, modifiers agree with their head nouns both in case and in number and the agreement is expressed by means of suffixes (e.g., vanha/ssa talo/ssa 'old/in house/in' –> 'in the old house'). Vainio et al. (2003; 2008) showed processing benefits for this kind of modifier-head agreement, when the head nouns were relatively short. However, the effect showed up relatively late in the processing stream, such that word n + 1, the word following the target noun talo/ssa, was read faster when it was preceded by an agreeing modifier (vanha/ssa) than when no modifier was present. This led Vainio et al. to the conclusion that agreement exerts its effect at a later stage, namely at the level of syntactic integration and not at the level of lexical access. The current study investigates whether the same holds when head nouns are considerably longer (e.g., kaupungin/talo/ssa 'city house/in' –> 'in the city hall'). Our results show that the effect of agreement is facilitative in case of longer head nouns as well, but – in contrast to what was found for shorter words – the effect not only appeared late, but was also observed in earlier processing measures. It thus seems that, in processing long words, benefits related to modifier-head agreement are not confined to post-lexical syntactic integration processes, but extend to lexical identification of the head. |
Eva Van Assche; Denis Drieghe; Wouter Duyck; Marijke Welvaert; Robert J. Hartsuiker The influence of semantic constraints on bilingual word recognition during sentence reading Journal Article In: Journal of Memory and Language, vol. 64, no. 1, pp. 88–107, 2011. @article{VanAssche2011, The present study investigates how semantic constraint of a sentence context modulates language-non-selective activation in bilingual visual word recognition. We recorded Dutch-English bilinguals' eye movements while they read cognates and controls in low and high semantically constraining sentences in their second language. Early and late eye-movement measures yielded cognate facilitation, both for low- and high-constraint sentences. Facilitation increased gradually as a function of cross-lingual overlap between translation equivalents. A control experiment showed that the same stimuli did not yield cognate effects in English monolingual controls, ensuring that these effects were not due to any uncontrolled stimulus characteristics. The present study supports models of bilingual word recognition with a limited role for top-down influences of semantic constraints on lexical access in both early and later stages of bilingual word recognition. |
Marije Beilen; Remco J. Renken; Erik S. Groenewold; Frans W. Cornelissen Attentional window set by expected relevance of environmental signals Journal Article In: PLoS ONE, vol. 6, no. 6, pp. e21262, 2011. @article{Beilen2011, The existence of an attentional window (a limited region in visual space at which attention is directed) has been invoked to explain why sudden visual onsets may or may not capture overt or covert attention. Here, we test the hypothesis that observers voluntarily control the size of this attentional window to regulate whether or not environmental signals can capture attention. We have used a novel approach to test this: participants' eye movements were tracked while they performed a search task that required dynamic gaze shifts. During the search task, abrupt onsets were presented that cued the target positions at different levels of congruency. Participants knew these levels. We determined oculomotor capture efficiency for onsets that appeared at different viewing eccentricities. From these, we could derive the participant's attentional window size as a function of onset congruency. We find that the window was small during the presentation of low-congruency onsets, but increased monotonically in size with an increase in the expected congruency of the onsets. This indicates that the attentional window is under voluntary control and is set according to the expected relevance of environmental signals for the observer's momentary behavioral goals. Moreover, our approach provides a new and exciting method to directly measure the size of the attentional window. |
Goedele Van Belle; Thomas Busigny; Philippe Lefèvre; Sven Joubert; Olivier Felician; Francesco Gentile; Bruno Rossion Impairment of holistic face perception following right occipito-temporal damage in prosopagnosia: Converging evidence from gaze-contingency Journal Article In: Neuropsychologia, vol. 49, no. 11, pp. 3145–3150, 2011. @article{VanBelle2011, Gaze-contingency is a method traditionally used to investigate the perceptual span in reading by selectively revealing/masking a portion of the visual field in real time. Introducing this approach in face perception research showed that the performance pattern of a brain-damaged patient with acquired prosopagnosia (PS) in a face matching task was reversed, as compared to normal observers: the patient showed almost no further decrease of performance when only one facial part (eye, mouth, nose, etc.) was available at a time (foveal window condition, forcing part-based analysis), but a very large impairment when the fixated part was selectively masked (mask condition, promoting holistic perception) (Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010a; Van Belle, De Graef, Verfaillie, Rossion, & Lefèvre, 2010b). Here we tested the same manipulation in a recently reported case of pure prosopagnosia (GG) with unilateral right hemisphere damage (Busigny, Joubert, Felician, Ceccaldi, & Rossion, 2010). Contrary to normal observers, GG was also significantly more impaired with a mask than with a window, demonstrating impairment with holistic face perception. Together with our previous study, these observations support a generalized account of acquired prosopagnosia as a critical impairment of holistic (individual) face perception, implying that this function is a key element of normal human face recognition. Furthermore, the similar behavioral pattern of the two patients despite different lesion localizations supports a distributed network view of the neural face processing structures, suggesting that the key function of human face processing, namely holistic perception of individual faces, requires the activity of several brain areas of the right hemisphere and their mutual connectivity. |
Joris Vangeneugden; Patrick A. De Maziere; Marc M. Van Hulle; Tobias Jaeggli; Luc Van Gool; Rufin Vogels Distinct mechanisms for coding of visual actions in macaque temporal cortex Journal Article In: Journal of Neuroscience, vol. 31, no. 2, pp. 385–401, 2011. @article{Vangeneugden2011, Temporal cortical neurons are known to respond to visual dynamic-action displays. Many human psychophysical and functional imaging studies examining biological motion perception have used treadmill walking, in contrast to previous macaque single-cell studies. We assessed the coding of locomotion in rhesus monkey (Macaca mulatta) temporal cortex using movies of stationary walkers, varying both form and motion (i.e., different facing directions) or varying only the frame sequence (i.e., forward vs backward walking). The majority of superior temporal sulcus and inferior temporal neurons were selective for facing direction, whereas a minority distinguished forward from backward walking. Support vector machines using the temporal cortical population responses as input classified facing direction well, but forward and backward walking less so. Classification performance for the latter improved markedly when the within-action response modulation was considered, reflecting differences in momentary body poses within the locomotion sequences. Responses to static pose presentations predicted the responses during the course of the action. Analyses of the responses to walking sequences wherein the start frame was varied across trials showed that some neurons also carried a snapshot sequence signal. Such sequence information was present in neurons that responded to static snapshot presentations and in neurons that required motion. Our data suggest that actions are analyzed by temporal cortical neurons using distinct mechanisms. Most neurons predominantly signal momentary pose. In addition, temporal cortical neurons, including those responding to static pose, are sensitive to pose sequence, which can contribute to the signaling of learned action sequences. |
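As an illustration of the population-decoding step mentioned in the Vangeneugden et al. abstract above (support vector machines classifying facing direction from temporal cortical population responses), the following is a minimal sketch in Python. The trial counts, simulated firing rates, and use of scikit-learn are illustrative assumptions, not the authors' pipeline.

    # Minimal sketch, assuming synthetic data: linear SVM decoding of facing direction
    # from a hypothetical population response matrix (trials x neurons).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_neurons = 200, 60                      # hypothetical counts
    facing = rng.integers(0, 8, n_trials)              # 8 hypothetical facing directions
    rates = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
    # Add weak direction tuning so the decoder has something to find
    rates += 0.5 * np.cos(2 * np.pi * facing[:, None] / 8 + np.linspace(0, np.pi, n_neurons))

    scores = cross_val_score(SVC(kernel="linear"), rates, facing, cv=5)
    print("cross-validated decoding accuracy:", scores.mean())

Decoding forward versus backward walking in the same spirit would simply swap in a binary label vector. |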
Shravan Vasishth; Heiner Drenhaus Locality in German Journal Article In: Dialogue and Discourse, vol. 2, no. 1, pp. 59–82, 2011. @article{Vasishth2011, Three experiments (self-paced reading, eyetracking and an ERP study) show that in relative clauses, increasing the distance between the relativized noun and the relative-clause verb makes it more difficult to process the relative-clause verb (the so-called locality effect). This result is consistent with the predictions of several theories (Gibson, 2000; Lewis and Vasishth, 2005), and contradicts the recent claim (Levy, 2008) that in relative-clause structures increasing argument-verb distance makes processing easier at the verb. Levy's expectation-based account predicts that the expectation for a verb becomes sharper as distance is increased and therefore processing becomes easier at the verb. We argue that, in addition to expectation effects (which are seen in the eyetracking study in first-pass regression probability), processing load also increases with increasing distance. This contradicts Levy's claim that heightened expectation leads to lower processing cost. Dependency-resolution cost and expectation-based facilitation are jointly responsible for determining processing cost. |
B. -E. Verhoef; Rufin Vogels; Peter Janssen Synchronization between the end stages of the dorsal and the ventral visual stream Journal Article In: Journal of Neurophysiology, vol. 105, no. 5, pp. 2030–2042, 2011. @article{Verhoef2011, The end stage areas of the ventral (IT) and the dorsal (AIP) visual streams encode the shape of disparity-defined three-dimensional (3D) surfaces. Recent anatomical tracer studies have found direct reciprocal connections between the 3D-shape selective areas in IT and AIP. Whether these anatomical connections are used to facilitate 3D-shape perception is still unknown. We simultaneously recorded multi-unit activity (MUA) and local field potentials in IT and AIP while monkeys discriminated between concave and convex 3D shapes and measured the degree to which the activity in IT and AIP synchronized during the task. We observed strong beta-band synchronization between IT and AIP preceding stimulus onset that decreased shortly after stimulus onset and became modulated by stereo-signal strength and stimulus contrast during the later portion of the stimulus period. The beta-coherence modulation was unrelated to task-difficulty, regionally specific, and dependent on the MUA selectivity of the pairs of sites under study. The beta-spike-field coherence in AIP predicted the upcoming choice of the monkey. Several convergent lines of evidence suggested AIP as the primary source of the AIP-IT synchronized activity. The synchronized beta activity seemed to occur during perceptual anticipation and when the system has stabilized to a particular perceptual state but not during active visual processing. Our findings demonstrate for the first time that synchronized activity exists between the end stages of the dorsal and ventral stream during 3D-shape discrimination. |
Marine Vernet; Qing Yang; Zoï Kapoula Guiding binocular saccades during reading: A TMS study of the PPC Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 14, 2011. @article{Vernet2011, Reading is an activity based on complex sequences of binocular saccades and fixations. During saccades, the eyes do not move together perfectly: saccades could end with a misalignment, compromising fused vision. During fixations, small disconjugate drift can partly reduce this misalignment. We hypothesized that maintaining eye alignment during reading involves active monitoring from posterior parietal cortex (PPC); this goes against traditional views considering only downstream binocular control. Nine young adults read a text; transcranial magnetic stimulation (TMS) was applied over the PPC every 5 ± 0.2 s. Eye movements were recorded binocularly with Eyelink II. Stimulation had three major effects: (1) disturbance of eye alignment during fixation; (2) increase of saccade disconjugacy leading to eye misalignment; (3) decrease of eye alignment reduction during fixation drift. The effects depend on the side; the right PPC was more involved in maintaining alignment over the motor sequence. Thus, the PPC is actively involved in the control of binocular eye alignment during reading, allowing clear vision. Cortical activation during reading is related to linguistic processes and motor control per se. The study might be of interest for the understanding of deficits of binocular coordination, encountered in several populations, e.g., in children with dyslexia. |
Eduardo Vidal-Abarca; Tomás Martinez; Ladislao Salmerón; Raquel Cerdán; Ramiro Gilabert; Laura Gil; Amelia Mañá; Ana C. Llorens; Ricardo Ferris Recording online processes in task-oriented reading with Read&Answer Journal Article In: Behavior Research Methods, vol. 43, no. 1, pp. 179–192, 2011. @article{VidalAbarca2011, We present an application to study task-oriented reading processes called Read&Answer. The application mimics paper-and-pencil situations in which a reader interacts with one or more documents to perform a specific task, such as answering questions, writing an essay, or similar activities. Read&Answer presents documents and questions with a mask. The reader unmasks documents and questions so that only a piece of information is available at a time. This way the entire interaction between the reader and the documents on the task is recorded and can be analyzed. We describe Read&Answer and present its applications for research and assessment. Finally, we explain two studies that compare readers' performance on Read&Answer with students' reading times and comprehension levels on a paper-and-pencil task, and on a computer task recorded with eye-tracking. The use of Read&Answer produced similar comprehension scores, although it changed the pattern of reading times. |
Eleonora Vig; Michael Dorr; Thomas Martinetz; Erhardt Barth Eye movements show optimal average anticipation with natural dynamic scenes Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 79–88, 2011. @article{Vig2011, A less studied component of gaze allocation in dynamic real-world scenes is the time lag of eye movements in responding to dynamic attention-capturing events. Despite the vast amount of research on anticipatory gaze behaviour in natural situations, such as action execution and observation, little is known about the predictive nature of eye movements when viewing different types of natural or realistic scene sequences. In the present study, we quantify the degree of anticipation during the free viewing of dynamic natural scenes. The cross-correlation analysis of image-based saliency maps with an empirical saliency measure derived from eye movement data reveals the existence of predictive mechanisms responsible for a near-zero average lag between dynamic changes of the environment and the responding eye movements. We also show that the degree of anticipation is reduced when moving away from natural scenes by introducing camera motion, jump cuts, and film-editing. |
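The central quantity in the Vig et al. study above, the average time lag between model-derived saliency and a gaze-derived saliency measure, can be estimated with a simple cross-correlation, as in the sketch below. The two time series here are synthetic stand-ins, and the sampling rate is an assumption; only the cross-correlation/argmax logic reflects the general technique.

    # Minimal sketch, assuming synthetic signals: estimate the lag (in seconds)
    # at which one saliency time series best matches another via cross-correlation.
    import numpy as np

    def estimate_lag(model_saliency, gaze_saliency, fs):
        """Return the lag (s) maximizing the cross-correlation of the two z-scored signals."""
        a = (model_saliency - model_saliency.mean()) / model_saliency.std()
        b = (gaze_saliency - gaze_saliency.mean()) / gaze_saliency.std()
        xcorr = np.correlate(b, a, mode="full")
        lags = np.arange(-len(a) + 1, len(b))
        return lags[np.argmax(xcorr)] / fs

    fs = 50.0                                   # hypothetical 50 Hz sampling
    t = np.arange(0, 20, 1 / fs)
    model = np.sin(2 * np.pi * 0.3 * t)
    gaze = np.roll(model, 5) + 0.1 * np.random.randn(t.size)   # gaze lags by 5 samples
    print("estimated lag (s):", estimate_lag(model, gaze, fs))  # ~0.1 s

A near-zero estimated lag on real data would correspond to the anticipatory behaviour the abstract describes. |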
Ye Wang; Bogdan F. Iliescu; Jianfu Ma; Kresimir Josic; Valentin Dragoi Adaptive changes in neuronal synchronization in macaque V4 Journal Article In: Journal of Neuroscience, vol. 31, no. 37, pp. 13204–13213, 2011. @article{Wang2011b, A fundamental property of cortical neurons is the capacity to exhibit adaptive changes or plasticity. Whether adaptive changes in cortical responses are accompanied by changes in synchrony between individual neurons and local population activity in sensory cortex is unclear. This issue is important as synchronized neural activity is hypothesized to play an important role in propagating information in neuronal circuits. Here, we show that rapid adaptation (300 ms) to a stimulus of fixed orientation modulates the strength of oscillatory neuronal synchronization in macaque visual cortex (area V4) and influences the ability of neurons to distinguish small changes in stimulus orientation. Specifically, rapid adaptation increases the synchronization of individual neuronal responses with local population activity in the gamma frequency band (30-80 Hz). In contrast to previous reports that gamma synchronization is associated with an increase in firing rates in V4, we found that the postadaptation increase in gamma synchronization is associated with a decrease in neuronal responses. The increase in gamma-band synchronization after adaptation is functionally significant as it is correlated with an improvement in neuronal orientation discrimination performance. Thus, adaptive synchronization between the spiking activity of individual neurons and their local population can enhance temporally insensitive, rate-based-coding schemes for sensory discrimination. |
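A generic way to quantify the kind of gamma-band (30–80 Hz) synchronization between spiking and local population activity described in the Wang et al. abstract above is spectral coherence. The sketch below uses synthetic signals and scipy.signal.coherence; it is not the authors' analysis code, and the sampling rate, modulation depth, and window length are assumptions.

    # Minimal sketch, assuming synthetic data: coherence between a spike train and an LFP,
    # averaged over the gamma band (30-80 Hz).
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                          # hypothetical 1 kHz sampling
    t = np.arange(0, 10, 1 / fs)
    lfp = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
    # Poisson-like spikes whose probability is modulated by the 50 Hz LFP phase
    rate = 20 * (1 + 0.8 * np.sin(2 * np.pi * 50 * t)) / fs
    spikes = (np.random.rand(t.size) < rate).astype(float)

    f, cxy = coherence(spikes, lfp, fs=fs, nperseg=1024)
    gamma = (f >= 30) & (f <= 80)
    print("mean gamma-band coherence:", cxy[gamma].mean())

Comparing this quantity before and after an adaptation period, on matched amounts of data, is the general logic behind the reported post-adaptation increase in gamma synchronization. |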
Zheng Wang; Anna W. Roe Trial-to-trial noise cancellation of cortical field potentials in awake macaques by autoregression model with exogenous input (ARX) Journal Article In: Journal of Neuroscience Methods, vol. 194, no. 2, pp. 266–273, 2011. @article{Wang2011a, Gamma band synchronization has drawn increasing interest with respect to its potential role in neuronal encoding strategy and behavior in awake, behaving animals. However, contamination of these recordings by power line noise can confound the analysis and interpretation of cortical local field potential (LFP). Existing denoising methods are plagued by inadequate noise reduction, inaccuracies, and even introduction of new noise components. To carefully and more completely remove such contamination, we propose an automatic method based on the concept of adaptive noise cancellation that utilizes the correlative features of common noise sources, and implement with AutoRegressive model with eXogenous Input (ARX). We apply this technique to both simulated data and LFPs recorded in the primary visual cortex of awake macaque monkeys. The analyses here demonstrate a greater degree of accurate noise removal than conventional notch filters. Our method leaves desired signal intact and does not introduce artificial noise components. Application of this method to awake monkey V1 recordings reveals a significant power increase in the gamma range evoked by visual stimulation. Our findings suggest that the ARX denoising procedure will be an important pre-processing step in the analysis of large volumes of cortical LFP data as well as high frequency (gamma-band related) electroencephalography/magnetoencephalography (EEG/MEG) applications, one which will help to convincingly dissociate this notorious artifact from gamma-band activity. |
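The Wang and Roe abstract above describes removing power-line contamination by modelling the recording with a reference noise input (ARX). The sketch below illustrates only the underlying idea with a simplified regression on lagged copies of a reference 60 Hz signal; it omits the autoregressive terms of a full ARX model, and all signals, lag counts, and frequencies are synthetic assumptions.

    # Minimal sketch, assuming synthetic data: cancel line noise by regressing the recording
    # on lagged copies of a reference noise channel and subtracting the fitted component.
    import numpy as np

    def cancel_reference_noise(recording, reference, n_lags=10):
        """Least-squares fit of lagged reference noise to the recording; return the residual."""
        X = np.column_stack([np.roll(reference, k) for k in range(n_lags)])
        X[:n_lags, :] = 0.0                      # zero out wrapped-around samples
        coefs, *_ = np.linalg.lstsq(X, recording, rcond=None)
        return recording - X @ coefs

    fs = 1000.0
    t = np.arange(0, 5, 1 / fs)
    signal = np.sin(2 * np.pi * 40 * t)                       # stand-in "gamma" component
    line_noise = 0.8 * np.sin(2 * np.pi * 60 * t + 0.3)       # 60 Hz contamination
    recording = signal + line_noise + 0.1 * np.random.randn(t.size)
    reference = np.sin(2 * np.pi * 60 * t)                    # hypothetical reference channel
    cleaned = cancel_reference_noise(recording, reference)
    print("residual 60 Hz power ratio:",
          np.abs(np.fft.rfft(cleaned))[300] / np.abs(np.fft.rfft(recording))[300])

Unlike a notch filter, this kind of reference-based cancellation leaves genuine narrow-band physiological activity near the line frequency largely intact, which is the motivation given in the abstract. |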
Zhong I. Wang; Louis F. Dell'Osso A unifying model-based hypothesis for the diverse waveforms of infantile nystagmus syndrome Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–18, 2011. @article{Wang2011, We expanded the original behavioral Ocular Motor System (OMS) model for Infantile Nystagmus Syndrome (INS) by incorporating common types of jerk waveforms within a unifying mechanism. Alexander's law relationships were used to produce desired INS null positions and sharpness. At various gaze angles, these relationships influenced the IN slow-phase amplitudes differently, thereby mimicking the gaze-angle effects of INS patients. Transitions from pseudopendular with foveating saccades to jerk waveforms required replacing braking saccades with foveating fast phases and adding a resettable neural integrator in the pursuit pre-motor circuitry. The robust simulations of accurate OMS behavior in the presence of diverse INS waveforms demonstrate that they can all be generated by a loss of pursuit-system damping, supporting this hypothetical origin. |
Tessa Warren; Erik D. Reichle; Nikole D. Patson Lexical and post-lexical complexity effects on eye movements Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–10, 2011. @article{Warren2011, The current study investigated how a post-lexical complexity manipulation followed by a lexical complexity manipulation affects eye movements during reading. Both manipulations caused disruption in all measures on the manipulated words, but the patterns of spillover differed. Critically, the effects of the two kinds of manipulations did not interact, and there was no evidence that post-lexical processing difficulty delayed lexical processing on the next word (cf. Henderson & Ferreira, 1990). This suggests that post-lexical processing of one word and lexical processing of the next can proceed independently and likely in parallel. This finding is consistent with the assumptions of the E-Z Reader model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). |
Luc P. J. Selen; W. Pieter Medendorp Saccadic updating of object orientation for grasping movements Journal Article In: Vision Research, vol. 51, no. 8, pp. 898–907, 2011. @article{Selen2011, Reach and grasp movements are a fundamental part of our daily interactions with the environment. This spatially-guided behavior is often directed to memorized objects because of intervening eye movements that caused them to disappear from sight. How does the brain store and maintain the spatial representations of objects for future reach and grasp movements? We had subjects (n = 8) make reach and two-digit grasp movements to memorized objects, briefly presented before an intervening saccade. Grasp errors, characterizing the spatial representation of object orientation, depended on current gaze position, with and without intervening saccade. This suggests that the orientation information of the object is coded and updated relative to gaze during intervening saccades, and that the grasp errors arose after the updating stage, during the later transformations involved in grasping. The pattern of reach errors also revealed a gaze-centered updating of object location, consistent with previous literature on updating of single-point targets. Furthermore, grasp and reach errors correlated strongly, but their relationship had a non-unity slope, which may suggest that the gaze-centered spatial updates were made in separate channels. Finally, the errors of the two digits were strongly correlated, supporting the notion that these were not controlled independently to form the grip in these experimental conditions. Taken together, our results suggest that the visuomotor system dynamically represents the short-term memory of location and orientation information for reach-and-grasp movements. |
Matthew C. Shake; Elizabeth A. L. Stine-Morrow Age differences in resolving anaphoric expressions during reading Journal Article In: Aging, Neuropsychology, and Cognition, vol. 18, no. 6, pp. 678–707, 2011. @article{Shake2011, One crucial component of reading comprehension is the ability to bind current information to earlier text, which is often accomplished via anaphoric expressions (e.g., pronouns referring to previous nouns). Processing time for anaphors that violate expectations (e.g., 'The firefighter burned herself while rescuing victims from the building') provides a window into how the semantic representation of the referent is instantiated and retained up to the anaphor. We present data from three eye-tracking experiments examining older and younger adults' reading patterns for passages containing such local expectancy violations. Younger adults quickly registered and resolved the expectancy violation at the point at which it first occurred (as measured by increased gaze duration on the anaphor), regardless of whether sentences were read in isolation or embedded in a discourse context. Older adults, however, immediately noticed the violation only when sentences were embedded in discourse context, suggesting that they relied more on situational grounding to instantiate the referent. For neither young nor old did prior disambiguation within the context (e.g., stating the firefighter was a woman) reduce the effect of the local violation on early processing. For older readers, however, prior disambiguation facilitated anaphor resolution by reducing reprocessing. These results suggest that (a) anaphor resolution unfolds serially, such that prior disambiguating context does not 'inoculate' against local activation of salient (but contextually inappropriate) features, and that (b) older readers use the situational grounding of discourse context to support earlier access to the antecedent, and are more likely to reprocess the context for anaphor resolution. |
Diego E. Shalom; Bruno Dagnino; Mariano Sigman Looking at Breakout: Urgency and predictability direct eye events Journal Article In: Vision Research, vol. 51, no. 11, pp. 1262–1272, 2011. @article{Shalom2011, We investigated the organization of eye-movement classes in a natural and dynamical setup. To mimic the goals and objectives of the natural world in a controlled environment, we studied eye-movements while participants played Breakout, an old Atari game which remains surprisingly entertaining, often addictive, in spite of its graphic and structural simplicity. Our results show that eye-movement dynamics can be explained in terms of simple principles of moments of prediction and urgency of action. We observed a consistent anticipatory behavior (gaze was directed ahead of ball trajectory) except during the moment in which the ball bounced either in the walls, or in the paddle. At these moments, we observed a refractory period during which there are no blinks and saccades. Saccade delay caused the gaze to fall behind the ball. This pattern is consistent with a model by which participants postpone saccades at the bounces while predicting the ball trajectory and subsequently make a catch-up saccade directed to a position which anticipates ball trajectory. During bounces, trajectories were smooth and curved interpolating the V-shape function of the ball with minimal acceleration. These results pave the path to understand the taxonomy of eye-movements on natural configurations in which stimuli and goals switch dynamically in time. |
Swetha Shankar; Dino P. Massoglia; Dantong Zhu; M. Gabriela Costello; Terrence R. Stanford; Emilio Salinas Tracking the temporal evolution of a perceptual judgment using a compelled-response task Journal Article In: Journal of Neuroscience, vol. 31, no. 23, pp. 8406–8421, 2011. @article{Shankar2011, Choice behavior and its neural correlates have been intensely studied with tasks in which a subject makes a perceptual judgment and indicates the result with a motor action. Yet a question crucial for relating behavior to neural activity remains unresolved: what fraction of a subject's reaction time (RT) is devoted to the perceptual evaluation step, as opposed to executing the motor report? Making such timing measurements accurately is complicated because RTs reflect both sensory and motor processing, and because speed and accuracy may be traded. To overcome these problems, we designed the compelled-saccade task, a two-alternative forced-choice task in which the instruction to initiate a saccade precedes the appearance of the relevant sensory information. With this paradigm, it is possible to track perceptual performance as a function of the amount of time during which sensory information is available to influence a subject's choice. The result, the tachometric curve, directly reveals a subject's perceptual processing capacity independently of motor demands. Psychophysical data, together with modeling and computer-simulation results, reveal that task performance depends on three separable components: the timing of the motor responses, the speed of the perceptual evaluation, and additional cognitive factors. Each can vary quickly, from one trial to the next, or can show stable, longer-term changes. This novel dissociation between sensory and motor processes yields a precise metric of how perceptual capacity varies under various experimental conditions and serves to interpret choice-related neuronal activity as perceptual, motor, or both. |
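The tachometric curve described in the Shankar et al. abstract above is, operationally, the fraction of correct choices as a function of raw processing time (reaction time minus the gap between the go signal and cue onset). The sketch below tabulates such a curve from invented trial records; the gap values, RT distribution, binning scheme, and the sigmoid used to simulate accuracy are assumptions for illustration only.

    # Minimal sketch, assuming synthetic trials: tabulate accuracy as a function of
    # raw processing time (rPT = RT - gap) in 25 ms bins.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 2000
    gap = rng.choice([25, 50, 75, 100, 125, 150], n_trials)      # ms, hypothetical gaps
    rt = rng.normal(250, 40, n_trials)                           # ms, hypothetical RTs
    rpt = rt - gap                                               # raw processing time
    # Simulate accuracy rising with rPT from chance (0.5) toward ceiling
    p_correct = 0.5 + 0.45 / (1 + np.exp(-(rpt - 120) / 25))
    correct = rng.random(n_trials) < p_correct

    bins = np.arange(0, 301, 25)
    centers = 0.5 * (bins[:-1] + bins[1:])
    idx = np.digitize(rpt, bins) - 1
    curve = [correct[idx == i].mean() if np.any(idx == i) else np.nan
             for i in range(len(centers))]
    for c, acc in zip(centers, curve):
        print(f"rPT ~ {c:5.1f} ms: fraction correct = {acc:.2f}")

The point where the tabulated curve rises from chance toward its asymptote is the behavioural estimate of how quickly the sensory information begins to guide the choice. |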
Jessica L. Sullivan; Barbara J. Juhasz; Timothy J. Slattery; Hilary C. Barth Adults' number-line estimation strategies: Evidence from eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 3, pp. 557–563, 2011. @article{Sullivan2011, Although the development of number-line estimation ability is well documented, little is known of the processes underlying successful estimators' mappings of numerical information onto spatial representations during these tasks. We tracked adults' eye movements during a number-line estimation task to investigate the processes underlying number-to-space translation, with three main results. First, eye movements were strongly related to the target number's location, and early processing measures directly predicted later estimation performance. Second, fixations and estimates were influenced by the size of the first number presented, indicating that adults calibrate their estimates online. Third, adults' number-line estimates demonstrated patterns of error consistent with the predictions of psychophysical models of proportion estimation, and eye movement data predicted the specific error patterns we observed. These results support proportion-based accounts of number-line estimation and suggest that adults' translation of numerical information into spatial representations is a rapid, online process. |
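For context on the "psychophysical models of proportion estimation" referenced in the Sullivan et al. abstract above, one commonly used member of this model family (a hedged illustration, not necessarily the exact variant fitted in the paper) maps a target number x on a line from 0 to U to the estimate

    \hat{x} \;=\; U \cdot \frac{(x/U)^{\beta}}{(x/U)^{\beta} + \left(1 - x/U\right)^{\beta}}

where a single free parameter β < 1 produces the characteristic overestimation below the line's midpoint and underestimation above it; fitting β to participants' estimates and comparing the fit against linear or logarithmic alternatives is the usual way such proportion-based accounts are evaluated. |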
Aiga Švede; Jörg Hoormann; Stephanie Jainta; Wolfgang Jaschinski Subjective fixation disparity affected by dynamic asymmetry, resting vergence, and nonius bias Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 7, pp. 4356–4361, 2011. @article{Svede2011, Purpose. This study was undertaken to investigate how subjectively measured fixation disparity can be explained by (1) the convergent–divergent asymmetry of vergence dynamics (called dynamic asymmetry) for a disparity vergence step stimulus of 1° (60 arc min), (2) the dark vergence, and (3) the nonius bias. Methods. Fixation disparity, dark vergence, and nonius bias were measured subjectively using nonius lines. Dynamic vergence step responses (both convergent and divergent) were measured objectively. Results. In 20 subjects (mean age, 24.5 ± 4.3 years; visual acuity ≥ 1.0; all emmetropic except for one with myopia, wearing contact lenses), multiple regression analyses showed that 39% of the variance in subjective fixation disparity was due to the characteristic factors of physiological vergence: dynamic asymmetry (calculated from convergent and divergent velocities), and dark vergence. An additional 23% of variance was due to the subjective nonius bias (i.e., the physical nonius offset required for perceived alignment of binocularly [nondichoptically] presented nonius lines). Together, these factors explained 62% of the interindividual differences in subjectively measured fixation disparity, demonstrating the influence of oculomotor and perceptual factors. Conclusions. Clinically relevant subjective fixation disparity originates from distinct physiological sources. Dynamic asymmetry in vergence dynamics, resting vergence, and nonius bias were found to affect fixation disparity directly, not only via changes in vergence dynamics. |
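The variance partitioning reported in the Švede et al. abstract above (39% from vergence dynamics and dark vergence, a further 23% from nonius bias, 62% in total) corresponds to comparing the R² of nested multiple-regression models. The sketch below shows that comparison on synthetic data; the predictor names, sample size, and effect sizes are invented for illustration.

    # Minimal sketch, assuming synthetic data: how much additional variance does a
    # predictor ("nonius_bias") explain beyond the baseline predictors?
    import numpy as np

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit with an intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    rng = np.random.default_rng(2)
    n = 20
    dynamic_asymmetry = rng.normal(size=n)
    dark_vergence = rng.normal(size=n)
    nonius_bias = rng.normal(size=n)
    fixation_disparity = (0.6 * dynamic_asymmetry + 0.4 * dark_vergence
                          + 0.5 * nonius_bias + rng.normal(scale=0.5, size=n))

    base = np.column_stack([dynamic_asymmetry, dark_vergence])
    full = np.column_stack([dynamic_asymmetry, dark_vergence, nonius_bias])
    r2_base = r_squared(base, fixation_disparity)
    r2_full = r_squared(full, fixation_disparity)
    print(f"baseline R^2 = {r2_base:.2f}, added by nonius bias = {r2_full - r2_base:.2f}") |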
Agnieszka Szarkowska; Izabela Krejtz; Zuzanna Klyszejko; Anna Wieczorek Verbatim, standard, or edited? Reading patterns of different captioning styles among deaf, hard of hearing, and hearing viewers Journal Article In: American Annals of the Deaf, vol. 156, no. 4, pp. 363–378, 2011. @article{Szarkowska2011, One of the most frequently recurring themes in captioning is whether captions should be edited or verbatim. The authors report on the results of an eye-tracking study of captioning for deaf and hard of hearing viewers reading different types of captions. By examining eye movement patterns when these viewers were watching clips with verbatim, standard, and edited captions, the authors tested whether the three different caption styles were read differently by the study participants (N = 40): 9 deaf, 21 hard of hearing, and 10 hearing individuals. Interesting interaction effects for the proportion of dwell time and fixation count were observed. In terms of group differences, deaf participants differed from the other two groups only in the case of verbatim captions. The results are discussed with reference to classical reading studies, audiovisual translation, and a new concept of viewing speed. |
Martin Szinte; Patrick Cavanagh Spatiotopic apparent motion reveals local variations in space constancy Journal Article In: Journal of Vision, vol. 11, no. 2, pp. 1–20, 2011. @article{Szinte2011, While participants made 10° horizontal saccades, two dots were presented, one before and one after the saccade. Each dot was presented for 400 ms, the first turned off about 100 ms before, while the second turned on about 100 ms after the saccade. The two dots were separated vertically by 3°, but because of the intervening eye movement, they were also separated horizontally on the retina by an additional 10°. Participants nevertheless reported that the perceived motion was much more vertical than horizontal, suggesting that the trans-saccadic displacement was corrected, at least to some extent, for the retinal displacement caused by the eye movement. The corrections were not exact, however, showing significant biases that corresponded to about 5% of the saccade amplitude. The perceived motion between the probes was tested at 9 different locations and the biases, the deviations from accurate correction, varied significantly across locations. Two control experiments for judgments of position and of verticality of motion without eye movement confirmed that these biases are specific to the correction for the saccade. The local variations in the correction for saccades are consistent with physiological "remapping" proposals for space constancy that individually correct only a few attended targets but are not consistent with global mechanisms that predict the same correction at all locations. |
Bernard Marius 't Hart; Tilman Gerrit Jakob Abresch; Wolfgang Einhäuser Faces in places: Humans and machines make similar face detection errors Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e25373, 2011. @article{tHart2011, The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or a toast's roast patterns. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications ("Viola-Jones" algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections ("real faces"), false positives ("illusory faces") and correctly rejected locations ("non faces"). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance, but does not change the pattern of results. When replacing the eye movement by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm, when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features as those encoded in the early visual system. |
Zheng Tai; Richard W. Hertle; Richard A. Bilonick; Dongsheng Yang A new algorithm for automated nystagmus acuity function analysis Journal Article In: British Journal of Ophthalmology, vol. 95, no. 6, pp. 832–836, 2011. @article{Tai2011, Aims: We developed a new data analysis algorithm called the automated nystagmus acuity function (ANAF) to automatically assess nystagmus acuity function. We compared results from the ANAF with those of the well-known expanded nystagmus acuity function (NAFX). Methods: Using the ANAF and NAFX, we analysed 60 segments of nystagmus data collected with a video-based eye tracking system (EyeLink 1000) from 30 patients with infantile or fusion maldevelopment nystagmus. The ANAF algorithm used the best-foveation positions (not true foveation positions) and all data points in each nystagmus cycle to calculate a nystagmus acuity function. Results: The ANAF automatically produced a nystagmus acuity function in a few seconds because manual identification of foveation eye positions is not required. A structural equation model was used to compare the ANAF and NAFX. Both ANAF and NAFX have similar measurement imprecision and relatively little bias. The estimated bias was not statistically significant for either method or for replicates. Conclusions: We conclude that the ANAF is a valid and efficient algorithm for determining a nystagmus acuity function. |