All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014 |
R. Becket Ebitz; John M. Pearson; Michael L. Platt Pupil size and social vigilance in rhesus macaques Journal Article In: Frontiers in Neuroscience, vol. 8, pp. 100, 2014. @article{Ebitz2014,Complex natural environments favor the dynamic alignment of neural processing between goal-relevant stimuli and conflicting but biologically salient stimuli like social competitors or predators. The biological mechanisms that regulate dynamic changes in vigilance have not been fully elucidated. Arousal systems that ready the body to respond adaptively to threat may contribute to dynamic regulation of vigilance. Under conditions of constant luminance, pupil diameter provides a peripheral index of arousal state. Although pupil size varies with the processing of goal-relevant stimuli, it remains unclear whether pupil size also predicts attention to biologically salient objects and events like social competitors, whose presence interferes with current goals. Here we show that pupil size in rhesus macaques both reflects the biological salience of task-irrelevant social distractors and predicts vigilance for these stimuli. We measured pupil size in monkeys performing a visual orienting task in which distractors (monkey faces and phase-scrambled versions of the same images) could appear in a congruent, incongruent, or neutral position relative to a rewarded target. Baseline pupil size under constant illumination predicted distractor interference, consistent with the hypothesis that pupil-linked arousal mechanisms regulate task engagement and distractibility. Notably, pupil size also predicted enhanced vigilance for social distractors, suggesting that pupil-linked arousal may adjust the balance of processing resources between goal-relevant and biologically important stimuli. 
The magnitude of pupil constriction in response to distractors closely tracked distractor interference, saccade planning and the social relevance of distractors, endorsing the idea that the pupillary light response is modulated by attention. These findings indicate that pupil size indexes dynamic changes in attention evoked by both the social environment and arousal. |
Chandan Singh; Dhananjay Yadav; Jinho Lee Reader comprehension ranking by monitoring eye gaze using eye tracker Journal Article In: International Journal of Intelligent Systems Technologies and Applications, vol. 13, no. 4, pp. 294–307, 2014. @article{Singh2014,This paper concentrates on measuring the comprehension ability of a reader by calculating a reader ranking based on correct answer lines recorded by an eye gaze tracker and the number of correct answers given by the reader. Time is measured to find the answer line (page time T1) and time spent on the answer line (score time T2). The ratio (T2/T1) of these two time parameters plays a vital role in evaluating the rank of the reader. A score is awarded only if the reader reads the answer line(s) and then gives the correct answer; otherwise the score for that question is zero. Finally, the reader receives a score and a rank among the existing readers on the basis of the time ratio and the correctness of answers. |
Marta Castellano; Michael Plöchl; Raul Vicente; Gordon Pipa Neuronal oscillations form parietal/frontal networks during contour integration Journal Article In: Frontiers in Integrative Neuroscience, vol. 8, pp. 64, 2014. @article{Castellano2014,The ability to integrate visual features into a global coherent percept that can be further categorized and manipulated is a fundamental ability of the neural system. While the processing of visual information involves activation of early visual cortices, the recruitment of parietal and frontal cortices has been shown to be crucial for perceptual processes. Yet it is not clear how both cortical and long-range oscillatory activity leads to the integration of visual features into a coherent percept. Here, we investigate perceptual grouping through the analysis of a contour categorization task, where the local elements that form the contour must be linked into a coherent structure, which is then further processed and manipulated to perform the categorization task. The contour formation in our visual stimulus is a dynamic process where, for the first time, visual perception of contours is disentangled from the onset of visual stimulation or from motor preparation, cognitive processes that until now have been behaviorally attached to perceptual processes. Our main finding is that, while local and long-range synchronization at several frequencies seems to be an ongoing phenomenon, categorization of a contour could only be predicted through local oscillatory activity within parietal/frontal sources, which, in turn, would synchronize at gamma (>30 Hz) frequency. Simultaneously, fronto-parietal beta (13-30 Hz) phase locking forms a network spanning across neural sources that are not category specific. Both long-range networks, i.e., the gamma network that is category specific and the beta network that is not, are functionally distinct but spatially overlapping. 
Altogether, we show that a critical mechanism underlying contour categorization involves oscillatory activity within parietal/frontal cortices, as well as its synchronization across distal cortical sites. |
Andrea Helo; Sebastian Pannasch; Louah Sirri; Pia Rämä The maturation of eye movement behavior: Scene viewing characteristics in children and adults Journal Article In: Vision Research, vol. 103, pp. 83–91, 2014. @article{Helo2014,While the close link between eye movements and visual attention has often been demonstrated, recently distinct attentional modes have been associated with specific eye movement patterns. The ambient mode, serving the localization of objects and dominating early scene inspection, is expressed by short fixations and large saccade amplitudes. The focal mode, associated with the identification of object details and dominating later stages of scene exploration, is indicated by longer fixations embedded in short saccades. The relationship between these processing modes and eye movement characteristics has so far only been examined in adults. While studies in children revealed a maturation of oculomotor behavior up to adolescence, developmental aspects of the processing modes are still unknown. Here we explored these mechanisms by comparing eye movements during the inspection of naturalistic scenes. Gaze behavior from adults and children in four different age groups (2, 4–6, 6–8, and 8–10 years old) was examined. We found a general effect of age, revealing that with age fixation durations decrease and saccade amplitudes increase. However, in all age groups fixations were shorter and saccades were longer at the beginning of scene inspection, but fixations became longer and saccades became shorter over time. While saliency influenced eye guidance in the two youngest groups over the full inspection period, for the older groups this influence was found only at the beginning of scene inspection. The results reveal indications of ambient and focal processing strategies from as early as 2 years of age. |
Eyal M. Reingold; Mackenzie G. Glaholt Cognitive control of fixation duration in visual search: The role of extrafoveal processing Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 610–634, 2014. @article{Reingold2014a,Participants' eye movements were monitored in two visual search experiments that manipulated target-distractor similarity (high vs. low) as well as the availability of distractors for extrafoveal processing (Free-Viewing vs. No-Preview). The influence of the target-distractor similarity by preview manipulation on the distributions of first fixation and second fixation duration was examined by using a survival analysis technique which provided precise estimates of the timing of the first discernible influence of target-distractor similarity on fixation duration. We found a significant influence of target-distractor similarity on first fixation duration in normal visual search (Free-Viewing) as early as 26–28 ms from the start of fixation. In contrast, the influence of target-distractor similarity occurred much later (199–233 ms) in the No-Preview condition. The present study also documented robust and fast-acting extrafoveal and foveal preview effects. Implications for models of eye-movement control and visual search are discussed. |
Harold H. Greene; James M. Brown; Barry Dauphin When do you look where you look? A visual field asymmetry Journal Article In: Vision Research, vol. 102, pp. 33–40, 2014. @article{Greene2014,Pre-saccadic fixation durations associated with saccades directed in different directions were compared in three endogenous-attention oriented saccadic scanning tasks (i.e. visual search and scene viewing). Pre-saccadic fixation durations were consistently briefer before the execution of upward saccades, than downward saccades. Saccades also had a higher probability of being directed upwards than downwards. Pre-saccadic fixation durations were symmetric and longer for horizontally-directed saccades. The vertical visual field asymmetry in pre-saccadic fixation durations reflects an influence of factors not directly related to currently fixated elements. The ability to predict pre-saccadic fixation durations is important for computational modelling of real-time saccadic scanning, and the findings make a case for including directional constraints in computational modelling of when the eyes move. |
Paul J. Boon; Jan Theeuwes; Artem V. Belopolsky Updating visual-spatial working memory during object movement Journal Article In: Vision Research, vol. 94, pp. 51–57, 2014. @article{Boon2014,Working memory enables temporary maintenance and manipulation of information for immediate access by cognitive processes. The present study investigates how spatial information stored in working memory is updated during object movement. Participants had to remember a particular location on an object which, after a retention interval, started to move. The question was whether the memorized location was updated with the movement of the object or whether after object movement it remained represented in retinotopic coordinates. We used saccade trajectories to examine how memorized locations were represented. The results showed that immediately after the object stopped moving, there was both a retinotopic and an object-centered representation. However, 200 ms later, the activity at the retinotopic location decayed, making the memory representation fully object-centered. Our results suggest that memorized locations are updated from retinotopic to object-centered coordinates during, or shortly after, object movement. |
Olivia M. Maynard; Angela Attwood; Laura O'Brien; Sabrina Brooks; Craig Hedge; Ute Leonards; Marcus R. Munafò Avoidance of cigarette pack health warnings among regular cigarette smokers Journal Article In: Drug and Alcohol Dependence, vol. 136, no. 1, pp. 170–174, 2014. @article{Maynard2014,Background: Previous research with adults and adolescents indicates that plain cigarette packs increase visual attention to health warnings among non-smokers and non-regular smokers, but not among regular smokers. This may be because regular smokers: (1) are familiar with the health warnings, (2) preferentially attend to branding, or (3) actively avoid health warnings. We sought to distinguish between these explanations using eye-tracking technology. Method: A convenience sample of 30 adult dependent smokers participated in an eye-tracking study. Participants viewed branded, plain and blank packs of cigarettes with familiar and unfamiliar health warnings. The number of fixations to health warnings and branding on the different pack types were recorded. Results: Analysis of variance indicated that regular smokers were biased towards fixating the branding rather than the health warning on all three pack types. This bias was smaller, but still evident, for blank packs, where smokers preferentially attended the blank region over the health warnings. Time-course analysis showed that for branded and plain packs, attention was preferentially directed to the branding location for the entire 10 s of the stimulus presentation, while for blank packs this occurred for the last 8 s of the stimulus presentation. Familiarity with health warnings had no effect on eye gaze location. Conclusion: Smokers actively avoid cigarette pack health warnings, and this remains the case even in the absence of salient branding information. Smokers may have learned to divert their attention away from cigarette pack health warnings. 
These findings have implications for cigarette packaging and health warning policy. |
Kasey S. Hemington; James N. Reynolds In: Clinical Neurophysiology, vol. 125, no. 12, pp. 2364–2371, 2014. @article{Hemington2014,Objective: Children with Fetal Alcohol Spectrum Disorder (FASD) exhibit cognitive deficits that can be probed using eye movement tasks. We employed a recently developed, single-sensor electroencephalographic (EEG) recording device in measuring EEG activity during the performance of an eye movement task probing working memory in this population. Methods: Children with FASD (n = 18) and typically developing children (n = 19) performed a memory-guided saccade task requiring the participant to remember the spatial location of one, two or three stimuli. We hypothesized that children with FASD would (i) exhibit performance deficits, particularly at greater mnemonic loads; and (ii) display differences in theta (4-8 Hz) and alpha (8-12 Hz) frequency band power compared with controls. Results: Children with FASD failed to perform the task correctly more often than controls when presented with two or three stimuli, and demonstrated related reductions in alpha and theta power. Conclusion: These data suggest that the memory-guided task is sensitive to working memory deficits in children with FASD. Significance: Simultaneous recording of EEG activity suggests differing patterns of underlying neural recruitment in the clinical group, consistent with previous literature indicating that more cognitive resources are required by children with FASD in order to complete complex tasks correctly. |
Shanna C. Yeung; Cristina Rubino; Jaya Viswanathan; Jason J. S. Barton The inter-trial effect of prepared but not executed antisaccades Journal Article In: Experimental Brain Research, vol. 232, no. 12, pp. 3699–3705, 2014. @article{Yeung2014,A preceding antisaccade increases the latency of the saccade in the next trial. Whether this inter-trial effect is generated by the preparation or the execution of the antisaccade is not certain. Our goal was to examine the inter-trial effects from trials on which subjects prepared an antisaccade but did not make one. We tested 15 subjects on blocks of randomly ordered prosaccades and antisaccades. An instructional cue at fixation indicated whether a prosaccade or antisaccade was required, with the target appearing 2 s later. On 20 % of antisaccade trials, the target did not appear (prepared-only antisaccade trials). We analyzed the latencies of all correct prosaccades or antisaccades preceded by correctly executed trials. The latencies of prosaccade trials were 15 ms shorter if they were preceded by prosaccades than if the prior trial was an antisaccade. Prosaccades preceded by trials on which antisaccades were cued but not executed also showed prolonged latencies that were equivalent to those preceded by executed antisaccades. We conclude that the inter-trial effects from a prior antisaccade are generated by its preparation rather than its execution. This may reflect persistence of pre-target preparatory activity from the prior trial to affect that of the next trial in structures like the superior colliculus and frontal eye field. |
Rosanna K. Olsen; Mark Chiew; Bradley R. Buchsbaum; Jennifer D. Ryan The relationship between delay period eye movements and visuospatial memory Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–11, 2014. @article{Olsen2014,We investigated whether overt shifts of attention were associated with visuospatial memory performance. Participants were required to study the locations of a set of visual objects and subsequently detect changes to the spatial location of one of the objects following a brief delay period. Relational information regarding the locations among all of the objects could be used to support performance on the task (Experiment 1) or relational information was removed during test and location manipulation judgments had to be made for a singly presented target item (Experiment 2). We computed the similarity of the fixation patterns in space during the study phase to the fixations made during the delay period. Greater fixation pattern similarity across participants was associated with higher accuracy when relational information was available at test (Experiment 1); however, this association was not observed when the target item was presented in isolation during the test display (Experiment 2). Similarly, increased fixation pattern similarity on a given trial (within participants) was associated with successful task performance when the relations among studied items could be used for comparison (Experiment 1), but not when memory for absolute spatial location was assessed (Experiment 2). This pattern of behavior and performance on the two tasks suggested that eye movements facilitated memory for the relationships among objects. Shifts of attention through eye movements may provide a mechanism for the maintenance of relational visuospatial memory. |
Axel Larsen Deconstructing mental rotation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1072–1091, 2014. @article{Larsen2014,A random walk model of the classical mental rotation task is explored in two experiments. By assuming that a mental rotation is repeated until sufficient evidence for a match/mismatch is obtained, the model accounts for the approximately linearly increasing reaction times (RTs) on positive trials, flat RTs on negative trials, false alarm and miss rates, effects of complexity, and for the number of eye movement switches between stimuli as functions of angular difference in orientation. Analysis of eye movements supports key aspects of the model and shows that initial processing time is roughly constant until the first saccade switch between stimulus objects, while the duration of the remaining trial increases approximately linearly as a function of angular discrepancy. The increment results from additive effects of (a) a linear increase in the number of saccade switches between stimulus objects, (b) a linear increase in the number of saccades on a stimulus, and (c) a linear increase in the number and in the duration of fixations on a stimulus object. The fixation duration increment was the same on simple and complex trials (about 15 ms per 60°), which suggests that the critical orientation alignment takes place during fixations at very high speed. |
Cai S. Longman; Aureliu Lavric; Cristian Munteanu; Stephen Monsell Attentional inertia and delayed orienting of spatial attention in task-switching Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1580–1602, 2014. @article{Longman2014,Among the potential, but neglected, sources of task-switch costs is the need to reallocate attention to different attributes or objects. Even theorists who recognize the importance of attentional resetting in task-switching sometimes think it too efficient to result in significant behavioral costs. We examined the dynamics of spatial attention in a task-cuing paradigm using eye-tracking. Digits appeared simultaneously at 3 locations. A cue preceded this display by a variable interval, instructing the performance of 1 of 3 classification tasks (odd-even, low-high, inner-outer) each consistently associated with a location, so that task preparation could be tracked via fixation of the task-relevant location. Task-switching led to a delay in selecting the relevant location and a tendency to misallocate attention; the previously relevant location attracted attention much more than the other irrelevant location on switch trials, indicating "inertia" in attentional parameters rather than mere distractibility. These effects predicted reaction time switch costs within and over participants. The switch-induced delay was not confined to trials with slow/late orienting, but characteristic of most switch trials. The attentional pull of the previously relevant location was substantially reduced, but not eliminated, by extending the preparation interval to more than 1 sec, suggesting that attentional inertia contributes to the "residual" switch cost. 
A control condition, using identical displays but only 1 task, showed that these effects could not be attributed to the (small and transient) delays or inertia observed when the required orientation changed between trials in the absence of a task change. |
Mehrdad Seirafi; Peter De Weerd; Beatrice De Gelder Suppression of face perception during saccadic eye movements Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–7, 2014. @article{Seirafi2014,Lack of awareness of a stimulus briefly presented during a saccadic eye movement is known as saccadic omission. Studying the reduced visibility of visual stimuli around the time of a saccade, known as saccadic suppression, is a key step in investigating saccadic omission. To date, almost all studies have focused on the reduced visibility of simple stimuli such as flashes and bars. The extension of the results from simple stimuli to more complex objects has been neglected. In two experimental tasks, we measured the subjective and objective awareness of briefly presented face stimuli during saccadic eye movements. In the first task, we measured the subjective awareness of the visual stimuli and showed that in most of the trials there is no conscious awareness of the faces. In the second task, we measured objective sensitivity in a two-alternative forced choice (2AFC) face detection task, which demonstrated chance-level performance. Here, we provide the first evidence of complete suppression of complex visual stimuli during saccadic eye movements. |
Kyoung Whan Choe; Randolph Blake; Sang-Hun Lee Dissociation between neural signatures of stimulus and choice in population activity of human V1 during perceptual decision-making Journal Article In: Journal of Neuroscience, vol. 34, no. 7, pp. 2725–2743, 2014. @article{Choe2014,Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same neural ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. |
Adam M. Larson; Tyler E. Freeman; Ryan V. Ringer; Lester C. Loschky The spatiotemporal dynamics of scene gist recognition Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 2, pp. 471–487, 2014. @article{Larson2014,Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image's basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space: within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward. |
Xingshan Li; Klinton Bicknell; Pingping Liu; Wei Wei; Keith Rayner In: Journal of Experimental Psychology: General, vol. 143, no. 2, pp. 895–913, 2014. @article{Li2014,While much previous work on reading in languages with alphabetic scripts has suggested that reading is word-based, reading in Chinese has been argued to be less reliant on words. This is primarily because in the Chinese writing system words are not spatially segmented, and characters are themselves complex visual objects. Here, we present a systematic characterization of the effects of a wide range of word and character properties on eye movements in Chinese reading, using a set of mixed-effects regression models. The results reveal a rich pattern of effects of the properties of the current, previous, and next words on a range of reading measures, which is strikingly similar to the pattern of effects of word properties reported in spaced alphabetic languages. This finding provides evidence that reading shares a word-based core and may be fundamentally similar across languages with highly dissimilar scripts. We show that these findings are robust to the inclusion of character properties in the regression models and are equally reliable when dependent measures are defined in terms of characters rather than words, providing strong evidence that word properties have effects in Chinese reading above and beyond characters. This systematic characterization of the effects of word and character properties in Chinese advances our knowledge of the processes underlying reading and informs the future development of models of reading. More generally, however, this work suggests that differences in script may not alter the fundamental nature of reading. |
Daniel P. Newman; Gerard M. Loughnane; Rafael Abe; Marco T. R. Zoratti; Ana C. P. Martins; Petra C. Bogert; Simon P. Kelly; Redmond G. O'Connell; Mark A. Bellgrove Differential shift in spatial bias over time depends on observers' initial bias: Observer subtypes, or regression to the mean? Journal Article In: Neuropsychologia, vol. 64, pp. 33–40, 2014. @article{Newman2014,Healthy subjects typically exhibit a subtle bias of visuospatial attention favouring left space that is commonly termed 'pseudoneglect'. This bias is attenuated, or shifted rightwards, with decreasing alertness over time, consistent with theoretical models proposing that pseudoneglect is a result of the right hemisphere's dominance in regulating attention. Although this 'time-on-task effect' for spatial bias is observed when averaging across whole samples of healthy participants, Benwell, Thut, Learmonth, and Harvey (2013b, Neuropsychologia, 51(13), 2747–2756) recently presented evidence that the direction and magnitude of bias exhibited by the participant early in the task (left biased, no bias, or right biased) were stable traits that predicted the direction of the subsequent time-on-task shift in spatial bias. That is, the spatial bias of participants who were initially left biased shifted in a rightward direction with time, whereas that of participants who were initially right biased shifted in a leftward direction. If valid, the data of Benwell et al. are potentially important and may demand a re-evaluation of current models of the neural networks governing spatial attention. Here we use two novel spatial attention tasks in an attempt to confirm the results of Benwell et al. 
We show that rather than being indicative of true participant subtypes, these data patterns are likely driven, at least in part, by 'regression towards the mean' arising from the analysis method employed. Although evidence supports the contention that trait-like individual differences in spatial bias exist within the healthy population, no clear evidence is yet available for participant/observer subtypes in the direction of time-on-task shift in spatial biases. |
Jianliang Tong; Jun Maruta; Kristin J. Heaton; Alexis L. Maule; Jamshid Ghajar Adaptation of visual tracking synchronization after one night of sleep deprivation Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 121–131, 2014. @article{Tong2014,The temporal delay between sensory input and motor execution is a fundamental constraint in interactions with the environment. Predicting the temporal course of a stimulus and dynamically synchronizing the required action with the stimulus are critical for offsetting this constraint, and this prediction-synchronization capacity can be tested using visual tracking of a target with predictable motion. Although the role of temporal prediction in visual tracking is assumed, little is known of how internal predictions interact with the behavioral outcome or how changes in the cognitive state influence such interaction. We quantified and compared the predictive visual tracking performance of military volunteers before and after one night of sleep deprivation. The moment-to-moment synchronization of visual tracking during sleep deprivation deteriorated with sensitivity changes greater than 40 %. However, increased anticipatory saccades maintained the overall temporal accuracy with near zero phase error. Results suggest that acute sleep deprivation induces instability in visuomotor prediction, but there is compensatory visuomotor adaptation. Detection of these visual tracking features may aid in the identification of insufficient sleep. |
Mark Wexler; Thérèse Collins Orthogonal steps relieve saccadic suppression Journal Article In: Journal of vision, vol. 14, no. 2, pp. 1–9, 2014. @article{Wexler2014,Although the retinal position of objects changes with each saccadic eye movement, we perceive the visual world to be stable. How this visual stability or constancy arises is debated. Cancellation accounts propose that the retinal consequences of eye movements are compensated for by an equal-but-opposite eye movement signal. Assumption accounts propose that saccade-induced retinal displacements are ignored because we have a prior belief in a stable world. Saccadic suppression of displacement-the fact that small displacements of the visual targets during saccades go unnoticed-argues in favor of assumption accounts. Extinguishing the target before the displacement unmasks it, arguing in favor of cancellation accounts. We show that an irrelevant displacement of the target orthogonal to saccade direction unmasks displacements parallel to saccade direction, and therefore relieves saccadic suppression of displacement. This result suggests that visual stability arises from the interplay between cancellation and assumption mechanisms: When the post-saccadic target position falls within an elliptic region roughly equivalent to habitual saccadic variability, displacements are not seen and stability is assumed. When the displacements fall outside this region, as with our orthogonal steps, displacements are seen and positions are remapped. |
Mari Anzai; Soichi Nagao Motor learning in common marmosets: Vestibulo-ocular reflex adaptation and its sensitivity to inhibitors of Purkinje cell long-term depression Journal Article In: Neuroscience Research, vol. 83, pp. 33–42, 2014. @article{Anzai2014,Adaptation of the horizontal vestibulo-ocular reflex (HVOR) provides an experimental model for cerebellum-dependent motor learning. We developed an eye movement measuring system and a paradigm for induction of HVOR adaptation for the common marmoset. The HVOR gain in the dark, measured with 10° (peak-to-peak amplitude), 0.11–0.5 Hz turntable oscillation, was around unity. Gain-up and gain-down HVOR adaptation was induced by 1 h of sustained out-of-phase and in-phase 10°, 0.33 Hz combined turntable-screen oscillation in the light, respectively. To examine the role of long-term depression (LTD) of parallel fiber-Purkinje cell synapses, we intraperitoneally applied T-588 or nimesulide, which block the induction of LTD in vitro or in vivo preparations, 1 h before the test of HVOR adaptation. T-588 (3 and 5 mg/kg body weight) did not affect nonadapted HVOR gains but impaired both gain-up and gain-down HVOR adaptation. Nimesulide (3 and 6 mg/kg) did not affect nonadapted HVOR gains and impaired gain-up HVOR adaptation dose-dependently; however, it had very little effect on gain-down HVOR adaptation. These findings are consistent with the results of our study of nimesulide on the adaptation of the horizontal optokinetic response in mice (Le et al., 2010), and support the view that LTD underlies HVOR adaptation. |
Kohitij Kar; Bart Krekelberg Transcranial alternating current stimulation attenuates visual motion adaptation Journal Article In: Journal of Neuroscience, vol. 34, no. 21, pp. 7334–7340, 2014. @article{Kar2014,Transcranial alternating current stimulation (tACS) is used in clinical applications and basic neuroscience research. Although its behavioral effects are evident from prior reports, current understanding of the mechanisms that underlie these effects is limited. We used motion perception, a percept with relatively well-known properties and underlying neural mechanisms, to investigate tACS mechanisms. Healthy human volunteers showed a surprising improvement in motion sensitivity when visual stimuli were paired with 10 Hz tACS. In addition, tACS reduced the motion aftereffect, and this reduction was correlated with the improvement in motion sensitivity. Electrical stimulation had no consistent effect when applied before presenting a visual stimulus or during recovery from motion adaptation. Together, these findings suggest that perceptual effects of tACS result from an attenuation of adaptation. Important consequences for the practical use of tACS follow from our work. First, because this mechanism interferes only with adaptation, tACS can be targeted at subsets of neurons (by adapting them), even when the applied currents spread widely throughout the brain. Second, by interfering with adaptation, this mechanism provides a means by which electrical stimulation can generate behavioral effects that outlast the stimulation. |
James F. Cavanagh; Thomas V. Wiecki; Angad Kochar; Michael J. Frank Eye tracking and pupillometry are indicators of dissociable latent decision processes Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 4, pp. 1476–1488, 2014. @article{Cavanagh2014,Can you predict what people are going to do just by watching them? This is certainly difficult: it would require a clear mapping between observable indicators and unobservable cognitive states. In this report, we demonstrate how this is possible by monitoring eye gaze and pupil dilation, which predict dissociable biases during decision making. We quantified decision making using the drift diffusion model (DDM), which provides an algorithmic account of how evidence accumulation and response caution contribute to decisions through separate latent parameters of drift rate and decision threshold, respectively. We used a hierarchical Bayesian estimation approach to assess the single trial influence of observable physiological signals on these latent DDM parameters. Increased eye gaze dwell time specifically predicted an increased drift rate toward the fixated option, irrespective of the value of the option. In contrast, greater pupil dilation specifically predicted an increase in decision threshold during difficult decisions. These findings suggest that eye tracking and pupillometry reflect the operations of dissociated latent decision processes. |
Nida Latif; Arlene Gehmacher; Monica S. Castelhano; Kevin G. Munhall The art of gaze guidance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 33–39, 2014. @article{Latif2014,An ongoing challenge in scene perception is identifying the factors that influence how we explore our visual world. By using multiple versions of paintings as a tool to control for high-level influences, we show that variation in the visual details of a painting causes differences in observers' gaze despite constant task and content. Further, we show that by switching locations of highly salient regions through textural manipulation, a corresponding switch in eye movement patterns is observed. Our results present the finding that salient regions and gaze behavior are not simply correlated; variation in saliency through textural differences causes an observer to direct their viewing accordingly. This work demonstrates the direct contribution of low-level factors in visual exploration by showing that examination of a scene, even for aesthetic purposes, can be easily manipulated by altering the low-level properties and hence, the saliency of the scene. |
D. A. Barany; V. Della-Maggiore; Shivakumar Viswanathan; M. Cieslak; Scott T. Grafton Feature interactions enable decoding of sensorimotor transformations for goal-directed movement Journal Article In: Journal of Neuroscience, vol. 34, no. 20, pp. 6860–6873, 2014. @article{Barany2014,Neurophysiology and neuroimaging evidence shows that the brain represents multiple environmental and body-related features to compute transformations from sensory input to motor output. However, it is unclear how these features interact during goal-directed movement. To investigate this issue, we examined the representations of sensory and motor features of human hand movements within the left-hemisphere motor network. In a rapid event-related fMRI design, we measured cortical activity as participants performed right-handed movements at the wrist, with either of two postures and two amplitudes, to move a cursor to targets at different locations. Using a multivoxel analysis technique with rigorous generalization tests, we reliably distinguished representations of task-related features (primarily target location, movement direction, and posture) in multiple regions. In particular, we identified an interaction between target location and movement direction in the superior parietal lobule, which may underlie a transformation from the location of the target in space to a movement vector. In addition, we found an influence of posture on primary motor, premotor, and parietal regions. Together, these results reveal the complex interactions between different sensory and motor features that drive the computation of sensorimotor transformations. |
Jennie E. S. Choi; Pavan A. Vaswani; Reza Shadmehr Vigor of movements and the cost of time in decision making Journal Article In: Journal of Neuroscience, vol. 34, no. 4, pp. 1212–1223, 2014. @article{Choi2014,If we assume that the purpose of a movement is to acquire a rewarding state, the duration of the movement carries a cost because it delays acquisition of reward. For some people, passage of time carries a greater cost, as evidenced by how long they are willing to wait for a rewarding outcome. These steep discounters are considered impulsive. Is there a relationship between cost of time in decision making and cost of time in control of movements? Our theory predicts that people who are more impulsive should in general move faster than subjects who are less impulsive. To test our idea, we considered elementary voluntary movements: saccades of the eye. We found that in humans, saccadic vigor, assessed using velocity as a function of amplitude, was as much as 50% greater in one subject than another; that is, some people consistently moved their eyes with high vigor. We measured the cost of time in a decision-making task in which the same subjects were given a choice between smaller odds of success immediately and better odds if they waited. We measured how long they were willing to wait to obtain the better odds and how much they increased their wait period after they failed. We found that people that exhibited greater vigor in their movements tended to have a steep temporal discount function, as evidenced by their waiting patterns in the decision-making task. The cost of time may be shared between decision making and motor control. |
Stephan Geuter; Matthias Gamer; Selim Onat; Christian Büchel Parametric trial-by-trial prediction of pain by easily available physiological measures Journal Article In: Pain, vol. 155, no. 5, pp. 994–1001, 2014. @article{Geuter2014,Pain is commonly assessed by subjective reports on rating scales. However, in many experimental and clinical settings, an additional, objective indicator of pain is desirable. In order to identify an objective, parametric signature of pain intensity that is predictive at the individual stimulus level across subjects, we recorded skin conductance and pupil diameter responses to heat pain stimuli of different durations and temperatures in 34 healthy subjects. The temporal profiles of trial-wise physiological responses were characterized by component scores obtained from principal component analysis. These component scores were then used as predictors in a linear regression analysis, resulting in accurate pain predictions for individual trials. Using the temporal information encoded in the principal component scores explained the data better than prediction by a single summary statistic (i.e., maximum amplitude). These results indicate that perceived pain is best reflected by the temporal dynamics of autonomic responses. Application of the regression model to an independent data set of 20 subjects resulted in a very good prediction of the pain ratings, demonstrating the generalizability of the identified temporal pattern. Utilizing the readily available temporal information from skin conductance and pupil diameter responses thus allows parametric prediction of pain in human subjects. |
Yu-Cin Jian; Hwa-Wei Ko Investigating the effects of background knowledge on Chinese word processing during text reading: Evidence from eye movements Journal Article In: Journal of Research in Reading, vol. 37, pp. S71–S86, 2014. @article{Jian2014a,This study investigates the effects of background knowledge on Chinese word processing during silent reading by monitoring adult readers' eye movements. Both higher knowledge (physics major) and lower knowledge (nonphysics major) graduate students were given physics texts to read. Higher knowledge readers spent less time rereading and had lower regression rates on unfamiliar physics words and common words in physics texts than did lower knowledge readers; they also had shorter gaze durations and fewer first-pass fixations on familiar physics words than on unfamiliar physics words. For unfamiliar physics words and common words, both groups predominantly fixated first on the beginnings of words when they made multiple fixations on a word and on a left-of-centre location when they fixated only once on a word. These findings suggest that both groups comprise mature readers with strong language concepts. However, differences in background knowledge led to different reading processes at different stages of reading. |
R. Chris Miall; Se-Ho Nam; J. Tchalenko The influence of stimulus format on drawing-a functional imaging study of decision making in portrait drawing Journal Article In: NeuroImage, vol. 102, pp. 608–619, 2014. @article{Miall2014,To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye-hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. |
Selim Onat; Alper Açik; Frank Schumann; Peter König The contributions of image content and behavioral relevancy to overt attention Journal Article In: PLoS ONE, vol. 9, no. 4, pp. e93254, 2014. @article{Onat2014,During free-viewing of natural scenes, eye movements are guided by bottom-up factors inherent to the stimulus, as well as top-down factors inherent to the observer. The question of how these two different sources of information interact and contribute to fixation behavior has recently received a lot of attention. Here, a battery of 15 visual stimulus features was used to quantify the contribution of stimulus properties during free-viewing of 4 different categories of images (Natural, Urban, Fractal and Pink Noise). Behaviorally relevant information was estimated in the form of topographical interestingness maps by asking an independent set of subjects to click at image regions that they subjectively found most interesting. Using a Bayesian scheme, we computed saliency functions that described the probability of a given feature to be fixated. In the case of stimulus features, the precise shape of the saliency functions was strongly dependent upon image category and overall the saliency associated with these features was generally weak. When testing multiple features jointly, a linear additive integration model of individual saliencies performed satisfactorily. We found that the saliency associated with interesting locations was much higher than any low-level image feature and any pair-wise combination thereof. Furthermore, the low-level image features were found to be maximally salient at those locations that had already high interestingness ratings. Temporal analysis showed that regions with high interestingness ratings were fixated as early as the third fixation following stimulus onset. Paralleling these findings, fixation durations were found to be dependent mainly on interestingness ratings and to a lesser extent on the low-level image features. 
Our results suggest that both low- and high-level sources of information play a significant role during exploration of complex scenes with behaviorally relevant information being more effective compared to stimulus features. |
Jingxin Wang; Jing Tian; Weijin Han; Simon P. Liversedge; Kevin B. Paterson Inhibitory stroke neighbour priming in character recognition and reading in Chinese Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 11, pp. 2149–2171, 2014. @article{Wang2014d,In alphabetic languages, prior exposure to a target word's orthographic neighbour influences word recognition in masked priming experiments and the process of word identification that occurs during normal reading. We investigated whether similar neighbour priming effects are observed in Chinese in 4 masked priming experiments (employing a forward mask and 33-ms, 50-ms, and 67-ms prime durations) and in an experiment that measured eye movements while reading. In these experiments, the stroke neighbour of a Chinese character was defined as any character that differed by the addition, deletion, or substitution of one or two strokes. Prime characters were either stroke neighbours or stroke non-neighbours of the target character, and each prime character had either a higher or a lower frequency of occurrence in the language than its corresponding target character. Frequency effects were observed in all experiments, demonstrating that the manipulation of character frequency was successful. In addition, a robust inhibitory priming effect was observed in response times for target characters in the masked priming experiments and in eye fixation durations for target characters in the reading experiment. This stroke neighbour priming was not modulated by the relative frequency of the prime and target characters. The present findings therefore provide a novel demonstration that inhibitory neighbour priming shown previously for alphabetic languages is also observed for nonalphabetic languages, and that neighbour priming (based on stroke overlap) occurs at the level of the character in Chinese. |
John M. Henderson; Wonil Choi; Steven G. Luke Morphology of primary visual cortex predicts individual differences in fixation duration during text reading Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 12, pp. 2880–2888, 2014. @article{Henderson2014a,In skilled reading, fixations are brief periods of time in which the eyes settle on words. E-Z Reader, a computational model of dynamic reading, posits that fixation durations are under real-time control of lexical processing. Lexical processing, in turn, requires efficient visual encoding. Here we tested the hypothesis that individual differences in fixation durations are related to individual differences in the efficiency of early visual encoding. To test this hypothesis, we recorded participants' eye movements during reading. We then examined individual differences in fixation duration distributions as a function of individual differences in the morphology of primary visual cortex measured from MRI scans. The results showed that greater gray matter surface area and volume in visual cortex predicted shorter and less variable fixation durations in reading. These results suggest that individual differences in eye movements during skilled reading are related to initial visual encoding, consistent with models such as E-Z Reader that emphasize lexical control over fixation time. |
Mina Choi; Joel Wang; Wei Chung Cheng; Giovanni Ramponi; Luigi Albani; Aldo Badano Effect of veiling glare on detectability in high-dynamic-range medical images Journal Article In: IEEE/OSA Journal of Display Technology, vol. 10, no. 5, pp. 420–428, 2014. @article{Choi2014a,We describe a methodology for predicting the detectability of subtle targets in dark regions of high-dynamic-range (HDR) images in the presence of veiling glare in the human eye. The method relies on predictions of contrast detection thresholds for the human visual system within a HDR image based on psychophysics measurements and modeling of the HDR display device characteristics. We present experimental results used to construct the model and discuss an image-dependent empirical veiling glare model and the validation of the model predictions with test patterns, natural scenes, and medical images. The model predictions are compared to a previously reported model (HDR-VDP2) for predicting HDR image quality accounting for glare effects. |
Ali Borji; Laurent Itti Defending Yarbus: Eye movements reveal observers' task Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–22, 2014. @article{Borji2014,In a very influential yet anecdotal illustration, Yarbus suggested that human eye-movement patterns are modulated top down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Henderson, Shinkareva, Wang, Luke, & Olejarczyk, 2013; Iqbal & Bailey, 2004), Greene, Liu, and Wolfe (2012) argued against it by reporting a failure. In this study, we perform a more systematic investigation of this problem, probing a larger number of experimental factors than previously. Our main goal is to determine the informativeness of eye movements for task and mental state decoding. We perform two experiments. In the first experiment, we reanalyze the data from a previous study by Greene et al. (2012) and contrary to their conclusion, we report that it is possible to decode the observer's task from aggregate eye-movement features slightly but significantly above chance, using a Boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.0722e-04). In the second experiment, we repeat and extend Yarbus's original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus's scene) under Yarbus's seven questions. We show that task decoding is possible, also moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.4535e-06). We thus conclude that Yarbus's idea is supported by our data and continues to be an inspiration for future computational and experimental eye-movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future directions in task decoding from eye movements. |
Wonil Choi; Rutvik H. Desai; John M. Henderson The neural substrates of natural reading: A comparison of normal and nonword text using eyetracking and fMRI Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 1024, 2014. @article{Choi2014c,Most previous studies investigating the neural correlates of reading have presented text using serial visual presentation (SVP), which may not fully reflect the underlying processes of natural reading. In the present study, eye movements and BOLD data were collected while subjects either read normal paragraphs naturally or moved their eyes through "paragraphs" of pseudo-text (pronounceable pseudowords or consonant letter strings) in two pseudo-reading conditions. Eye movement data established that subjects were reading and scanning the stimuli normally. A conjunction fMRI analysis across natural- and pseudo-reading showed that a common eye-movement network including frontal eye fields (FEF), supplementary eye fields (SEF), and intraparietal sulci was activated, consistent with previous studies using simpler eye movement tasks. In addition, natural reading versus pseudo-reading showed different patterns of brain activation: normal reading produced activation in a well-established language network that included superior temporal gyrus/sulcus, middle temporal gyrus (MTG), angular gyrus (AG), inferior frontal gyrus, and middle frontal gyrus, whereas pseudo-reading produced activation in an attentional network that included anterior/posterior cingulate and parietal cortex. These results are consistent with results found in previous single-saccade eye movement tasks and SVP reading studies, suggesting that component processes of eye-movement control and language processing observed in past fMRI research generalize to natural reading. The results also suggest that combining eyetracking and fMRI is a suitable method for investigating the component processes of natural reading in fMRI research. |
Heeju Hwang; Elsi Kaiser The role of the verb in grammatical function assignment in English and Korean Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 5, pp. 1363–1376, 2014. @article{Hwang2014,One of the central questions in speech production is how speakers decide which entity to assign to which grammatical function. According to the lexical hypothesis (e.g., Bock & Levelt, 1994), verbs play a key role in this process (e.g., "send" and "receive" result in different entities being assigned to the subject position). In contrast, according to the structural hypothesis (e.g., Bock, Irwin, & Davidson, 2004), grammatical functions can be assigned based on a speaker's conceptual representation of an event, even before a particular verb is chosen. In order to examine the role of the verb in grammatical function assignment, we investigated whether English and Korean speakers exhibit semantic interference effects for verbs during a scene description task. We also analyzed speakers' eye movements during production. We found that English speakers exhibited verb interference effects and also fixated the action/verb region before the subject region. In contrast, Korean speakers did not show any verb interference effects and did not fixate the action/verb region before the subject region. Rather, in Korean, looks to the action/verb region sharply increased following looks to the object region. The findings provide evidence for the lexical hypothesis for English and are compatible with the structural hypothesis for Korean. We suggest that whether the verb is retrieved before speech onset depends on the role that the verb plays in grammatical function assignment or structural choice in a particular language. |
Yu-Cin Jian; Chao-Jung Wu; Jia-Han Su Learners' eye movements during construction of mechanical kinematic representations from static diagrams Journal Article In: Learning and Instruction, vol. 32, pp. 51–62, 2014. @article{Jian2014,We investigated the influence of numbered arrows on construction of mechanical kinematic representations by using static diagrams. Undergraduate participants viewed a two-stage diagram depicting a flushing cistern (with or without numbered arrows) and answered questions about its function, step-by-step. The arrow group demonstrated greater overall accuracy and made fewer errors on the measure of continuous relations than did the non-arrow group. The arrow group also spent more time looking at components relevant to the operational sequence and had longer first-pass fixation times and shorter saccade lengths. The non-arrow group made more saccades between the two diagrams. Analysis of transition probabilities indicated that both groups viewed components according to their continuous relations. The arrow group followed the numbered arrows whereas the unique pathway of the non-arrow group was to compare the two diagrams. These findings indicate that numbered arrows provide perceptual information but also facilitate cognitive processing. |
Wonil Choi; Peter C. Gordon Word skipping during sentence reading: effects of lexicality on parafoveal processing Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 1, pp. 201–213, 2014. @article{Choi2014b,Two experiments examined how lexical status affects the targeting of saccades during reading by using the boundary technique to vary independently the content of a letter string when seen in parafoveal preview and when directly fixated. Experiment 1 measured the skipping rate for a target word embedded in a sentence under three parafoveal preview conditions: full preview (e.g., brain-brain), pseudohomophone preview (e.g., brane-brain), and orthographic nonword control preview (e.g., brant-brain); in the first condition, the preview string was always an English word, while in the second and third conditions, it was always a nonword. Experiment 2 investigated three conditions where the preview string was always a word: full preview (e.g., beach-beach), homophone preview (e.g., beech-beach), and orthographic control preview (e.g., bench-beach). None of the letter string manipulations used to create the preview conditions in the experiments disrupted sublexical orthographic or phonological patterns. In Experiment 1, higher skipping rates were observed for the full (lexical) preview condition, which consisted of a word, than for the nonword preview conditions (pseudohomophone and orthographic control). In contrast, Experiment 2 showed no difference in skipping rates across the three types of lexical preview conditions (full, homophone, and orthographic control), although preview type did influence reading times. This pattern indicates that skipping not only depends on the presence of disrupted sublexical patterns of orthography or phonology, but also is critically dependent on processes that are sensitive to the lexical status of letter strings in the parafovea. |
Jörn M. Horschig; Ole Jensen; Martine R. Schouwenburg; Roshan Cools; Mathilde Bonnefond Alpha activity reflects individual abilities to adapt to the environment Journal Article In: NeuroImage, vol. 89, pp. 235–243, 2014. @article{Horschig2014,Recent findings suggest that oscillatory alpha activity (7–13 Hz) is associated with functional inhibition of sensory regions by filtering incoming information. Accordingly, the alpha power in visual regions varies in anticipation of upcoming, predictable stimuli, which has consequences for visual processing and subsequent behavior. In covert spatial attention studies it has been demonstrated that performance correlates with the adaptation of alpha power in response to explicit spatial cueing. However, it remains unknown whether such an adaptation also occurs in response to implicit statistical properties of a task. In a covert attention switching paradigm, we here show evidence that individuals differ in how they adapt to implicit statistical properties of the task. Subjects whose behavioral performance reflects the implicit change in switch trial likelihood show strong adjustment of anticipatory alpha power lateralization. Most importantly, the stronger the behavioral adjustment to the switch trial likelihood was, the stronger the adjustment of anticipatory posterior alpha lateralization. We conclude that anticipatory spatial attention is reflected in the distribution of posterior alpha band power, which is predictive of individual detection performance in response to the implicit statistical properties of the task. |
Ali Borji; Daniel Parks; Laurent Itti Complementary effects of gaze direction and early saliency in guiding fixations during free viewing Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–32, 2014. @article{Borji2014a,Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. Median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to face region. 
Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both face and eyes of actors in the scene. Thus gaze direction should be considered in constructing more predictive visual attention models in the future. |
Daniel J. Schad; Sarah Risse; Timothy J. Slattery; Keith Rayner Word frequency in fast priming: Evidence for immediate cognitive control of eye movements during reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 390–414, 2014. @article{Schad2014,Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm (Sereno & Rayner, 1992) was previously used to study influences of related versus unrelated primes on the target word. Here, we used this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that with average prime intervals of 32 ms readers made longer single fixation durations on the target word in the low than in the high frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after visibility of the prime and for prime durations of 32 ms yields new time constraints for cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye movement control during reading. |
Khanh Vy Nguyen; Katherine S. Binder; Carolyn Nemier; Scott P. Ardoin Gotcha! Catching kids during mindless reading Journal Article In: Scientific Studies of Reading, vol. 18, no. 4, pp. 274–290, 2014. @article{Nguyen2014,The purpose of the current study was to examine the mindless reading behavior of children. Across two studies, 2nd-grade students read passages while their eye movements were monitored. Trained raters then identified mindless reading behaviors from the eye movement records. Several important findings emerged. We were able to reliably identify mindless reading behavior in children using eye-tracking methodology, which was characterized by shorter gaze durations and total time, more skipping, and in general a more erratic reading pattern than on-task reading behavior. On the other hand, on-task reading behavior was characterized by an increase in fixations and regressions, especially intraword regressions. Word frequency effects were attenuated during mindless reading. In addition, the children who engaged in mindless reading had weaker reading achievement profiles compared to children who read the entire passage. |
Maartje Velde; Antje S. Meyer; Agnieszka E. Konopka Message formulation and structural assembly: Describing "easy" and "hard" events with preferred and dispreferred syntactic structures Journal Article In: Journal of Memory and Language, vol. 71, no. 1, pp. 124–144, 2014. @article{Velde2014,When formulating simple sentences to describe pictured events, speakers look at the referents they are describing in the order of mention. Accounts of incrementality in sentence production rely heavily on analyses of this gaze-speech link. To identify systematic sources of variability in message and sentence formulation, two experiments evaluated differences in formulation for sentences describing "easy" and "hard" events (more codable and less codable events) with preferred and dispreferred structures (actives and passives). Experiment 1 employed a subliminal cuing manipulation and a cumulative priming manipulation to increase production of passive sentences. Experiment 2 examined the influence of event codability on formulation without a cuing manipulation. In both experiments, speakers showed an early preference for looking at the agent of the event when constructing active sentences. This preference was attenuated by event codability, suggesting that speakers were less likely to prioritize encoding of a single character at the outset of formulation in "easy" events than in "harder" events. Accessibility of the agent influenced formulation primarily when an event was "harder" to describe. Formulation of passive sentences in Experiment 1 also began with early fixations to the agent but changed with exposure to passive syntax: speakers were more likely to consider the patient as a suitable sentential starting point after cumulative priming. The results show that the message-to-language mapping in production can vary with the ease of encoding an event structure and of generating a suitable linguistic structure. |
Hossein Karimi; Kumiko Fukumura; Fernanda Ferreira; Martin J. Pickering The effect of noun phrase length on the form of referring expressions Journal Article In: Memory & Cognition, vol. 42, no. 6, pp. 993–1009, 2014. @article{Karimi2014,The length of a noun phrase has been shown to influence choices such as syntactic role assignment (e.g., whether the noun phrase is realized as the subject or the object). But does length also affect the choice between different forms of referring expressions? Three experiments investigated the effect of antecedent length on the choice between pronouns (e.g., he) and repeated nouns (e.g., the actor) using a sentence-continuation paradigm. Experiments 1 and 2 found an effect of antecedent length on written continuations: Participants used more pronouns (relative to repeated nouns) when the antecedent was longer than when it was shorter. Experiment 3 used a spoken continuation task and replicated the effect of antecedent length on the choice of referring expressions. Taken together, the results suggest that longer antecedents increase the likelihood of pronominal reference. The results support theories arguing that length enhances the accessibility of the associated entity through richer semantic encoding. |
Kathryn Louise McCabe; Rebbekah Josephine Atkinson; Gavin Cooper; Jessica Lauren Melville; Jill Harris; Ulrich Schall; Carmel M. Loughland; Renate Thienel; Linda E. Campbell In: Journal of Neurodevelopmental Disorders, vol. 6, no. 1, pp. 1–8, 2014. @article{McCabe2014,BACKGROUND: 22q11.2 deletion syndrome (22q11DS) is associated with a number of physical anomalies and neuropsychological deficits including impairments in executive and sensorimotor function. It is estimated that 25% of children with 22q11DS will develop schizophrenia and other psychotic disorders later in life. Evidence of genetic transmission of information processing deficits in schizophrenia suggests performance in 22q11DS individuals will enhance understanding of the neurobiological and genetic substrates associated with information processing. In this report, we examine information processing in 22q11DS using measures of startle eyeblink modification and antisaccade inhibition to explore similarities with schizophrenia and associations with neurocognitive performance. METHODS: Startle modification (passive and active tasks; 120- and 480-ms pre-pulse intervals) and antisaccade inhibition were measured in 25 individuals with genetically confirmed 22q11DS and 30 healthy control subjects. RESULTS: Individuals with 22q11DS exhibited increased antisaccade error as well as some evidence (trend-level effect) of impaired sensorimotor gating during the active condition, suggesting a dysfunction in controlled attentional processing, rather than a pre-attentive dysfunction using this paradigm. CONCLUSIONS: The findings from the present study show similarities with previous studies in clinical populations associated with 22q11DS such as schizophrenia that may indicate shared dysfunction of inhibition pathways in these groups. |
K. Ooms; Philippe De Maeyer; V. Fack Study of the attentive behavior of novice and expert map users using eye tracking Journal Article In: Cartography and Geographic Information Science, vol. 41, no. 1, pp. 37–54, 2014. @article{Ooms2014,The aim of this paper is to gain better understanding of the way map users read and interpret the visual stimuli presented to them and how this can be influenced. In particular, the difference between expert and novice map users is considered. In a user study, the participants studied four screen maps which had been manipulated to introduce deviations. The eye movements of 24 expert and novice participants were tracked, recorded, and analyzed (both visually and statistically) based on a grid of Areas of Interest. These visual analyses are essential for studying the spatial dimension of maps to identify problems in design. In this research, we used visualization of eye movement metrics (fixation count and duration) in a 2D and 3D grid and a statistical comparison of the grid cells. The results show that the users' eye movements clearly reflect the main elements on the map. The users' attentive behavior is influenced by deviating colors, as their attention is drawn to it. This could also influence the users' interpretation process. Both user groups encountered difficulties when trying to interpret and store map objects that were mirrored. Insights into how different types of map users read and interpret map content are essential in this fast-evolving era of digital cartographic products. |
Tobias Bormann; Sascha A. Wolfer; Wibke Hachmann; Wolf A. Lagrèze; Lars Konieczny An eye movement study on the role of the visual field defect in pure alexia Journal Article In: PLoS ONE, vol. 9, no. 7, pp. e100898, 2014. @article{Bormann2014,Pure alexia is a severe impairment of word reading which is usually accompanied by a right-sided visual field defect. Patients with pure alexia exhibit better preserved writing and a considerable word length effect, claimed to result from a serial letter processing strategy. Two experiments compared the eye movements of four patients with pure alexia to controls with simulated visual field defects (sVFD) when reading single words. Besides differences in response times and differential effects of word length on word reading in both groups, fixation durations and the occurrence of a serial, letter-by-letter fixation strategy were investigated. The analyses revealed quantitative and qualitative differences between pure alexic patients and unimpaired individuals reading with sVFD. The patients with pure alexia read words slower and exhibited more fixations. The serial, letter-by-letter fixation strategy was observed only in the patients but not in the controls with sVFD. It is argued that the VFD does not cause pure alexic reading. |
John M. Henderson; Steven G. Luke Stable individual differences in saccadic eye movements during reading, pseudoreading, scene viewing, and scene search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1390–1400, 2014. @article{Henderson2014,Mean fixation duration and mean saccade amplitude during active viewing tasks differ from person to person. Previous studies have shown that these individual differences tend to be stable across at least some tasks, suggesting that they may reflect underlying traits associated with individuals. However, whether these individual differences are also stable over time has not been established. The present study established stable individual differences in mean fixation duration and mean saccade amplitude across 4 viewing tasks, showed that the observed individual differences are stable over several days, and extended these results to standard deviations of fixation duration and saccade amplitude. The results have implications for theories of eye movement control and for using eye movement characteristics as individual difference measures. |
Michael B. McCamy; Stephen L. Macknik; Susana Martinez-Conde Different fixational eye movements mediate the prevention and the reversal of visual fading Journal Article In: Journal of Physiology, vol. 592, no. 19, pp. 4381–4394, 2014. @article{McCamy2014,Fixational eye movements (FEMs; including microsaccades, drift and tremor) are thought to improve visibility during fixation by thwarting neural adaptation to unchanging stimuli, but how the different FEM types influence this process is a matter of debate. Attempts to answer this question have been hampered by the failure to distinguish between the prevention of fading (where fading is blocked before it happens in the first place) and the reversal of fading (where vision is restored after fading has already occurred). Because fading during fixation is a detriment to clear vision, the prevention of fading, which avoids visual degradation before it happens, is a more desirable scenario than improving visibility after fading has occurred. Yet previous studies have not examined the role of FEMs in the prevention of fading, but have focused on visual restoration instead. Here we set out to determine the differential contributions and efficacies of microsaccades and drift to preventing fading in human vision. Our results indicate that both microsaccades and drift mediate the prevention of visual fading. We also found that drift is a potentially larger contributor to preventing fading than microsaccades, although microsaccades are more effective than drift. Microsaccades moreover prevented foveal and peripheral fading in an equivalent fashion, and their efficacy was independent of their size, number, and direction. Our data also suggest that faster drift may prevent fading better than slower drift. These findings may help to reconcile the long-standing controversy concerning the comparative roles of microsaccades and drift in visibility during fixation. |
Francisco M. Costela; Jorge Otero-Millan; Michael B. McCamy; Stephen L. Macknik; Xoana G. Troncoso; Ali Najafian Jazi; Sharon M. Crook; Susana Martinez-Conde Fixational eye movement correction of blink-induced gaze position errors Journal Article In: PLoS ONE, vol. 9, no. 10, pp. e110889, 2014. @article{Costela2014,Our eyes move continuously. Even when we attempt to fix our gaze, we produce "fixational" eye movements including microsaccades, drift and tremor. The potential role of microsaccades versus drifts in the control of eye position has been debated for decades and remains in question today. Here we set out to determine the corrective functions of microsaccades and drifts on gaze-position errors due to blinks in non-human primates (Macaca mulatta) and humans. Our results show that blinks contribute to the instability of gaze during fixation, and that microsaccades, but not drifts, correct fixation errors introduced by blinks. These findings provide new insights about eye position control during fixation, and indicate a more general role of microsaccades in fixation correction than thought previously. |
Ellen Gurman Bard; Robin L. Hill; Mary Ellen Foster; Manabu Arai Tuning accessibility of referring expressions in situated dialogue Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 8, pp. 928–949, 2014. @article{Bard2014,Accessibility theory associates more complex referring expressions with less accessible referents. Felicitous referring expressions should reflect accessibility from the addressee's perspective, which may be difficult for speakers to assess incrementally. If mechanisms shared by perception and production help interlocutors align internal representations, then dyads with different roles and different things to say should profit less from alignment. We examined introductory mentions of on-screen shapes within a joint task for effects of access to the addressee's attention, of players' actions and of speakers' roles. Only speakers' actions affected the form of referring expression and only different role dyads made egocentric use of actions hidden from listeners. Analysis of players' gaze around referring expressions confirmed this pattern; only same role dyads coordinated attention as the accessibility theory predicts. The results are discussed within a model distributing collaborative effort under the cons... |
Jean-Baptiste Bernard; Aurélie Calabrèse; Eric Castet Role of syllable segmentation processes in peripheral word recognition Journal Article In: Vision Research, vol. 105, pp. 226–232, 2014. @article{Bernard2014,Previous studies of foveal visual word recognition provide evidence for a low-level syllable decomposition mechanism occurring during the recognition of a word. We investigated whether such a decomposition mechanism also exists in peripheral word recognition. Single words were visually presented to subjects in the peripheral field using a 6° square gaze-contingent simulated central scotoma. In the first experiment, words were either unicolor or had their adjacent syllables segmented with two different colors (color/syllable congruent condition). Reaction times for correct word identification were measured for the two different conditions and for two different print sizes. Results show a significant decrease in reaction time for the color/syllable congruent condition compared with the unicolor condition. A second experiment suggests that this effect is specific to syllable decomposition and results from strategic control, presumably involving attentional factors, rather than stimulus-driven control. |
Dario Cazzoli; Chrystalina A. Antoniades; Christopher Kennard; Thomas Nyffeler; Claudio L. Bassetti; René M. Müri Eye movements discriminate fatigue due to chronotypical factors and time spent on task - A double dissociation Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e87146, 2014. @article{Cazzoli2014,Systematic differences in circadian rhythmicity are thought to be a substantial factor determining inter-individual differences in fatigue and cognitive performance. The synchronicity effect (when time of testing coincides with the respective circadian peak period) seems to play an important role. Eye movements have been shown to be a reliable indicator of fatigue due to sleep deprivation or time spent on cognitive tasks. However, eye movements have not been used so far to investigate the circadian synchronicity effect and the resulting differences in fatigue. The aim of the present study was to assess how different oculomotor parameters in a free visual exploration task are influenced by: a) fatigue due to chronotypical factors (being a 'morning type' or an 'evening type'); b) fatigue due to the time spent on task. Eighteen healthy participants performed a free visual exploration task of naturalistic pictures while their eye movements were recorded. The task was performed twice, once at their optimal and once at their non-optimal time of the day. Moreover, participants rated their subjective fatigue. The non-optimal time of the day triggered a significant and stable increase in the mean visual fixation duration during the free visual exploration task for both chronotypes. The increase in the mean visual fixation duration correlated with the difference in subjectively perceived fatigue at optimal and non-optimal times of the day. Conversely, the mean saccadic speed significantly and progressively decreased throughout the duration of the task, but was not influenced by the optimal or non-optimal time of the day for both chronotypes. 
The results suggest that different oculomotor parameters are discriminative for fatigue due to different sources. A decrease in saccadic speed seems to reflect fatigue due to time spent on task, whereas an increase in mean fixation duration a lack of synchronicity between chronotype and time of the day. |
Indra T. Mahayana; Chia-Lun Liu; Chi Fu Chang; Daisy L. Hung; Ovid J. L. Tzeng; Chi-Hung Juan; Neil G. Muggleton Far-space neglect in conjunction but not feature search following transcranial magnetic stimulation over right posterior parietal cortex Journal Article In: Journal of Neurophysiology, vol. 111, no. 4, pp. 705–714, 2014. @article{Mahayana2014,Near- and far-space coding in the human brain is a dynamic process. Areas in dorsal, as well as ventral visual association cortex, including right posterior parietal cortex (rPPC), right frontal eye field (rFEF), and right ventral occipital cortex (rVO), have been shown to be important in visuospatial processing, but the involvement of these areas when the information is in near or far space remains unclear. There is a need for investigations of these representations to help explain the pathophysiology of hemispatial neglect, and the role of near and far space is crucial to this. We used a conjunction visual search task using an elliptical array to investigate the effects of transcranial magnetic stimulation delivered over rFEF, rPPC, and rVO on the processing of targets in near and far space and at a range of horizontal eccentricities. As in previous studies, we found that rVO was involved in far-space search, and rFEF was involved regardless of the distance to the array. It was found that rPPC was involved in search only in far space, with a neglect-like effect when the target was located in the most eccentric locations. No effects were seen for any site for a feature search task. As the search arrays had higher predictability with respect to target location than is often the case, these data may form a basis for clarifying both the role of PPC in visual search and its contribution to neglect, as well as the importance of near and far space in these. |
Michael B. McCamy; Jorge Otero-Millan; Leandro Luigi Di Stasi; Stephen L. Macknik; Susana Martinez-Conde Highly informative natural scene regions increase microsaccade production during visual scanning Journal Article In: Journal of Neuroscience, vol. 34, no. 8, pp. 2956–2966, 2014. @article{McCamy2014a,Classical image statistics, such as contrast, entropy, and the correlation between central and nearby pixel intensities, are thought to guide ocular fixation targeting. However, these statistics are not necessarily task relevant and therefore do not provide a complete picture of the relationship between informativeness and ocular targeting. Moreover, it is not known whether either informativeness or classical image statistics affect microsaccade production; thus, the role of microsaccades in information acquisition is also unknown. The objective quantification of the informativeness of a scene region is a major challenge, because it can vary with both image features and the task of the viewer. Thus, previous definitions of informativeness suffered from subjectivity and inconsistency across studies. Here we developed an objective measure of informativeness based on fixation consistency across human observers, which accounts for both bottom-up and top-down influences in ocular targeting. We then analyzed fixations in more versus less informative image regions in relation to classical statistics. Observers generated more microsaccades on more informative than less informative image regions, and such regions also exhibited low redundancy in their classical statistics. Increased microsaccade production was not explained by increased fixation duration, suggesting that the visual system specifically uses microsaccades to heighten information acquisition from informative regions. |
Benjamin A. Parris Task conflict in the Stroop task: When Stroop interference decreases as Stroop facilitation increases in a low task conflict context Journal Article In: Frontiers in Psychology, vol. 5, pp. 1182, 2014. @article{Parris2014,In the present study participants completed two blocks of the Stroop task, one in which the response-stimulus interval (RSI) was 3500 ms and one in which RSI was 200 ms. It was expected that, in line with previous research, the shorter RSI would induce a low Task Conflict context by increasing focus on the color identification goal in the Stroop task and lead to a novel finding of an increase in facilitation and simultaneous decrease in interference. Such a finding would be problematic for models of Stroop effects that predict these indices of performance should be affected in tandem. A crossover interaction is reported supporting these predictions. As predicted, the shorter RSI resulted in incongruent and congruent trial reaction times (RTs) decreasing relative to a static neutral baseline condition; hence interference decreased as facilitation increased. An explanatory model (expanding on the work of Goldfarb and Henik, 2007) is presented that: (1) Shows how under certain conditions the predictions from single mechanism models hold true (i.e., when Task conflict is held constant); (2) Shows how it is possible that interference can be affected by an experimental manipulation that leaves facilitation apparently untouched; and (3) Predicts that facilitation cannot be independently affected by an experimental manipulation. |
Lihui Wang; Yunyan Duan; Jan Theeuwes; Xiaolin Zhou Reward breaks through the inhibitory region around attentional focus Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 2–2, 2014. @article{Wang2014e,It is well known that directing attention to a location in space enhances the processing efficiency of stimuli presented at that location. Research has also shown that around this area of enhanced processing, there is an inhibitory region within which processing of information is suppressed. In this study, we investigated whether a reward-associated stimulus can break through the inhibitory surround. A distractor that was previously associated with high or low reward was presented near the target with a variable distance between them. For low-reward distractors, only the distractor very close to the target caused interference to target processing; for high-reward distractors, both near and relatively far distractors caused interference, demonstrating that task-irrelevant reward-associated stimuli can capture attention even when presented within the inhibitory surround. |
Lester C. Loschky; Ryan V. Ringer; Aaron P. Johnson; Adam M. Larson; Mark B. Neider; Arthur F. Kramer Blur detection is unaffected by cognitive load Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 522–547, 2014. @article{Loschky2014,Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0-, 1-, 2-, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection.
The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, blur detection in real-world scene images appears to be unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task. |
Michaela Mahlberg; Kathy Conklin; Marie-Josée Bisson Reading Dickens's characters: Employing psycholinguistic methods to investigate the cognitive reality of patterns in texts Journal Article In: Language and Literature, vol. 23, no. 4, pp. 369–388, 2014. @article{Mahlberg2014,This article reports the findings of an empirical study that uses eye-tracking and follow-up interviews as methods to investigate how participants read body language clusters in novels by Charles Dickens. The study builds on previous corpus stylistic work that has identified patterns of body language presentation as techniques of characterisation in Dickens (Mahlberg, 2013). The article focuses on the reading of 'clusters', that is, repeated sequences of words. It is set in a research context that brings together observations from both corpus linguistics and psycholinguistics on the processing of repeated patterns. The results show that the body language clusters are read significantly faster than the overall sample extracts which suggests that the clusters are stored as units in the brain. This finding is complemented by the results of the follow-up questions which indicate that readers do not seem to refer to the clusters when talking about character information, although they are able to refer to clusters when biased prompts are used to elicit information. Beyond the specific results of the study, this article makes a contribution to the development of complementary methods in literary stylistics and it points to directions for further subclassifications of clusters that could not be achieved on the basis of corpus data alone. |
John M. Henderson; Jennifer Olejarczyk; Steven G. Luke; Joseph Schmidt Eye movement control during scene viewing: Immediate degradation and enhancement effects of spatial frequency filtering Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 486–502, 2014. @article{Henderson2014b,What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations. |
Maria Staudte; Matthew W. Crocker; Alexis Heloir; Michael Kipp The influence of speaker gaze on listener comprehension: Contrasting visual versus intentional accounts Journal Article In: Cognition, vol. 133, no. 1, pp. 317–328, 2014. @article{Staudte2014,Previous research has shown that listeners follow speaker gaze to mentioned objects in a shared environment to ground referring expressions, both for human and robot speakers. What is less clear is whether the benefit of speaker gaze is due to the inference of referential intentions (Staudte and Crocker, 2011) or simply the (reflexive) shifts in visual attention. That is, is gaze special in how it affects simultaneous utterance comprehension? In four eye-tracking studies we directly contrast speech-aligned speaker gaze of a virtual agent with a non-gaze visual cue (arrow). Our findings show that both cues similarly direct listeners' attention and that listeners can benefit in utterance comprehension from both cues. Only when they are similarly precise, however, does this equality extend to incongruent cueing sequences: that is, even when the cue sequence does not match the concurrent sequence of spoken referents, listeners can benefit from gaze as well as arrows. The results suggest that listeners are able to learn a counter-predictive mapping of both cues to the sequence of referents. Thus, gaze and arrows can in principle be applied with equal flexibility and efficiency during language comprehension. |
Cyril Vienne; Laurent Sorin; Laurent Blondé; Quan Huynh-Thu; Pascal Mamassian Effect of the accommodation-vergence conflict on vergence eye movements Journal Article In: Vision Research, vol. 100, pp. 124–133, 2014. @article{Vienne2014,With the broader use of stereoscopic displays, a flurry of research activity about the accommodation-vergence conflict has emerged to highlight the implications for the human visual system. In stereoscopic displays, the introduction of binocular disparities requires the eyes to make vergence movements. In this study, we examined vergence dynamics with regard to the conflict between the stimulus-to-accommodation and the stimulus-to-vergence. In a first experiment, we evaluated the immediate effect of the conflict on vergence responses by presenting stimuli with conflicting disparity and focus on a stereoscopic display (i.e. increasing the stereoscopic demand) or by presenting stimuli with matched disparity and focus using an arrangement of displays and a beam splitter (i.e. focus and disparity specifying the same locations). We found that the dynamics of vergence responses were slower overall in the first case due to the conflict between accommodation and vergence. In a second experiment, we examined the effect of a prolonged exposure to the accommodation-vergence conflict on vergence responses, in which participants judged whether an oscillating depth pattern was in front of or behind the fixation plane. An increase in peak velocity was observed, thereby suggesting that the vergence system has adapted to the stereoscopic demand. A slight increase in vergence latency was also observed, thus indicating a small decline of vergence performance. These findings offer a better understanding of and document how the vergence system behaves in stereoscopic displays. We describe what stimuli in stereo-movies might produce these oculomotor effects, and discuss potential application perspectives. |
Adele Diederich; Annette Schomburg; Marieke K. Vugt Fronto-central theta oscillations are related to oscillations in saccadic response times (SRT): An EEG and behavioral data analysis Journal Article In: PLoS ONE, vol. 9, no. 11, pp. e112974, 2014. @article{Diederich2014,The phase reset hypothesis states that the phase of an ongoing neural oscillation, reflecting periodic fluctuations in neural activity between states of high and low excitability, can be shifted by the occurrence of a sensory stimulus so that the phase value becomes highly constant across trials (Schroeder et al., 2008). From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing (Senkowski et al. 2008). We follow up on a study in which evidence of phase reset was found using a purely behavioral paradigm by also including EEG measures. In this paradigm, presentation of an auditory accessory stimulus was followed by a visual target with a stimulus-onset asynchrony (SOA) across a range from 0 to 404 ms in steps of 4 ms. This fine-grained stimulus presentation allowed us to do a spectral analysis on the mean SRT as a function of the SOA, which revealed distinct peak spectral components within a frequency range of 6 to 11 Hz with a modus of 7 Hz. The EEG analysis showed that the auditory stimulus caused a phase reset in 7-Hz brain oscillations in a widespread set of channels. Moreover, there was a significant difference in the average phase at which the visual target stimulus appeared between slow and fast SRT trials. This effect was evident in three different analyses, and occurred primarily in frontal and central electrodes. |
Claudio Lavín; René San Martín; Eduardo Rosales Jubal Pupil dilation signals uncertainty and surprise in a learning gambling task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 218, 2014. @article{Lavin2014,Pupil dilation under constant illumination is a physiological marker whose modulation is related to several cognitive functions involved in daily decision making. There is evidence for a role of pupil dilation change during decision-making tasks associated with uncertainty, reward-prediction errors and surprise. However, while some work suggests that pupil dilation is mainly modulated by reward predictions, other work points out that this marker is related to uncertainty signaling and surprise. Supporting the latter hypothesis, the neural substrate of this marker is related to noradrenaline (NA) activity, which has also been related to uncertainty signaling. In this work we aimed to test whether pupil dilation is a marker for uncertainty and surprise in a learning task. We recorded pupil dilation responses in 10 participants performing the Iowa Gambling Task (IGT), a decision-making task that requires learning and constant monitoring of outcome feedback, which are important variables within the traditional study of human decision making. Results showed that pupil dilation changes were modulated by learned uncertainty and surprise regardless of feedback magnitudes. Interestingly, greater pupil dilation changes were found during positive feedback (PF) presentation when there was lower uncertainty about a future negative feedback (NF); and by surprise during NF presentation. These results support the hypothesis that pupil dilation is a marker of learned uncertainty, and may be used as a marker of NA activity facing unfamiliar situations in humans. |
Fatema F. Ghasia; Deepak Gulati; Edward L. Westbrook; Aasef G. Shaikh Viewing condition dependence of the gaze-evoked nystagmus in Arnold Chiari type 1 malformation Journal Article In: Journal of the Neurological Sciences, vol. 339, no. 1-2, pp. 134–139, 2014. @article{Ghasia2014,Saccadic eye movements rapidly shift gaze to the target of interest. Once the eyes reach a given target, the brainstem ocular motor integrator utilizes feedback from various sources to assure steady gaze. One such source is the cerebellum, whose lesions can impair neural integration, leading to gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by drifts moving the eyes away from the target and a null position where the drifts are absent. The extent of impairment in the neural integration for two opposite eccentricities might determine the location of the null position. The eye-in-orbit position might also determine the location of the null. We report this phenomenon in a patient with Arnold Chiari type 1 malformation who had intermittent esotropia and horizontal gaze-evoked nystagmus with a shift in the null position. During binocular viewing, the null was shifted to the right. During monocular viewing, when the eye under cover drifted nasally (secondary to the esotropia), the null of the gaze-evoked nystagmus reorganized toward the center. We speculate that the output of the neural integrator is altered by the conflicting eye-in-orbit positions of the two eyes secondary to the strabismus. This could possibly explain the reorganization of the location of the null position. |
Yuko Hara; Justin L. Gardner Encoding of graded changes in spatial specificity of prior cues in human visual cortex Journal Article In: Journal of Neurophysiology, vol. 112, no. 11, pp. 2834–2849, 2014. @article{Hara2014,Prior information about the relevance of spatial locations can vary in specificity; a single location, a subset of locations, or all locations may be of potential importance. Using a contrast-discrimination task with four possible targets, we asked whether performance benefits are graded with the spatial specificity of a prior cue and whether we could quantitatively account for behavioral performance with cortical activity changes measured by blood oxygenation level-dependent (BOLD) imaging. Thus we changed the prior probability that each location contained the target from 100 to 50 to 25% by cueing in advance 1, 2, or 4 of the possible locations. We found that behavioral performance (discrimination thresholds) improved in a graded fashion with spatial specificity. However, concurrently measured cortical responses from retinotopically defined visual areas were not strictly graded; response magnitude decreased when all 4 locations were cued (25% prior probability) relative to the 100 and 50% prior probability conditions, but no significant difference in response magnitude was found between the 100 and 50% prior probability conditions for either cued or uncued locations. Also, although cueing locations increased responses relative to noncueing, this cue sensitivity was not graded with prior probability. Furthermore, contrast sensitivity of cortical responses, which could improve contrast discrimination performance, was not graded. Instead, an efficient-selection model showed that even if sensory responses do not strictly scale with prior probability, selection of sensory responses by weighting larger responses more can result in graded behavioral performance benefits with increasing spatial specificity of prior information. |
Aleksandra Pieczykolan; Lynn Huestegge Oculomotor dominance in multitasking: Mechanisms of conflict resolution in cross-modal action Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–17, 2014. @article{Pieczykolan2014,In daily life, eye movement control usually occurs in the context of concurrent action demands in other effector domains. However, little research has focused on understanding how such cross-modal action demands are coordinated, especially when conflicting information needs to be processed conjunctly in different action modalities. In two experiments, we address this issue by studying vocal responses in the context of spatially conflicting eye movements (Experiment 1) and in the context of spatially conflicting manual actions (Experiment 2, under controlled eye fixation conditions). Crucially, a comparison across experiments allows us to assess resource scheduling priorities among the three effector systems by comparing the same (vocal) response demands in the context of eye movements in contrast to manual responses. The results indicate that in situations involving response conflict, eye movements are prioritized over concurrent action demands in another effector system. This oculomotor dominance effect corroborates previous observations in the context of multiple action demands without spatial response conflict. Furthermore, and in line with recent theoretical accounts of parallel multiple action control, resource scheduling patterns appear to be flexibly adjustable based on the temporal proximity of the two actions that need to be performed. |
Nicole C. White; Connor Reid; Timothy N. Welsh Responses of the human motor system to observing actions across species: A transcranial magnetic stimulation study Journal Article In: Brain and Cognition, vol. 92, pp. 11–18, 2014. @article{White2014,Ample evidence suggests that the role of the mirror neuron system (MNS) in monkeys is to represent the meaning of actions. The MNS becomes active in monkeys during execution, observation, and auditory experience of meaningful, object-oriented actions, suggesting that these cells represent the same action based on a variety of cues. The present study sought to determine whether the human motor system, part of the putative human MNS, similarly represents and reflects the meaning of actions rather than simply the mechanics of the actions. To this end, transcranial magnetic stimulation (TMS) of primary motor cortex was used to generate motor-evoked potentials (MEPs) from muscles involved in grasping while participants viewed object-oriented grasping actions performed by either a human, an elephant, a rat, or a body-less robotic arm. The analysis of MEP amplitudes suggested that activity in primary motor cortex during action observation was greatest during observation of the grasping actions of the rat and elephant, and smallest for the human and robotic arm. Based on these data, we conclude that the human action observation system can represent actions executed by non-human animals and shows sensitivity to species-specific differences in action mechanics. |
Manabu Arai; Reiko Mazuka The development of Japanese passive syntax as indexed by structural priming in comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 1, pp. 60–78, 2014. @article{Arai2014,A number of previous studies reported a phenomenon of syntactic priming with young children as evidence for cognitive representations required for processing syntactic structures. However, it remains unclear how syntactic priming reflects children's grammatical competence. The current study investigated structural priming of the Japanese passive structure with 5- and 6-year-old children in a visual-world setting. Our results showed a priming effect as anticipatory eye movements to an upcoming referent in these children but the effect was significantly stronger in magnitude in 6-year-olds than in 5-year-olds. Consistently, the responses to comprehension questions revealed that 6-year-olds produced a greater number of correct answers and more answers using the passive structure than 5-year-olds. We also tested adult participants who showed even stronger priming than the children. The results together revealed that language users with the greater linguistic competence with the passives exhibited stronger priming, demonstrating a tight relationship between the effect of priming and the development of grammatical competence. Furthermore, we found that the magnitude of the priming effect decreased over time. We interpret these results in the light of an error-based learning account. Our results also provided evidence for prehead as well as head-independent priming. |
Kevin C. Dieter; Bo Hu; David C. Knill; Randolph Blake; Duje Tadin Kinesthesis can make an invisible hand visible Journal Article In: Psychological Science, vol. 25, no. 1, pp. 66–75, 2014. @article{Dieter2014,Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown whether actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one's own hand in front of one's covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that grapheme-color synesthetes experience substantially stronger kinesthesis-induced visual sensations than nonsynesthetes do. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants' ability to smoothly track self-generated hand movements with their eyes in darkness, which indicates that these sensations function like typical retinally driven visual sensations. Evidently, even in the complete absence of external visual input, the brain predicts visual consequences of actions. |
Ashley Farris-Trimble; Bob McMurray; Nicole Cigrand; J. Bruce Tomblin The process of spoken word recognition in the face of signal degradation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 308–327, 2014. @article{FarrisTrimble2014,Though much is known about how words are recognized, little research has focused on how a degraded signal affects the fine-grained temporal aspects of real-time word recognition. The perception of degraded speech was examined in two populations with the goal of describing the time course of word recognition and lexical competition. Thirty-three postlingually deafened cochlear implant (CI) users and 57 normal hearing (NH) adults (16 in a CI-simulation condition) participated in a visual world paradigm eye-tracking task in which their fixations to a set of phonologically related items were monitored as they heard one item being named. Each degraded-speech group was compared with a set of age-matched NH participants listening to unfiltered speech. CI users and the simulation group showed a delay in activation relative to the NH listeners, and there was weak evidence that the CI users showed differences in the degree of peak and late competitor activation. In general, though, the degraded-speech groups behaved statistically similarly with respect to activation levels. |
Sabine Born; Isaline Mottet; Dirk Kerzel Presaccadic perceptual facilitation effects depend on saccade execution: Evidence from the stop-signal paradigm Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–10, 2014. @article{Born2014,Prior to the onset of a saccadic eye movement, perception is facilitated at the saccade target location. This has been attributed to a shift of attention. To test whether presaccadic attention shifts are strictly dependent on saccade execution, we examined whether they are found when observers are required to cancel the eye movement. We combined a dual task with the stop-signal paradigm: Subjects made saccades as quickly as possible to a cued location while discriminating a stimulus either at the saccade target or at the opposite location. A stop signal was presented on a subset of trials, asking subjects to cancel the eye movement. The delay of the stop signal was adjusted to yield successful inhibition of the saccade in 50% of trials. Results show similar perceptual facilitation at the saccade target for saccades with or without a stop signal, suggesting that presaccadic attention shifts are obligatory for all saccades. However, there was facilitation only when saccades were actually performed, not when observers successfully inhibited them. Thus, preparing an eye movement without subsequently executing it does not result in an attention shift. The results speak to a difference between saccade preparation and saccade programming. In light of the strong dependence on saccade execution, we discuss the functional role and causes of presaccadic attention shifts. |
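The 50% inhibition rate described in the Born et al. abstract is conventionally obtained by adjusting the stop-signal delay (SSD) with a one-up/one-down tracking procedure. The sketch below is a minimal illustration of that standard procedure, not code from the paper; the function name, step size, and bounds are assumptions chosen for the example.

```python
def update_ssd(ssd, saccade_inhibited, step=50, floor=0, ceiling=500):
    """One-up/one-down staircase on stop-signal delay (SSD), in ms.

    After a successfully inhibited saccade the next trial is made harder
    by lengthening the SSD; after a failed stop it is made easier by
    shortening it. This tracking rule converges on roughly 50%
    successful inhibition. Values here are illustrative, not from the study.
    """
    if saccade_inhibited:
        ssd += step   # harder: stop signal arrives later
    else:
        ssd -= step   # easier: stop signal arrives sooner
    return max(floor, min(ceiling, ssd))

# Example: a successful stop at SSD = 150 ms raises the next SSD to 200 ms
next_ssd = update_ssd(150, saccade_inhibited=True)  # -> 200
```

The same rule, with different step sizes and bounds, underlies most stop-signal experiments that target a 50% inhibition rate.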
Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex Simulated disparity and peripheral blur interact during binocular fusion Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–14, 2014. @article{Maiello2014,We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. |
Fatema F. Ghasia; Aasef G. Shaikh Source of high-frequency oscillations in oblique saccade trajectory Journal Article In: Experimental Eye Research, vol. 121, pp. 5–10, 2014. @article{Ghasia2014a,Oblique saccades, among the most common eye movements, feature rapid velocity and precise amplitude, but a curved trajectory that varies from trial to trial. In addition to curvature and inter-trial variability, the oblique saccade trajectory also features high-frequency oscillations. A number of studies proposed the physiological basis of the curvature and inter-trial variability of the oblique saccade trajectory, but kinematic characteristics of high-frequency oscillations are yet to be examined. We measured such oscillations and compared their properties with orthogonal pure horizontal and pure vertical oscillations generated during pure vertical and pure horizontal saccades, respectively. We found that the frequency of oscillations during oblique saccades ranged between 15 and 40 Hz, consistent with the frequency of orthogonal saccadic oscillations during pure horizontal or pure vertical saccades. We also found that the amplitude of oblique saccade oscillations was larger than pure horizontal and pure vertical saccadic oscillations. These results suggest that the superimposed high-frequency sinusoidal oscillations upon the oblique saccade trajectory represent reverberations of the disinhibited circuit of reciprocally innervated horizontal and vertical burst generators. |
Rebecca P. Lawson; Ben Seymour; Eleanor Loh; Antoine Lutti; Raymond J. Dolan; Peter Dayan; Nikolaus Weiskopf; Jonathan P. Roiser The habenula encodes negative motivational value associated with primary punishment in humans Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 32, pp. 11858–11863, 2014. @article{Lawson2014,Learning what to approach, and what to avoid, involves assigning value to environmental cues that predict positive and negative events. Studies in animals indicate that the lateral habenula encodes the previously learned negative motivational value of stimuli. However, involvement of the habenula in dynamic trial-by-trial aversive learning has not been assessed, and the functional role of this structure in humans remains poorly characterized, in part, due to its small size. Using high-resolution functional neuroimaging and computational modeling of reinforcement learning, we demonstrate positive habenula responses to the dynamically changing values of cues signaling painful electric shocks, which predict behavioral suppression of responses to those cues across individuals. By contrast, negative habenula responses to monetary reward cue values predict behavioral invigoration. Our findings show that the habenula plays a key role in an online aversive learning system and in generating associated motivated behavior in humans. |
Amy Rouinfar; Elise Agra; Adam M. Larson; N. Sanjay Rebello; Lester C. Loschky Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams Journal Article In: Frontiers in Psychology, vol. 5, pp. 1094, 2014. @article{Rouinfar2014,This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving.
Instead, factors that lead a solver to overcome an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. |
Keir X. X. Yong; Timothy J. Shakespeare; Dave Cash; Susie M. D. Henley; Jennifer M. Nicholas; Gerard R. Ridgway; Hannah L. Golden; Elizabeth K. Warrington; Amelia M. Carton; Diego Kaski; Jonathan M. Schott; Jason D. Warren; Sebastian J. Crutch Prominent effects and neural correlates of visual crowding in a neurodegenerative disease population Journal Article In: Brain, vol. 137, no. 12, pp. 3284–3299, 2014. @article{Yong2014,Crowding is a breakdown in the ability to identify objects in clutter, and is a major constraint on object recognition. Crowding particularly impairs object perception in peripheral, amblyopic and possibly developing vision. Here we argue that crowding is also a critical factor limiting object perception in central vision of individuals with neurodegeneration of the occipital cortices. In the current study, individuals with posterior cortical atrophy (n=26), typical Alzheimer's disease (n=17) and healthy control subjects (n=14) completed centrally-presented tests of letter identification under six different flanking conditions (unflanked, and with letter, shape, number, same polarity and reverse polarity flankers) with two different target-flanker spacings (condensed, spaced). Patients with posterior cortical atrophy were significantly less accurate and slower to identify targets in the condensed than spaced condition even when the target letters were surrounded by flankers of a different category. Importantly, this spacing effect was observed for same, but not reverse, polarity flankers. The difference in accuracy between spaced and condensed stimuli was significantly associated with lower grey matter volume in the right collateral sulcus, in a region lying between the fusiform and lingual gyri. 
Detailed error analysis also revealed that similarity between the error response and the averaged target and flanker stimuli (but not individual target or flanker stimuli) was a significant predictor of error rate, more consistent with averaging than substitution accounts of crowding. Our findings suggest that crowding in posterior cortical atrophy can be regarded as a pre-attentive process that uses averaging to regularize the pathologically noisy representation of letter feature position in central vision. These results also help to clarify the cortical localization of feature integration components of crowding. More broadly, we suggest that posterior cortical atrophy provides a neurodegenerative disease model for exploring the basis of crowding. These data have significant implications for patients with, or who will go on to develop, dementia-related visual impairment, in whom acquired excessive crowding likely contributes to deficits in word, object, face and scene perception. |
Paul G. Middlebrooks; Jeffrey D. Schall Response inhibition during perceptual decision making in humans and macaques Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 2, pp. 353–366, 2014. @article{Middlebrooks2014,Response inhibition in stop signal tasks has been explained as the outcome of a race between GO and STOP processes (e.g., Logan, 1981). Response choice in two-alternative perceptual categorization tasks has been explained as the outcome of an accumulation of evidence for the alternative responses. To begin unifying these two powerful investigation frameworks, we obtained data from humans and macaque monkeys performing a stop signal task with responses guided by perceptual categorization and variable degrees of difficulty, ranging from low to high accuracy. Comparable results across species reinforced the validity of this animal model. Response times and errors increased with categorization difficulty. The probability of failing to inhibit responses on stop signal trials increased with stop signal delay, and the response times for failed stop signal trials were shorter than those for trials with no stop signal. Thus, the Logan race model could be applied to estimate the duration of the stopping process. We found that the duration of the STOP process did not vary across a wide range of discrimination accuracies. This is consistent with the functional, and possibly mechanistic, independence of choice and inhibition mechanisms. |
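The Middlebrooks and Schall abstract notes that the Logan race model "could be applied to estimate the duration of the stopping process" (the stop-signal reaction time, SSRT). A common way to do this is the integration method; the sketch below is a generic illustration of that method, not the authors' analysis code, and the function and variable names are assumptions made for the example.

```python
import statistics

def estimate_ssrt(go_rts, ssds, p_respond=0.5):
    """Estimate stop-signal reaction time (SSRT) via the integration method.

    Under the Logan race model, the finishing time of the STOP process is
    approximated by the p_respond-th quantile of the no-stop-signal RT
    distribution minus the mean stop-signal delay (SSD). p_respond is the
    observed probability of responding on stop-signal trials (~0.5 when a
    tracking procedure is used). All values are illustrative, in ms.
    """
    rts = sorted(go_rts)
    # RT at the p_respond quantile of the GO distribution
    idx = min(len(rts) - 1, int(p_respond * len(rts)))
    nth_rt = rts[idx]
    return nth_rt - statistics.mean(ssds)

go_rts = [380, 400, 410, 420, 440, 450, 460, 480, 500, 520]
ssds = [150, 200, 200, 250]
ssrt = estimate_ssrt(go_rts, ssds)  # quantile RT 450 minus mean SSD 200 -> 250
```

The species comparison in the paper rests on this kind of estimate being valid in both humans and macaques, which the comparable behavioral patterns across species support.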
Anthony S. Barnhart; Stephen D. Goldinger Blinded by magic: Eye-movements reveal the misdirection of attention Journal Article In: Frontiers in Psychology, vol. 5, pp. 1461, 2014. @article{Barnhart2014,Recent studies (e.g., Kuhn and Tatler, 2005) have suggested that magic tricks can provide a powerful and compelling domain for the study of attention and perception. In particular, many stage illusions involve attentional misdirection, guiding the observer's gaze to a salient object or event, while another critical action, such as sleight of hand, is taking place. Even if the critical action takes place in full view, people typically fail to see it due to inattentional blindness (IB). In an eye-tracking experiment, participants watched videos of a new magic trick, wherein a coin placed beneath a napkin disappears, reappearing under a different napkin. Appropriately deployed attention would allow participants to detect the "secret" event that underlies the illusion (a moving coin), as it happens in full view and is visible for approximately 550 ms. Nevertheless, we observed high rates of IB. Unlike prior research, eye-movements during the critical event showed different patterns for participants, depending upon whether they saw the moving coin. The results also showed that when participants watched several "practice" videos without any moving coin, they became far more likely to detect the coin in the critical trial. Taken together, the findings are consistent with perceptual load theory (Lavie and Tsal, 1994). |
Wing Yee Chow; Shevaun Lewis; Colin Phillips Immediate sensitivity to structural constraints in pronoun resolution Journal Article In: Frontiers in Psychology, vol. 5, pp. 630, 2014. @article{Chow2014,Real-time interpretation of pronouns is sometimes sensitive to the presence of grammatically-illicit antecedents and sometimes not. This occasional sensitivity has been taken as evidence that structural constraints do not immediately impact the initial antecedent retrieval for pronoun interpretation. We argue that it is important to separate effects that reflect the initial antecedent retrieval process from those that reflect later processes. We present results from five reading comprehension experiments. Both the current results and previous evidence support the hypothesis that agreement features and structural constraints immediately constrain the antecedent retrieval process for pronoun interpretation. Occasional sensitivity to grammatically-illicit antecedents may be due to repair processes triggered when the initial retrieval fails to return a grammatical antecedent. |
Koji Kashihara; Kazuo Okanoya; Nobuyuki Kawai Emotional attention modulates microsaccadic rate and direction Journal Article In: Psychological Research, vol. 78, no. 2, pp. 166–179, 2014. @article{Kashihara2014,Involuntary microsaccades and voluntary saccades reflect human brain activities during attention and cognitive tasks. Our eye movements can also betray our emotional state. However, the effects of attention to emotion on microsaccadic activity remain unknown. The present study was conducted in healthy volunteers to investigate the effects of devoting attention to exogenous emotional stimuli on microsaccadic response, with change in pupil size as an index of sympathetic nervous system activity. Event-related responses to unpleasant images significantly inhibited the rate of microsaccade appearance and altered pupil size (Experiment 1). Additionally, microsaccadic responses to covert orienting of attention to emotional stimuli appeared significantly in the anti-direction to a target, with a fast reaction time (Experiment 2). Therefore, we concluded that attentional shifts induced by exogenous emotional stimuli can modulate microsaccadic activities. Future studies of the interaction between miniature eye movements and emotion may be beneficial in the assessment of pathophysiological responses in mental disorders. |
Jens Kremkow; Jianzhong Jin; Stanley J. Komban; Yushi Wang; Reza Lashgari; Xiaobing Li; Michael Jansen; Qasim Zaidi; Jose-Manuel Alonso Neuronal nonlinearity explains greater visual spatial resolution for darks than lights Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 8, pp. 3170–3175, 2014. @article{Kremkow2014,Astronomers and physicists noticed centuries ago that visual spatial resolution is higher for dark than light stimuli, but the neuronal mechanisms for this perceptual asymmetry remain unknown. Here we demonstrate that the asymmetry is caused by a neuronal nonlinearity in the early visual pathway. We show that neurons driven by darks (OFF neurons) increase their responses roughly linearly with luminance decrements, independent of the background luminance. However, neurons driven by lights (ON neurons) saturate their responses with small increases in luminance and need bright backgrounds to approach the linearity of OFF neurons. We show that, as a consequence of this difference in linearity, receptive fields are larger in ON than OFF thalamic neurons, and cortical neurons are more strongly driven by darks than lights at low spatial frequencies. This ON/OFF asymmetry in linearity could be demonstrated in the visual cortex of cats, monkeys, and humans and in the cat visual thalamus. Furthermore, in the cat visual thalamus, we show that the neuronal nonlinearity is present at the ON receptive field center of ON-center neurons and ON receptive field surround of OFF-center neurons, suggesting an origin at the level of the photoreceptor. These results demonstrate a fundamental difference in visual processing between ON and OFF channels and reveal a competitive advantage for OFF neurons over ON neurons at low spatial frequencies, which could be important during cortical development when retinal images are blurred by immature optics in infant eyes. |
Stephen Layfield; Wesley Burge; William G. Mitchell; Lesley A. Ross; Christine Denning; Frank Amthor; Kristina M. Visscher The effect of speed of processing training on microsaccade amplitude Journal Article In: PLoS ONE, vol. 9, no. 9, pp. e107808, 2014. @article{Layfield2014,Older adults experience cognitive deficits that can lead to driving errors and a loss of mobility. Fortunately, some of these deficits can be ameliorated with targeted interventions which improve the speed and accuracy of simultaneous attention to a central and a peripheral stimulus called Speed of Processing training. To date, the mechanisms behind this effective training are unknown. We hypothesized that one potential mechanism underlying this training is a change in distribution of eye movements of different amplitudes. Microsaccades are small amplitude eye movements made when fixating on a stimulus, and are thought to counteract the "visual fading" that occurs when static stimuli are presented. Due to retinal anatomy, larger microsaccadic eye movements are needed to move a peripheral stimulus between receptive fields and counteract visual fading. Alternatively, larger microsaccades may decrease performance due to neural suppression. Because larger microsaccades could aid or hinder peripheral vision, we examine the distribution of microsaccades during stimulus presentation. Our results indicate that there is no statistically significant change in the proportion of large amplitude microsaccades during a Useful Field of View-like task after training in a small sample of older adults. Speed of Processing training does not appear to result in changes in microsaccade amplitude, suggesting that the mechanism underlying Speed of Processing training is unlikely to rely on microsaccades. |
Sébastien Miellet; Roberto Caldara; Christopher Gillberg; Monika Raju; Helen Minnis Disinhibited reactive attachment disorder symptoms impair social judgements from faces Journal Article In: Psychiatry Research, vol. 215, no. 3, pp. 747–752, 2014. @article{Miellet2014,Typically developing adults and children can rapidly reach consensus regarding the trustworthiness of unfamiliar faces. Maltreated children can have problems with trusting others, yet those with the disinhibited form of reactive attachment disorder (dRAD) can be indiscriminately friendly. Whether children with dRAD symptoms appraise and conform to typical judgements about trustworthiness of faces is still unknown. We recorded eye movements of 10 maltreated dRAD children and 10 age and gender matched typically developing control children while they made social judgements from faces. Children were presented with a series of pairs of faces previously judged by adults to have high or low attractiveness or trustworthiness ratings. Typically developing children reached a consensus regarding which faces were the most trustworthy and attractive. There was less agreement among the children with dRAD symptoms. Judgments from the typically developing children showed a strong correlation between the attractiveness and trustworthiness tasks. This was not the case for the dRAD group, who showed less agreement and no significant correlation between trustworthiness and attractiveness judgments. Finally, both groups of children sampled the eye region to perform social judgments. Our data offer a unique insight into children with dRAD symptoms, providing novel and important knowledge for their rehabilitation. |
Lutz Schega; Daniel Hamacher; Sandra Erfuth; Wolfgang Behrens-Baumann; Juliane Reupsch; Michael B. Hoffmann Differential effects of head-mounted displays on visual performance Journal Article In: Ergonomics, vol. 57, no. 1, pp. 1–11, 2014. @article{Schega2014,Head-mounted displays (HMDs) virtually augment the visual world to aid visual task completion. Three types of HMDs were compared [look around (LA); optical see-through with organic light emitting diodes and virtual retinal display] to determine whether LA, leaving the observer functionally monocular, is inferior. Response times and error rates were determined for a combined visual search and Go-NoGo task. The costs of switching between displays were assessed separately. Finally, HMD effects on basic visual functions were quantified. Effects of HMDs on the visual search and Go-NoGo tasks were small, but display-switching costs for the Go-NoGo task were pronounced for LA. Basic visual functions were most affected for LA (reduced visual acuity and visual field sensitivity, inaccurate vergence movements and absent stereo-vision). LA involved comparatively high switching costs for the Go-NoGo task, which might indicate reduced processing of external control cues. Reduced basic visual functions are a likely cause of this effect. |
Barrie P. Klein; Ben M. Harvey; Serge O. Dumoulin Attraction of position preference by spatial attention throughout human visual cortex Journal Article In: Neuron, vol. 84, no. 1, pp. 227–237, 2014. @article{Klein2014,Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. |
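The attention-field model Klein et al. describe can be sketched as the product of two Gaussians: the pRF without the influence of attention and a Gaussian attention field. The product is again Gaussian, and its center is attracted further toward the attended location the larger the pRF is, reproducing the increasing effect up the visual hierarchy. This is a 1-D simplification of the authors' 2-D model, and all parameter values below are hypothetical.

```python
import numpy as np

def attracted_prf(mu_prf, sigma_prf, mu_att, sigma_att):
    """Preferred position and size of a pRF under spatial attention.

    Multiplicative attention-field model: the measured pRF is the
    product of two 1-D Gaussians, the pRF without attention
    (mu_prf, sigma_prf) and the attention field (mu_att, sigma_att).
    The product of two Gaussians is Gaussian with:
      mu    = (mu_prf*sigma_att^2 + mu_att*sigma_prf^2) / (sigma_prf^2 + sigma_att^2)
      sigma = sqrt(sigma_prf^2 * sigma_att^2 / (sigma_prf^2 + sigma_att^2))
    """
    var_p, var_a = sigma_prf**2, sigma_att**2
    mu = (mu_prf * var_a + mu_att * var_p) / (var_p + var_a)
    sigma = np.sqrt(var_p * var_a / (var_p + var_a))
    return mu, sigma

# Same attention field (centered at 0 deg, sigma 3 deg) applied to pRFs
# at 6 deg eccentricity: a small early-visual pRF shifts little, while a
# large higher-order pRF is attracted much further toward the attended site.
for sigma_prf in (1.0, 5.0):  # hypothetical pRF sizes in deg
    mu, _ = attracted_prf(6.0, sigma_prf, 0.0, 3.0)
    print(f"pRF sigma {sigma_prf} deg -> preferred position {mu:.2f} deg")
```

The key design point matches the paper's conclusion: with a single shared attention field, differences in pRF size alone produce stronger position attraction higher in the hierarchy.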
Paul Roux; Baudoin Forgeot d'Arc; Christine Passerieux; Franck Ramus Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation? Journal Article In: Schizophrenia Research, vol. 157, no. 1-3, pp. 78–83, 2014. @article{Roux2014,Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less at others' faces and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was no longer significant. Patients' deficit on this visual ToM paradigm is thus entirely explained by a decreased visual attention toward gaze. |
Joanna Pilarczyk; Michał Kuniecki Emotional content of an image attracts attention more than visually salient features in various signal-to-noise ratio conditions Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–19, 2014. @article{Pilarczyk2014,Emotional images are processed in a prioritized manner, attracting attention almost immediately. In the present study we used eye tracking to reveal what type of features within neutral, positive, and negative images attract early visual attention: semantics, visual saliency, or their interaction. Semantic regions of interest were selected by observers, while visual saliency was determined using the Graph-Based Visual Saliency model. Images were transformed by adding pink noise in several proportions to be presented in a sequence of increasing and decreasing clarity. Locations of the first two fixations were analyzed. The results showed dominance of semantic features over visual saliency in attracting attention. This dominance was linearly related to the signal-to-noise ratio. Semantic regions were fixated more often in emotional images than in neutral ones, if signal-to-noise ratio was high enough to allow participants to comprehend the gist of a scene. Visual saliency on its own did not attract attention above chance, even in the case of pure noise images. Regions both visually salient and semantically relevant attracted a similar amount of fixation compared to semantic regions alone, or even more in the case of neutral pictures. Results provide evidence for fast and robust detection of semantically relevant features. |
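The stimulus manipulation Pilarczyk and Kuniecki describe — mixing pink noise into an image in several proportions to vary signal-to-noise ratio — can be sketched as follows. This is a generic illustration, not the authors' stimulus-generation code; the random image stand-in, mixing proportion, and normalization choices are assumptions.

```python
import numpy as np

def pink_noise(shape, rng):
    """2-D pink (1/f) noise by spectrally shaping white Gaussian noise."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # avoid division by zero at the DC component
    spectrum = np.fft.fft2(rng.standard_normal(shape)) / f
    noise = np.real(np.fft.ifft2(spectrum))
    return (noise - noise.mean()) / noise.std()  # standardize to unit variance

def mix(image, noise, signal_prop):
    """Linear mix of a (normalized) image and noise at a given signal proportion."""
    return signal_prop * image + (1.0 - signal_prop) * noise

rng = np.random.default_rng(1)
img = rng.standard_normal((128, 128))   # stand-in for a normalized scene image
stim = mix(img, pink_noise(img.shape, rng), signal_prop=0.3)
```

Presenting `mix(...)` at increasing then decreasing `signal_prop` values would reproduce the paper's sequence of increasing and decreasing clarity.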
Alexandra Fayel; Sylvie Chokron; Céline Cavézian; Dorine Vergilino-Perez; Christelle Lemoine; Karine Doré-Mazars Characteristics of contralesional and ipsilesional saccades in hemianopic patients Journal Article In: Experimental Brain Research, vol. 232, no. 3, pp. 903–917, 2014. @article{Fayel2014,In order to further our understanding of action-blindsight, four hemianopic patients suffering from visual field loss contralateral to a unilateral occipital lesion were compared to six healthy controls during a double task of verbally reported target detection and saccadic responses toward the target. Three oculomotor tasks were used: a fixation task (i.e., without saccade) and two saccade tasks (eliciting reflexive and voluntary saccades, using step and overlap 600 ms paradigms, respectively), in separate sessions. The visual target was briefly presented at two different eccentricities (5° and 8°), in the right or left visual hemifield. Blank trials were interleaved with target trials, and signal detection theory was applied. Despite their hemifield defect, hemianopic patients retained the ability to direct a saccade toward their contralesional hemifield, whereas verbal detection reports were at chance level. However, saccade parameters (latency and amplitude) were altered by the defect. Saccades to the contralesional hemifield exhibited longer latencies and shorter amplitudes compared to those of the healthy group, whereas only the latencies of reflexive saccades to the ipsilesional hemifield were altered. Furthermore, healthy participants showed the expected latency difference between reflexive and voluntary saccades, with the latter longer than the former. This difference was not found in three out of four patients in either hemifield. Our results show action-blindsight for saccades, but also show that unilateral occipital lesions have effects on saccade generation in both visual hemifields. |
Arielle Borovsky; Sarah C. Creel Children and adults integrate talker and verb information in online processing Journal Article In: Developmental Psychology, vol. 50, no. 5, pp. 1600–1613, 2014. @article{Borovsky2014,Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a linguistic cue (a sentential action) during speech comprehension in an eye-tracked sentence comprehension task with children (ages 3-10 years) and college-aged adults. Participants were initially familiarized with 2 talkers who identified their respective roles (e.g., PRINCESS and PIRATE) before hearing a previously introduced talker name an action and object ("I want to hold the sword," in the pirate's voice). As the sentence was spoken, eye movements were recorded to 4 objects that varied in relationship to the sentential talker and action (target: SWORD, talker-related: SHIP, action-related: WAND, and unrelated: CARRIAGE). The task was to select the named image. Even young child listeners rapidly combined inferences about talker identity with the action, allowing them to fixate on the target before it was mentioned, although there were developmental and vocabulary differences on this task. Results suggest that children, like adults, store real-world knowledge of a talker's role and actively use this information to interpret speech. |
Elise Klein; S. Huber; Hans-Christoph Nuerk; Korbinian Moeller Operational momentum affects eye fixation behaviour Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 8, pp. 1614–1625, 2014. @article{Klein2014a,The operational momentum effect (OM) indicates an association of mental addition with a rightward spatial bias, whereas subtraction is associated with a leftward bias. To evaluate the assumed attentional origin of the OM effect, we evaluated not only participants' relative estimation error in a task requiring them to locate addition and subtraction results on a given number line but also their eye-fixation behaviour. Furthermore, to investigate the situatedness of spatial-numerical associations, the orientation of the number line (left-to-right vs. right-to-left) was manipulated. OM biases in participants' explicit number line estimations and more implicit eye-fixation behaviour are integrated into a two-process hypothesis of the OM effect suggesting a first rough spatial anticipation followed by an evaluation/correction process. This account not only is capable of accounting for the results observed for participants' relative estimation error but is also corroborated by the eye-fixation results. Importantly, the fact that all effects were found independent of the orientation of the number line indicates that spatial-numerical associations such as the OM effect may not be hard-wired associations of spatial and numerical representations but rather reflect influences of situatedness on numerical cognition. |
Jianbo Xiao; Yu-Qiong Niu; Steven Wiesner; Xin Huang Normalization of neuronal responses in cortical area MT across signal strengths and motion directions Journal Article In: Journal of Neurophysiology, vol. 112, no. 6, pp. 1291–1306, 2014. @article{Xiao2014,Multiple visual stimuli are common in natural scenes, yet it remains unclear how multiple stimuli interact to influence neuronal responses. We investigated this question by manipulating relative signal strengths of two stimuli moving simultaneously within the receptive fields (RFs) of neurons in the extrastriate middle temporal (MT) cortex. Visual stimuli were overlapping random-dot patterns moving in two directions separated by 90°. We first varied the motion coherence of each random-dot pattern and characterized, across the direction tuning curve, the relationship between neuronal responses elicited by bidirectional stimuli and by the constituent motion components. The tuning curve for bidirectional stimuli showed response normalization and can be accounted for by a weighted sum of the responses to the motion components. Allowing nonlinear, multiplicative interaction between the two component responses significantly improved the data fit for some neurons, and the interaction mainly had a suppressive effect on the neuronal response. The weighting of the component responses was not fixed but dependent on relative signal strengths. When two stimulus components moved at different coherence levels, the response weight for the higher-coherence component was significantly greater than that for the lower-coherence component. We also varied relative luminance levels of two coherently moving stimuli and found that MT response weight for the higher-luminance component was also greater. These results suggest that competition between multiple stimuli within a neuron's RF depends on relative signal strengths of the stimuli and that multiplicative nonlinearity may play an important role in shaping the response tuning for multiple stimuli. |
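The weighted-sum model with an optional multiplicative interaction term that Xiao et al. fit to MT responses can be sketched with von Mises direction tuning for each motion component. The tuning shape, amplitudes, weights, and interaction coefficient below are hypothetical stand-ins; in the paper these are fit per neuron.

```python
import numpy as np

def component_response(theta, pref, amp, kappa=2.0):
    """Von Mises direction tuning for a single motion component (degrees)."""
    rad = np.deg2rad(theta - pref)
    return amp * np.exp(kappa * (np.cos(rad) - 1.0))

def bidirectional_response(theta, pref1, pref2, amp1, amp2, w1, w2, m=0.0):
    """Weighted sum of the two component responses, plus an optional
    multiplicative interaction term (m < 0 gives suppression)."""
    r1 = component_response(theta, pref1, amp1)
    r2 = component_response(theta, pref2, amp2)
    return w1 * r1 + w2 * r2 + m * r1 * r2

# Two patterns moving 90 deg apart; the higher-coherence component
# (modeled here as a larger amplitude) receives the larger response
# weight, as reported for MT neurons.
thetas = np.arange(0, 360, 10)
resp = bidirectional_response(thetas, pref1=0.0, pref2=90.0,
                              amp1=40.0, amp2=20.0,  # high vs. low coherence
                              w1=0.7, w2=0.4, m=-0.005)
```

With these stand-in values the bidirectional tuning curve peaks at the high-coherence component's direction, illustrating the normalization-like dominance of the stronger stimulus.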
Antoine Coutrot; N. Guyader How saliency, faces, and sound influence gaze in dynamic social scenes Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–17, 2014. @article{Coutrot2014,Conversation scenes are a typical example in which classical models of visual attention dramatically fail to predict eye positions. Indeed, these models rarely consider faces as particular gaze attractors and never take into account the important auditory information that always accompanies dynamic social scenes. We recorded the eye movements of participants viewing dynamic conversations taking place in various contexts. Conversations were seen either with their original soundtracks or with unrelated soundtracks (unrelated speech and abrupt or continuous natural sounds). First, we analyze how auditory conditions influence the eye movement parameters of participants. Then, we model the probability distribution of eye positions across each video frame with a statistical method (Expectation-Maximization), allowing the relative contribution of different visual features such as static low-level visual saliency (based on luminance contrast), dynamic low-level visual saliency (based on motion amplitude), faces, and center bias to be quantified. Through experimental and modeling results, we show that regardless of the auditory condition, participants look more at faces, and especially at talking faces. Hearing the original soundtrack makes participants follow the speech turn-taking more closely. However, we do not find any difference between the different types of unrelated soundtracks. These eye-tracking results are confirmed by our model that shows that faces, and particularly talking faces, are the features that best explain the gazes recorded, especially in the original soundtrack condition. Low-level saliency is not a relevant feature to explain eye positions made on social scenes, even dynamic ones. Finally, we propose groundwork for an audiovisual saliency model. |
Guilhem Ibos; David J. Freedman Dynamic integration of task-relevant visual features in posterior parietal cortex Journal Article In: Neuron, vol. 83, no. 6, pp. 1468–1480, 2014. @article{Ibos2014,The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks that depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task-relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. |
Yamila Sevilla; Mora Maldonado; Diego E. Shalom Pupillary dynamics reveal computational cost in sentence planning Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 6, pp. 1041–1052, 2014. @article{Sevilla2014,This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed higher increase in pupil size for the production of passive and object dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. Differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production. |
Zhenlan Jin; Scott N. J. Watamaniuk; Aarlenne Zein Khan; Elena Potapchuk; Stephen J. Heinen Motion integration for ocular pursuit does not hinder perceptual segregation of moving objects Journal Article In: Journal of Neuroscience, vol. 34, no. 17, pp. 5835–5841, 2014. @article{Jin2014,When confronted with a complex moving stimulus, the brain can integrate local element velocities to obtain a single motion signal, or segregate the elements to maintain awareness of their identities. The integrated motion signal can drive smooth-pursuit eye movements (Heinen and Watamaniuk, 1998), whereas the segregated signal guides attentive tracking of individual elements in multiple-object tracking tasks (MOT; Pylyshyn and Storm, 1988). It is evident that these processes can occur simultaneously, because we can effortlessly pursue ambulating creatures while inspecting disjoint moving features, such as arms and legs, but the underlying mechanism is unknown. Here, we provide evidence that separate neural circuits perform the mathematically opposed operations of integration and segregation, by demonstrating with a dual-task paradigm that the two processes do not share attentional resources. Human observers attentively tracked a subset of target elements composing a small MOT stimulus, while pursuing it ocularly as it translated across a computer display. Integration of the multidot stimulus yielded optimal pursuit. Importantly, performing MOT while pursuing the stimulus did not degrade performance on either task compared with when each was performed alone, indicating that they did not share attention. A control experiment showed that pursuit was not driven by integration of only the nontargets, leaving the MOT targets free for segregation. Nor was a predictive strategy used to pursue the stimulus, because sudden changes in its global velocity were accurately followed. The results suggest that separate neural mechanisms can simultaneously segregate and integrate the same motion signals. |
Arani Roy; Stephen V. Shepherd; Michael L. Platt Reversible inactivation of pSTS suppresses social gaze following in the macaque (Macaca mulatta) Journal Article In: Social Cognitive and Affective Neuroscience, vol. 9, no. 2, pp. 209–217, 2014. @article{Roy2014,Humans and other primates shift their attention to follow the gaze of others [gaze following (GF)]. This behavior is a foundational component of joint attention, which is severely disrupted in neurodevelopmental disorders such as autism and schizophrenia. Both cortical and subcortical pathways have been implicated in GF, but their contributions remain largely untested. While the proposed subcortical pathway hinges crucially on the amygdala, the cortical pathway is thought to require perceptual processing by a region in the posterior superior temporal sulcus (pSTS). To determine whether pSTS is necessary for typical GF behavior, we engaged rhesus macaques in a reward discrimination task confounded by leftward- and rightward-facing social distractors following saline or muscimol injections into left pSTS. We found that reversible inactivation of left pSTS with muscimol strongly suppressed GF, as assessed by reduced influence of observed gaze on target choices and saccadic reaction times. These findings demonstrate that activity in pSTS is required for normal GF by primates. |
L. L. Tanaka; J. C. Dessing; Pankhuri Malik; S. L. Prime; J. Douglas Crawford The effects of TMS over dorsolateral prefrontal cortex on trans-saccadic memory of multiple objects Journal Article In: Neuropsychologia, vol. 63, pp. 185–193, 2014. @article{Tanaka2014,Humans typically make several rapid eye movements (saccades) per second. It is thought that visual working memory can retain and spatially integrate three to four objects or features across each saccade but little is known about this neural mechanism. Previously we showed that transcranial magnetic stimulation (TMS) to the posterior parietal cortex and frontal eye fields degrade trans-saccadic memory of multiple object features (Prime, Vesia, & Crawford, 2008, Journal of Neuroscience, 28(27), 6938–6949; Prime, Vesia, & Crawford, 2010, Cerebral Cortex, 20(4), 759–772). Here, we used a similar protocol to investigate whether dorsolateral prefrontal cortex (DLPFC), an area involved in spatial working memory, is also involved in trans-saccadic memory. Subjects were required to report changes in stimulus orientation with (saccade task) or without (fixation task) an eye movement in the intervening memory interval. We applied single-pulse TMS to left and right DLPFC during the memory delay, timed at three intervals to arrive approximately 100 ms before, 100 ms after, or at saccade onset. In the fixation task, left DLPFC TMS produced inconsistent results, whereas right DLPFC TMS disrupted performance at all three intervals (significantly for presaccadic TMS). In contrast, in the saccade task, TMS consistently facilitated performance (significantly for left DLPFC/perisaccadic TMS and right DLPFC/postsaccadic TMS) suggesting a dis-inhibition of trans-saccadic processing. These results are consistent with a neural circuit of trans-saccadic memory that overlaps and interacts with, but is partially separate from, the circuit for visual working memory during sustained fixation. |
Richard A. I. Bethlehem; Serge O. Dumoulin; Edwin S. Dalmaijer; Miranda Smit; Tos T. J. M. Berendschot; Tanja C. W. Nijboer; Stefan Van Der Stigchel Decreased fixation stability of the preferred retinal location in juvenile macular degeneration Journal Article In: PLoS ONE, vol. 9, no. 6, pp. e100171, 2014. @article{Bethlehem2014,Macular degeneration is the main cause of diminished visual acuity in the elderly. The juvenile form of macular degeneration has equally detrimental consequences for foveal vision. To compensate for loss of foveal vision, most patients with macular degeneration adopt an eccentric preferred retinal location that takes over tasks normally performed by the healthy fovea. It is unclear, however, whether the preferred retinal locus also develops properties typical of foveal vision. Here, we investigated whether the fixation characteristics of the preferred retinal locus resemble those of the healthy fovea. For this purpose, we used the fixation-offset paradigm and tracked eye position using a high spatial and temporal resolution infrared eye tracker. The fixation-offset paradigm measures release from fixation under different fixation conditions and has been shown useful for distinguishing between foveal and non-foveal fixation. We measured eye movements in nine healthy age-matched controls and five patients with juvenile macular degeneration. In addition, we performed a simulation with the same task in a group of five healthy controls. Our results show that the preferred retinal locus does not adopt a foveal type of fixation but instead drifts further away from its original fixation and has overall increased fixation instability. Furthermore, the fixation instability is most pronounced in low-frequency eye movements representing a slow drift from fixation. We argue that the increased fixation instability cannot be attributed to fixation under an unnatural angle. Instead, diminished visual acuity in the periphery causes reduced oculomotor control and results in increased fixation instability. |
Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier Video viewing: Do auditory salient events capture visual attention? Journal Article In: Annals of Telecommunications, vol. 69, no. 1-2, pp. 89–97, 2014. @article{Coutrot2014a,We assess whether salient auditory events contained in soundtracks modify eye movements when exploring videos. In a previous study, we found that, on average, nonspatial sound contained in video soundtracks impacts on eye movements. This result indicates that sound could play a leading part in visual attention models to predict eye movements. In this research, we go further and test whether the effect of sound on eye movements is stronger just after salient auditory events. To automatically spot salient auditory events, we used two auditory saliency models: the discrete energy separation algorithm and the energy model. Both models provide a saliency time curve, based on the fusion of several elementary audio features. The most salient auditory events were extracted by thresholding these curves. We examined some eye movement parameters just after these events rather than on all the video frames. We showed that the effect of sound on eye movements (variability between eye positions, saccade amplitude, and fixation duration) was not stronger after salient auditory events than on average over entire videos. Thus, we suggest that sound could impact on visual exploration not only after salient events but in a more global way. © 2013 Institut Mines-Télécom and Springer-Verlag France. |
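The Coutrot et al. abstract describes extracting salient auditory events by thresholding a saliency time curve produced by an auditory saliency model. As a rough illustration of that general idea (not the authors' actual implementation; the function name, threshold value, and toy data below are hypothetical), event onsets can be found where the curve first crosses above a threshold:

```python
import numpy as np

def extract_salient_events(saliency, threshold, min_gap=1):
    """Return onset indices where a saliency time curve crosses above
    `threshold`. Onsets closer together than `min_gap` samples are merged
    (only the first onset of a cluster is kept). Hypothetical sketch, not
    the DESA or energy model used in the paper."""
    above = saliency > threshold
    # An onset is a sample that is above threshold while its predecessor is not.
    onsets = np.flatnonzero(above & ~np.concatenate(([False], above[:-1])))
    merged = []
    for i in onsets:
        if not merged or i - merged[-1] >= min_gap:
            merged.append(int(i))
    return merged

# Toy saliency curve with two bursts above a 0.5 threshold
curve = np.array([0.1, 0.2, 0.8, 0.9, 0.3, 0.1, 0.7, 0.6, 0.2])
print(extract_salient_events(curve, 0.5))  # → [2, 6]
```

Eye-movement parameters (saccade amplitude, fixation duration, dispersion between observers) would then be analyzed in windows following the returned onset indices rather than over all frames.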
