All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2012 |
Morgan D. Barense; Iris I. A. Groen; Andy C. H. Lee; Lok-Kin Yeung; Sinead M. Brady; Mariella Gregori; Narinder Kapur; Timothy J. Bussey; Lisa M. Saksida; Richard N. A. Henson Intact memory for irrelevant information impairs perception in amnesia Journal Article In: Neuron, vol. 75, no. 1, pp. 157–167, 2012. @article{Barense2012, Memory and perception have long been considered separate cognitive processes, and amnesia resulting from medial temporal lobe (MTL) damage is thought to reflect damage to a dedicated memory system. Recent work has questioned these views, suggesting that amnesia can result from impoverished perceptual representations in the MTL, causing an increased susceptibility to interference. Using a perceptual matching task for which fMRI implicated a specific MTL structure, the perirhinal cortex, we show that amnesics with MTL damage including the perirhinal cortex, but not those with damage limited to the hippocampus, were vulnerable to object-based perceptual interference. Importantly, when we controlled such interference, their performance recovered to normal levels. These findings challenge prevailing conceptions of amnesia, suggesting that effects of damage to specific MTL regions are better understood not in terms of damage to a dedicated declarative memory system, but in terms of impoverished representations of the stimuli those regions maintain. © 2012 Elsevier Inc. |
Joseph Arizpe; Dwight J. Kravitz; Galit Yovel; Chris I. Baker Start position strongly influences fixation patterns during face processing: Difficulties with eye movements as a measure of information use Journal Article In: PLoS ONE, vol. 7, no. 2, pp. e31106, 2012. @article{Arizpe2012, Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale. |
Jane Ashby; Jinmian Yang; Kris H. C. Evans; Keith Rayner Eye movements and the perceptual span in silent and oral reading Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 4, pp. 634–640, 2012. @article{Ashby2012, Previous research has examined parafoveal processing during silent reading, but little is known about the role of these processes in oral reading. Given that masking parafoveal information slows down silent reading, we asked whether a similar effect also occurs in oral reading. To investigate the role of parafoveal processing in silent and oral reading, we manipulated the parafoveal information available to readers by changing the size of a gaze-contingent moving window. Participants read silently and orally in a one-word window and a three-word window condition as we monitored their eye movements. The lack of parafoveal information slowed reading speed in both oral and silent reading. However, the effects of parafoveal information were larger in silent reading than in oral reading, because of the different effects of preview information on both when and how often the eyes move. Parafoveal information benefitted silent reading for faster readers more than for slower readers. |
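For readers unfamiliar with the gaze-contingent moving-window technique described in the abstract above, the sketch below illustrates the core display logic in Python. It is an illustrative simplification, not the authors' experimental code: the hypothetical window here covers the fixated word and the words to its right, and a real experiment would update the display from eye-tracker samples in real time.

```python
# Illustrative sketch of an n-word moving-window display (not experimental code).

def moving_window(words, fixated_index, window_size, mask_char="x"):
    """Mask every word outside the window that starts at the fixated word."""
    lo, hi = fixated_index, fixated_index + window_size
    return " ".join(
        word if lo <= i < hi else mask_char * len(word)
        for i, word in enumerate(words)
    )

sentence = "Participants read silently and orally while eye movements were recorded".split()
print(moving_window(sentence, fixated_index=2, window_size=1))  # one-word window
print(moving_window(sentence, fixated_index=2, window_size=3))  # three-word window
```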
Hiroshi Ashida; Ichiro Kuriki; Ikuya Murakami; Rumi Hisakata; Akiyoshi Kitaoka Direction-specific fMRI adaptation reveals the visual cortical network underlying the "Rotating Snakes" illusion Journal Article In: NeuroImage, vol. 61, no. 4, pp. 1143–1152, 2012. @article{Ashida2012, The "Rotating Snakes" figure elicits a clear sense of anomalous motion in stationary repetitive patterns. We used an event-related fMRI adaptation paradigm to investigate cortical mechanisms underlying the illusory motion. Following an adapting stimulus (S1) and a blank period, a probe stimulus (S2) that elicited illusory motion either in the same or in the opposite direction was presented. Attention was controlled by a fixation task, and control experiments precluded explanations in terms of artefacts of local adaptation, afterimages, or involuntary eye movements. Recorded BOLD responses were smaller for S2 in the same direction than S2 in the opposite direction in V1-V4, V3A, and MT+, indicating direction-selective adaptation. Adaptation in MT+ was correlated with adaptation in V1 but not in V4. With possible downstream inheritance of adaptation, it is most likely that adaptation predominantly occurred in V1. The results extend our previous findings of activation in MT+ (I. Kuriki, H. Ashida, I. Murakami, and A. Kitaoka, 2008), revealing the activity of the cortical network for motion processing from V1 towards MT+. This provides evidence for the role of front-end motion detectors, which has been assumed in proposed models of the illusion. |
Ricky K. C. Au; Fuminori Ono; Katsumi Watanabe Time dilation induced by object motion is based on spatiotopic but not retinotopic positions Journal Article In: Frontiers in Psychology, vol. 3, pp. 58, 2012. @article{Au2012, Time perception of visual events depends on the visual attributes of the scene. Previous studies reported that motion of an object can induce an illusion of lengthened time. In the present study, we asked whether such a time dilation effect depends on the actual physical motion of the object (spatiotopic coordinate), or on its relative motion with respect to the retina (retinotopic coordinate). Observers were presented with a moving stimulus and a static reference stimulus in separate intervals, and judged which interval they perceived as having a longer duration, under conditions with eye fixation (Experiment 1) and with eye movement at the same velocity as the moving stimulus (Experiment 2). The data indicated that the perceived duration was longer under object motion, and depended on the actual movement of the object rather than relative retinal motion. These results support the notion that the brain possesses a spatiotopic representation of the real-world positions of objects, with which the perception of time is associated. |
A. J. Austin; Theodora Duka Mechanisms of attention to conditioned stimuli predictive of a cigarette outcome Journal Article In: Behavioural Brain Research, vol. 232, no. 1, pp. 183–189, 2012. @article{Austin2012, Attention to stimuli associated with a rewarding outcome may be mediated by the incentive motivational properties that the stimulus acquires during conditioning. Other theories of attention state that the prediction error (the discrepancy between the expected and the actual outcome) during conditioning guides attention; once the outcome is fully predicted, attention should be abolished for the conditioned stimulus. The current study examined which of these mechanisms is dominant in conditioning when the outcome is highly rewarding. Allocation of attention to stimuli associated with cigarettes (the rewarding outcome) was tested in 16 smokers, who underwent a classical conditioning paradigm, where abstract visual stimuli were paired with a tobacco outcome. Stimuli were associated with 100% (stimulus A), 50% (stimulus B), or 0% (stimulus C) probability of receiving tobacco. Attention was measured using an eye-tracker device, and the appetitive value of the stimuli was measured with subjective pleasantness ratings during the conditioning process. Dwell time bias (duration of eye gaze) was greatest overall for the A stimulus, and increased over conditioning. Attention to stimulus A was dependent on the ratings of pleasantness that the stimulus evoked, and on the desire to smoke. These findings appear to support the theory that attention for conditioned stimuli is dominated by the incentive motivational qualities of the outcome they predict, and implicate a role for attention in the maintenance of addictive behaviours like smoking. |
Robert G. Alexander; Gregory J. Zelinsky Effects of part-based similarity on visual search: The Frankenbear experiment Journal Article In: Vision Research, vol. 54, pp. 20–30, 2012. @article{Alexander2012, Do the target-distractor and distractor-distractor similarity relationships known to exist for simple stimuli extend to real-world objects, and are these effects expressed in search guidance or target verification? Parts of photorealistic distractors were replaced with target parts to create four levels of target-distractor similarity under heterogeneous and homogeneous conditions. We found that increasing target-distractor similarity and decreasing distractor-distractor similarity impaired search guidance and target verification, but that target-distractor similarity and heterogeneity/homogeneity interacted only in measures of guidance; distractor homogeneity lessens effects of target-distractor similarity by causing gaze to fixate the target sooner, not by speeding target detection following its fixation. |
Arjen Alink; Felix Euler; Nikolaus Kriegeskorte; Wolf Singer; Axel Kohler Auditory motion direction encoding in auditory cortex and high-level visual cortex Journal Article In: Human Brain Mapping, vol. 33, no. 4, pp. 969–978, 2012. @article{Alink2012, The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans. |
Ava-Ann Allman; Ulrich Ettinger; Ridha Joober; Gillian A. O'Driscoll Effects of methylphenidate on basic and higher-order oculomotor functions Journal Article In: Journal of Psychopharmacology, vol. 26, no. 11, pp. 1471–1479, 2012. @article{Allman2012, Eye movements are sensitive indicators of pharmacological effects on sensorimotor and cognitive processing. Methylphenidate (MPH) is one of the most prescribed medications in psychiatry. It is increasingly used as a cognitive enhancer by healthy individuals. However, little is known of its effect on healthy cognition. Here we used oculomotor tests to evaluate the effects of MPH on basic oculomotor and executive functions. Twenty-nine males were given 20mg of MPH orally in a double-blind placebo-controlled crossover design. Participants performed visually-guided saccades, sinusoidal smooth pursuit, predictive saccades and antisaccades one hour post-capsule administration. Heart rate and blood pressure were assessed prior to capsule administration, and again before and after task performance. Visually-guided saccade latency decreased with MPH (p<0.004). Smooth pursuit gain increased on MPH (p<0.001) and number of saccades during pursuit decreased (p<0.001). Proportion of predictive saccades increased on MPH (p<0.004), specifically in conditions with predictable timing. Peak velocity of predictive saccades increased with MPH (p<0.01). Antisaccade errors and latency were unaffected. Physiological variables were also unaffected. The effects on visually-guided saccade latency and peak velocity are consistent with MPH effects on dopamine in basal ganglia. The improvements in predictive saccade conditions and smooth pursuit suggest effects on timing functions. |
Kaoru Amano; Tsunehiro Takeda; Tomoki Haji; Masahiko Terao; Kazushi Maruya; Kenji Matsumoto; Ikuya Murakami; Shin'ya Nishida Human neural responses involved in spatial pooling of locally ambiguous motion signals Journal Article In: Journal of Neurophysiology, vol. 107, no. 12, pp. 3493–3508, 2012. @article{Amano2012, Early visual motion signals are local and one-dimensional (1-D). For specification of global two-dimensional (2-D) motion vectors, the visual system should appropriately integrate these signals across orientation and space. Previous neurophysiological studies have suggested that this integration process consists of two computational steps (estimation of local 2-D motion vectors, followed by their spatial pooling), both being identified in the area MT. Psychophysical findings, however, suggest that under certain stimulus conditions, the human visual system can also compute mathematically correct global motion vectors from direct pooling of spatially distributed 1-D motion signals. To study the neural mechanisms responsible for this novel 1-D motion pooling, we conducted human magnetoencephalography (MEG) and functional MRI experiments using a global motion stimulus comprising multiple moving Gabors (global-Gabor motion). In the first experiment, we measured MEG and blood oxygen level-dependent responses while changing motion coherence of global-Gabor motion. In the second experiment, we investigated cortical responses correlated with direction-selective adaptation to the global 2-D motion, not to local 1-D motions. We found that human MT complex (hMT+) responses show both coherence dependency and direction selectivity to global motion based on 1-D pooling. The results provide the first evidence that hMT+ is the locus of 1-D motion pooling, as well as that of conventional 2-D motion pooling. |
Ken-ichi Amemori; Ann M. Graybiel Localized microstimulation of primate pregenual cingulate cortex induces negative decision-making Journal Article In: Nature Neuroscience, vol. 15, no. 5, pp. 776–785, 2012. @article{Amemori2012, The pregenual anterior cingulate cortex (pACC) has been implicated in human anxiety disorders and depression, but the circuit-level mechanisms underlying these disorders are unclear. In healthy individuals, the pACC is involved in cost-benefit evaluation. We developed a macaque version of an approach-avoidance decision task used to evaluate anxiety and depression in humans and, with multi-electrode recording and cortical microstimulation, we probed pACC function as monkeys performed this task. We found that the macaque pACC has an opponent process-like organization of neurons representing motivationally positive and negative subjective value. Spatial distribution of these two neuronal populations overlapped in the pACC, except in one subzone, where neurons with negative coding were more numerous. Notably, microstimulation in this subzone, but not elsewhere in the pACC, increased negative decision-making, and this negative biasing was blocked by anti-anxiety drug treatment. This cortical zone could be critical for regulating negative emotional valence and anxiety in decision-making. |
Brian A. Anderson; Steven Yantis Value-driven attentional and oculomotor capture during goal-directed, unconstrained viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 74, pp. 1644–1653, 2012. @article{Anderson2012, Covert shifts of attention precede and direct overt eye movements to stimuli that are task relevant or physically salient. A growing body of evidence suggests that the learned value of perceptual stimuli strongly influences their attentional priority. For example, previously rewarded but otherwise irrelevant and inconspicuous stimuli capture covert attention involuntarily. It is unknown, however, whether stimuli also draw eye movements involuntarily as a consequence of their reward history. Here, we show that previously rewarded but currently task-irrelevant stimuli capture both attention and the eyes. Value-driven oculomotor capture was observed during unconstrained viewing, when neither eye movements nor fixations were required, and was strongly related to individual differences in visual working memory capacity. The appearance of a reward-associated stimulus came to evoke pupil dilation over the course of training, which provides physiological evidence that the stimuli that elicit value-driven capture come to serve as reward-predictive cues. These findings reveal a close coupling of value-driven attentional capture and eye movements that has broad implications for theories of attention and reward learning. |
Jens K. Apel; Angelo Cangelosi; Rob Ellis; Jeremy Goslin; Martin H. Fischer Object affordance influences instruction span Journal Article In: Experimental Brain Research, vol. 223, no. 2, pp. 199–206, 2012. @article{Apel2012, We measured memory span for assembly instructions involving objects with handles oriented to the left or right side. Right-handed participants remembered more instructions when objects' handles were spatially congruent with the hand used in forthcoming assembly actions. No such affordance-based memory benefit was found for left-handed participants. These results are discussed in terms of motor simulation as an embodied rehearsal mechanism. |
Jens K. Apel; John M. Henderson; Fernanda Ferreira Targeting regressions: Do readers pay attention to the left? Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1108–1113, 2012. @article{Apel2012a, The perceptual span during normal reading extends approximately 14 to 15 characters to the right and three to four characters to the left of a current fixation. In the present study, we investigated whether the perceptual span extends farther than three to four characters to the left immediately before readers execute a regression. We used a display-change paradigm in which we masked words beyond the three-to-four-character range to the left of a fixation. We hypothesized that if reading behavior was affected by this manipulation before regressions but not before progressions, we would have evidence that the perceptual span extends farther left before leftward eye movements. We observed significantly shorter regressive saccades and longer fixation and gaze durations in the masked condition when a regression was executed. Forward saccades were entirely unaffected by the manipulations. We concluded that the perceptual span during reading changes, depending on the direction of a following saccade. |
Markus Bauer; Thomas Akam; Sabine Joseph; Elliot Freeman; Jon Driver Does visual flicker phase at gamma frequency modulate neural signal propagation and stimulus selection? Journal Article In: Journal of Vision, vol. 12, no. 4, pp. 1–10, 2012. @article{Bauer2012, Oscillatory synchronization of neuronal populations has been proposed to play a role in perceptual integration and attentional processing. However, some conflicting evidence has been found with respect to its causal relevance for sensory processing, particularly when using flickering visual stimuli with the aim of driving oscillations. We tested psychophysically whether the relative phase of gamma frequency flicker (60 Hz) between stimuli modulates well-known facilitatory lateral interactions between collinear Gabor patches (Experiment 1) or crowding of a peripheral target by irrelevant distractors (Experiment 2). Experiment 1 assessed the impact of suprathreshold Gabor flankers on detection of a near-threshold central Gabor target ("Lateral interactions paradigm"). The flanking stimuli could flicker either in phase or in anti-phase with each other. The typical facilitation of target detection was found with collinear flankers, but this was unaffected by flicker phase. Experiment 2 employed a "crowding" paradigm, where orientation discrimination of a peripheral target Gabor patch is disrupted when surrounded by irrelevant distractors. We found the usual crowding effect, which declined with spatial separation, but this was unaffected by relative flicker phase between target and distractors at all separations. These results imply that externally driven manipulations of gamma frequency phase cannot modulate perceptual integration in vision. |
Oliver Baumann; Jason B. Mattingley Functional topography of primary emotion processing in the human cerebellum Journal Article In: NeuroImage, vol. 61, no. 4, pp. 805–811, 2012. @article{Baumann2012, The cerebellum has an important role in the control and coordination of movement. It is now clear, however, that the cerebellum is also involved in neural processes underlying a wide variety of perceptual and cognitive functions, including the regulation of emotional responses. Contemporary neurobiological models of emotion assert that a small set of discrete emotions are mediated through distinct cortical and subcortical areas. Given the connectional specificity of neural pathways that link the cerebellum with these areas, we hypothesized that distinct sub-regions of the cerebellum might subserve the processing of different primary emotions. We used functional magnetic resonance imaging (fMRI) to identify neural activity patterns within the cerebellum in 30 healthy human volunteers as they categorized images that elicited each of the five primary emotions: happiness, anger, disgust, fear and sadness. In support of our hypothesis, all five emotions evoked spatially distinct patterns of activity in the posterior lobe of the cerebellum. We also detected overlaps between cerebellar activations for particular emotion categories, implying the existence of shared neural networks. By providing a detailed map of the functional topography of emotion processing in the cerebellum, our study provides important clues to the diverse effects of cerebellar pathology on human affective function. |
Valerie M. Beck; Andrew Hollingworth; Steven J. Luck Simultaneous control of attention by multiple working memory representations Journal Article In: Psychological Science, vol. 23, no. 8, pp. 887–898, 2012. @article{Beck2012, Working memory representations play a key role in controlling attention by making it possible to shift attention to task-relevant objects. Visual working memory has a capacity of three to four objects, but recent studies suggest that only one representation can guide attention at a given moment. We directly tested this proposal by monitoring eye movements while observers performed a visual search task in which they attempted to limit attention to objects drawn in two colors. When the observers were motivated to attend to one color at a time, they searched many consecutive items of one color (long run lengths) and exhibited a delay prior to switching gaze from one color to the other (switch cost). In contrast, when they were motivated to attend to both colors simultaneously, observers' gaze switched back and forth between the two colors frequently (short run lengths), with no switch cost. Thus, multiple working memory representations can concurrently guide attention. |
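The run-length and switch-cost measures described above can be illustrated with a short Python sketch. This is a hypothetical example, not the authors' analysis code; it assumes each fixation has been coded by the colour of the fixated item and its duration in milliseconds.

```python
# Hypothetical fixation sequence: (fixated item colour, fixation duration in ms)
from itertools import groupby

fixations = [("red", 210), ("red", 195), ("blue", 260), ("blue", 200), ("red", 255)]

def run_lengths(colors):
    """Lengths of consecutive runs of same-colour fixations."""
    return [len(list(group)) for _, group in groupby(colors)]

def switch_cost(fixations):
    """Mean duration of fixations that follow a colour switch minus within-run fixations."""
    switch, stay = [], []
    for prev, cur in zip(fixations, fixations[1:]):
        (switch if cur[0] != prev[0] else stay).append(cur[1])
    return sum(switch) / len(switch) - sum(stay) / len(stay)

print(run_lengths([c for c, _ in fixations]))  # e.g. [2, 2, 1]
print(switch_cost(fixations))                  # positive value = delay after switching colours
```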
Sara A. Beedie; Philip J. Benson; Ina Giegling; Dan Rujescu; David M. St. Clair Smooth pursuit and visual scanpaths: Independence of two candidate oculomotor risk markers for schizophrenia Journal Article In: The World Journal of Biological Psychiatry, vol. 13, no. 3, pp. 200–210, 2012. @article{Beedie2012, Objectives. Smooth pursuit and visual scanpath deficits are candidate trait markers for schizophrenia. It is not clear whether eye tracking dysfunction (ETD) and atypical scanpath behaviour are the product of the same underlying neurobiological processes. We have examined co-occurrence of ETD and scanpath disturbance in individuals with schizophrenia and healthy volunteers. Methods. Eye movements of individuals with schizophrenia (N = 96) and non-clinical age-matched comparison participants (N = 100) were recorded using non-invasive infrared oculography during smooth pursuit in both predictable (horizontal sinusoid) and less predictable (Lissajous sinusoid) conditions and a free viewing scanpath task. Results. Individuals with schizophrenia demonstrated scanning deficits in both tasks. There was no association between performance measures of smooth pursuit and scene scanpaths in patient or control groups. Odds ratios comparing the likelihood of scanpath dysfunction when ETD was present, and the likelihood of finding scanpath dysfunction when ETD was absent were not significant in patients or controls in either pursuit variant, suggesting that ETD and scanpath dysfunction are independent anomalies in schizophrenia. Conclusion. ETD and scanpath disturbance appear to reflect independent oculomotor or neurocognitive deficits in schizophrenia. Each task may confer unique information about the pathophysiology of psychosis. © 2012 Informa Healthcare. |
Nathalie N. Bélanger; Timothy J. Slattery; Rachel I. Mayberry; Keith Rayner Skilled deaf readers have an enhanced perceptual span in reading Journal Article In: Psychological Science, vol. 23, no. 7, pp. 816–823, 2012. @article{Belanger2012, Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading. |
Artem V. Belopolsky; Jan Theeuwes Updating the premotor theory: The allocation of attention is not always accompanied by saccade preparation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 4, pp. 902–914, 2012. @article{Belopolsky2012, There is an ongoing controversy regarding the relationship between covert attention and saccadic eye movements. While there is quite some evidence that the preparation of a saccade is obligatorily preceded by a shift of covert attention, the reverse is not clear: Is allocation of attention always accompanied by saccade preparation? Recently, a shifting and maintenance account was proposed suggesting that shifting and maintenance components of covert attention differ in their relation to the oculomotor system. Specifically, it was argued that a shift of covert attention is always accompanied by activation of the oculomotor program, while maintaining covert attention at a location can be accompanied either by activation or suppression of the oculomotor program, depending on the probability of executing an eye movement to the attended location. In the present study we tested whether there is such an obligatory coupling between shifting of attention and saccade preparation and how quickly saccade preparation gets suppressed. The results showed that attention shifting was always accompanied by saccade preparation whenever covert attention had to be shifted during visual search, as well as in response to exogenous or endogenous cues. However, for the endogenous cues the saccade program to the attended location was suppressed very soon after the attention shift was completed. The current findings support the shifting and maintenance account and indicate that the premotor theory needs to be updated to include a shifting and maintenance component for the cases in which covert shifts of attention are made without the intention to execute a saccade. |
Philip J. Benson; Sara A. Beedie; Elizabeth Shephard; Ina Giegling; Dan Rujescu; David M. St. Clair Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy Journal Article In: Biological Psychiatry, vol. 72, no. 9, pp. 716–724, 2012. @article{Benson2012, Background: We have investigated which eye-movement tests alone and combined can best discriminate schizophrenia cases from control subjects and their predictive validity. Methods: A training set of 88 schizophrenia cases and 88 controls had a range of eye movements recorded; the predictive validity of the tests was then examined on eye-movement data from 34 9-month retest cases and controls, and from 36 novel schizophrenia cases and 52 control subjects. Eye movements were recorded during smooth pursuit, fixation stability, and free-viewing tasks. Group differences on performance measures were examined by univariate and multivariate analyses. Model fitting was used to compare regression, boosted tree, and probabilistic neural network approaches. Results: As a group, schizophrenia cases differed from control subjects on almost all eye-movement tests, including horizontal and Lissajous pursuit, visual scanpath, and fixation stability; fixation dispersal during free viewing was the best single discriminator. Effects were stable over time, and independent of sex, medication, or cigarette smoking. A boosted tree model achieved perfect separation of the 88 training cases from 88 control subjects; its predictive validity on retest assessments and novel cases and control subjects was 87.8%. However, when we examined the whole data set of 298 assessments, a cross-validated probabilistic neural network model was superior and could discriminate all cases from controls with near perfect accuracy at 98.3%. Conclusions: Simple viewing patterns can detect eye-movement abnormalities that can discriminate schizophrenia cases from control subjects with exceptional accuracy. © 2012 Society of Biological Psychiatry. |
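As a rough illustration of the model-comparison logic in the study above (not the authors' models, features, or data), the sketch below fits a boosted-tree classifier to simulated eye-movement measures and reports cross-validated accuracy; the feature names and group differences are invented for illustration.

```python
# Simulated illustration of cross-validated case/control classification.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200                                  # hypothetical participants: 100 cases, 100 controls
y = np.repeat([1, 0], n // 2)

# Illustrative features: pursuit gain, fixation dispersal, scanpath length
X = rng.normal(size=(n, 3))
X[y == 1] += [0.8, 1.0, 0.6]             # built-in (simulated) group differences

clf = GradientBoostingClassifier(random_state=0)   # boosted-tree model
scores = cross_val_score(clf, X, y, cv=10)         # 10-fold cross-validated accuracy
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```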
Valerie Benson; Monica S. Castelhano; Sheena K. Au-Yeung; Keith Rayner Eye movements reveal no immediate "WOW" ("which one's weird") effect in autism spectrum disorder Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 6, pp. 1139–1150, 2012. @article{Benson2012a, Autism spectrum disorder (ASD) and typically developed (TD) adult participants viewed pairs of scenes for a simple "spot the difference" (STD) and a complex "which one's weird" (WOW) task. There were no group differences in the STD task. In the WOW task, the ASD group took longer to respond manually and to begin fixating the target "weird" region. Additionally, as indexed by the first-fixation duration into the target region, the ASD group failed to "pick up" immediately on what was "weird". The findings are discussed with reference to the complex information processing theory of ASD (Minshew & Goldstein, 1998). |
Valerie Benson; Magdalena Ietswaart; David Milner Eye movements and verbal report in a single case of visual neglect Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e43743, 2012. @article{Benson2012b, In this single case study, visuospatial neglect patient P1 demonstrated a dissociation between an intact ability to make appropriate reflexive eye movements to targets in the neglected field with latencies of <400 ms, while failing to report targets presented at such durations in a separate verbal detection task. In contrast, there was a failure to evoke the usually robust Remote Distractor Effect in P1, even though distractors in the neglected field were presented at above threshold durations. Together those data indicate that the tight coupling that is normally shown between attention and eye movements appears to be disrupted for low-level orienting in P1. A comparable disruption was also found for high-level cognitive processing tasks, namely reading and scene scanning. The findings are discussed in relation to sampling, attention and awareness in neglect. |
Elika Bergelson; Daniel Swingley At 6-9 months, human infants know the meanings of many common nouns Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 9, pp. 3253–3258, 2012. @article{Bergelson2012, It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins. |
Douwe P. Bergsma; Joris A. Elshout; G. J. Wildt; Albert V. Berg Transfer effects of training-induced visual field recovery in patients with chronic stroke Journal Article In: Topics in Stroke Rehabilitation, vol. 19, no. 3, pp. 212–225, 2012. @article{Bergsma2012, OBJECTIVE: Visual training of light detection in the transition zone between blind and healthy hemianopic visual fields leads to improvement of color and simple pattern recognition. Recently, we demonstrated that visual field enlargement (VFE) also occurs when an area just beyond the transition zone is stimulated. In the current study, we attempted to determine whether this peripheral training also causes improvement in color and shape perception and reading speed. Further, we evaluated which measure of VFE relates best to improvements in performance: the average border shift (ABS) in degrees or the estimated amount of cortical surface gain (ECSG) in millimeters, using the cortical magnification factor (CMF). METHOD: Twelve patients received 40 sessions of 1-hour restorative function training (RFT). Before and after training, we measured visual fields and reading speed. Additionally, color and shape perception in the trained visual field area was measured in 7 patients. RESULTS: VFE was found for 9 of 12 patients. Significant improvements were observed in reading speed for 8 of 12 patients and in color and shape perception for 3 of 7 patients. ECSG correlates significantly with performance; ABS does not. Our data indicate that the threshold ECSG, needed for significant changes in color and shape perception and reading speed, is about 6 mm. CONCLUSIONS: White stimulus training-induced VFE can lead to improved color and shape perception and to increased reading speed in and beyond the pretraining transition zone if ECSG is sufficiently large. The latter depends on the eccentricity of the VFE. |
Mario Bettenbühl; Marco Rusconi; Ralf Engbert; Matthias Holschneider Bayesian selection of Markov Models for symbol sequences: Application to microsaccadic eye movements Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e43388, 2012. @article{Bettenbuehl2012, Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems. |
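The integrated likelihood at the core of this model-selection procedure can be written compactly using Dirichlet-multinomial marginals. The Python sketch below is an illustrative implementation with a symmetric Dirichlet prior, not the authors' code, and uses a toy two-symbol sequence.

```python
# Illustrative Bayesian Markov-order selection via integrated (marginal) likelihoods.
from collections import defaultdict
from math import lgamma

def log_marginal_likelihood(seq, order, alphabet, alpha=1.0):
    """Log integrated likelihood of a sequence under an order-k Markov model
    with a symmetric Dirichlet(alpha) prior on each context's transition distribution."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    k = len(alphabet)
    logml = 0.0
    for transitions in counts.values():
        n = sum(transitions.values())
        logml += lgamma(k * alpha) - lgamma(k * alpha + n)
        for symbol in alphabet:
            logml += lgamma(alpha + transitions.get(symbol, 0)) - lgamma(alpha)
    return logml

# Toy microsaccade-direction sequence (L = leftward, R = rightward).
# For a strict comparison, all orders should be scored on the same symbols
# (i.e. after discarding the first max-order symbols of the sequence).
seq = list("LRLRLRLLRLRLRRLRLRLR")
for order in (0, 1, 2):
    print(order, round(log_marginal_likelihood(seq, order, alphabet="LR"), 2))
```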
Rainer Beurskens; Otmar Bock Age-related decline of peripheral visual processing: The role of eye movements Journal Article In: Experimental Brain Research, vol. 217, no. 1, pp. 117–124, 2012. @article{Beurskens2012, Earlier work suggests that the area of space from which useful visual information can be extracted (useful field of view, UFoV) shrinks in old age. We investigated whether this shrinkage, documented previously with a visual search task, extends to a bimanual tracking task. Young and elderly subjects executed two concurrent tracking tasks with their right and left arms. The separation between tracking displays varied from 3 to 35 cm. Subjects were asked to fixate straight ahead (condition FIX) or were free to move their eyes (condition FREE). Eye position was registered. In FREE, young subjects tracked equally well at all display separations. Elderly subjects produced higher tracking errors, and the difference between age groups increased with display separation. Eye movements were comparable across age groups. In FIX, elderly and young subjects tracked less well at large display separations. Seniors again produced higher tracking errors in FIX, but the difference between age groups did not increase reliably with display separation. However, older subjects produced a substantial number of illicit saccades, and when the effect of those saccades was factored out, the difference between young and older subjects' tracking did increase significantly with display separation in FIX. We conclude that the age-related shrinkage of UFoV, previously documented with a visual search task, is observable with a manual tracking task as well. Older subjects seem to partly compensate their deficit by illicit saccades. Since the deficit is similar in both conditions, it may be located downstream from the convergence of retinal and oculomotor signals. |
Hans-Joachim Bieg; Jean-Pierre Bresciani; Heinrich H. Bülthoff; Lewis L. Chuang Looking for discriminating is different from looking for looking's sake Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e45445, 2012. @article{Bieg2012, Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude. |
Markus Bindemann; Adam Sandford; Katherine Gillatt; Meri Avetisyan; Ahmed M. Megreya Recognising faces seen alone or with others: Why are two heads worse than one? Journal Article In: Perception, vol. 41, no. 4, pp. 415–435, 2012. @article{Bindemann2012, The ability to identify an unfamiliar target face from an identity lineup declines when it is accompanied by a second face during visual encoding. This two-face disadvantage is still little studied and its basis remains poorly understood. This study investigated several possible explanations for this phenomenon. Experiments 1 and 2 varied the number of potential targets (1 or 2) and the number of faces in a lineup (5 or 10) to explore if this effect arises from the number of identity comparisons that need to be made to detect a target in a lineup. These experiments also explored if this effect arises from an uncertainty concerning which is the to-be-identified target in two-face displays, by cueing the relevant face during encoding. Experiment 3 then examined whether the two-face disadvantage reflects the depth of face encoding or a memory effect. The results show that this effect arises from the additional comparisons that are necessary to compare two potential targets to an identity lineup when memory demands are minimized (Experiment 1), but it reflects a difficulty in remembering several faces when targets and lineups cannot be viewed simultaneously (Experiments 2 and 3). However, in both cases the two-face disadvantage could not be eliminated fully by cueing the target. This hints at a further possible locus for this effect, which might reflect perceptual interference during the initial encoding of the target. The implications of these findings are discussed. |
Eileen E. Birch; Jingyun Wang; Joost Felius; David R. Stager; Richard W. Hertle Fixation control and eye alignment in children treated for dense congenital or developmental cataracts Journal Article In: Journal of AAPOS, vol. 16, no. 2, pp. 156–160, 2012. @article{Birch2012, Background: Many children treated for cataracts develop strabismus and nystagmus; however, little is known about the critical period for adverse ocular motor outcomes with respect to age of onset and duration. Methods: Children who had undergone extraction of dense cataracts by the age of 5 years were enrolled postoperatively. Ocular alignment was assessed regularly throughout follow-up. Fixation stability and associated ocular oscillations were determined from eye movement recordings at ≥5 years old. Multivariate logistic regression was used to evaluate whether laterality (unilateral vs bilateral), age at onset, and/or duration of visual deprivation were associated with adverse ocular motor outcomes and to determine multivariate odds ratios (ORs). Results: A total of 41 children were included. Of these, 27 (66%) developed strabismus; 29 (71%) developed nystagmus. Congenital onset was associated with significant risk for strabismus (OR, 5.3; 95% CI, 1.1-34.1); infantile onset was associated with significant risk for nystagmus (OR, 13.6; 95% CI, 1.6-302). Duration >6 weeks was associated with significant risk for both strabismus (OR, 9.1; 95% CI, 1.9-54.2) and nystagmus (OR, 46.2; 95% CI, 6.0-1005). Congenital onset was associated with significant risk for interocular asymmetry in severity of nystagmus (OR, 25.0; 95% CI, 2.6-649), as was unilateral cataract (OR, 58.9; 95% CI, 5.1-2318). Conclusions: Laterality (unilateral vs bilateral) and age at onset were significant nonmodifiable risk factors for adverse ocular motor outcomes. Duration of deprivation was a significant modifiable risk factor for adverse ocular motor outcomes. The current study demonstrated that reduced risk for nystagmus and strabismus was associated with deprivation ≤6 weeks. |
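A minimal sketch of the multivariate logistic-regression step used to obtain adjusted odds ratios is given below. The variables, effect sizes, and data are simulated purely for illustration and do not reproduce the study's dataset or exact modelling choices.

```python
# Simulated example: adjusted odds ratios from multivariate logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300  # hypothetical cohort (the actual study analysed 41 children)
df = pd.DataFrame({
    "unilateral": rng.integers(0, 2, n),        # 1 = unilateral cataract
    "congenital": rng.integers(0, 2, n),        # 1 = congenital onset
    "long_deprivation": rng.integers(0, 2, n),  # 1 = deprivation > 6 weeks
})
# Simulated outcome with built-in effects of onset and duration
logit_p = -1.0 + 1.5 * df["congenital"] + 2.0 * df["long_deprivation"]
df["strabismus"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p.to_numpy())))

X = sm.add_constant(df[["unilateral", "congenital", "long_deprivation"]])
fit = sm.Logit(df["strabismus"], X).fit(disp=False)
print(np.exp(fit.params))      # exponentiated coefficients = adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale
```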
Gary D. Bird; Johan Lauwereyns; Matthew T. Crawford The role of eye movements in decision making and the prospect of exposure effects Journal Article In: Vision Research, vol. 60, pp. 16–21, 2012. @article{Bird2012, The aim of the current study was to follow on from previous findings that eye movements can have a causal influence on preference formation. Shimojo et al. (2003) previously found that faces that were presented for a longer duration in a two alternative forced choice task were more likely to be judged as more attractive. This effect only occurred when an eye movement was made towards the faces (with no effect when faces were centrally presented). The current study replicated Shimojo et al.'s (2003) design, whilst controlling for potential inter-stimuli interference in central presentations. As per previous findings, when eye movements were made towards the stimuli, faces that were presented for longer durations were preferred. However, faces that were centrally presented (thus not requiring an eye movement) were also preferred in the current study. The presence of an exposure duration effect for centrally presented faces casts doubt on the necessity of the eye movement in this decision making process and has implications for decision theories that place an emphasis on the role of eye movements in decision making. |
Arielle Borovsky; Jeffrey L. Elman; Anne Fernald Knowing a lot for one's age: Vocabulary skill and not age is associated with anticipatory incremental sentence interpretation in children and adults Journal Article In: Journal of Experimental Child Psychology, vol. 112, no. 4, pp. 417–436, 2012. @article{Borovsky2012, Adults can incrementally combine information from speech with astonishing speed to anticipate future words. Concurrently, a growing body of work suggests that vocabulary ability is crucially related to lexical processing skills in children. However, little is known about this relationship with predictive sentence processing in children or adults. We explore this question by comparing the degree to which an upcoming sentential theme is anticipated by combining information from a prior agent and action. 48 children, aged 3 to 10, and 48 college-aged adults' eye-movements were recorded as they heard a sentence (e.g., The pirate hides the treasure) in which the object referred to one of four images that included an agent-related, action-related and unrelated distractor image. Pictures were rotated so that, across all versions of the study, each picture appeared in all conditions, yielding a completely balanced within-subjects design. Adults and children quickly made use of combinatory information available at the action to generate anticipatory looks to the target object. Speed of anticipatory fixations did not vary with age. When controlling for age, individuals with higher vocabularies were faster to look to the target than those with lower vocabulary scores. Together, these results support and extend current views of incremental processing in which adults and children make use of linguistic information to continuously update their mental representation of ongoing language. |
Gianfranco Bosco; Sergio Delle Monache; Francesco Lacquaniti Catching what we can't see: Manual interception of occluded fly-ball trajectories Journal Article In: PLoS ONE, vol. 7, no. 11, pp. e49381, 2012. @article{Bosco2012, Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to elaborate a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g) at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different types of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a priori knowledge of gravity. Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories. |
Davide Bottari; Matteo Valsecchi; Francesco Pavani Prominent reflexive eye-movement orienting associated with deafness Journal Article In: Cognitive Neuroscience, vol. 3, no. 1, pp. 8–13, 2012. @article{Bottari2012, Profound deafness affects orienting of visual attention. Until now, research focused exclusively on covert attentional orienting, neglecting whether overt oculomotor behavior may also change in deaf people. Here we used the pro- and anti-saccade task to examine the relative contribution of reflexive and voluntary eye-movement control in profoundly deaf and hearing individuals. We observed a behavioral facilitation in reflexive compared to voluntary eye movements, indexed by faster saccade latencies and smaller error rates in pro- than anti-saccade trials, which was substantially larger in deaf than hearing participants. This provides the first evidence of plastic changes related to deafness in overt oculomotor behavior, and constitutes an ecologically relevant parallel to the modulations attributed to deafness in covert attention orienting. Our findings also have implications for designers of real and virtual environments for deaf people and reveal that experiments on deaf visual abilities must not ignore the prominent reflexive eye-movement orienting in this sensory-deprived population. |
Jeffrey D. Bower; Zheng Bian; George J. Andersen Effects of retinal eccentricity and acuity on global-motion processing Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 942–949, 2012. @article{Bower2012, The present study assessed direction discrimination with moving random-dot cinematograms at retinal eccentricities of 0, 8, 22, and 40 deg. In addition, Landolt-C acuity was assessed at these eccentricities to determine whether changes in motion discrimination performance covaried with acuity in the retinal periphery. The results of the experiment indicated that discrimination thresholds increased with retinal eccentricity and directional variance (noise), independent of acuity. Psychophysical modeling indicated that the results for eccentricity and noise could be explained by an increase in channel bandwidth and an increase in internal multiplicative noise. |
Alison C. Bowling; Emily A. Hindman; James F. Donnelly Prosaccade errors in the antisaccade task: Differences between corrected and uncorrected errors and links to neuropsychological tests Journal Article In: Experimental Brain Research, vol. 216, no. 2, pp. 169–179, 2012. @article{Bowling2012, The relations among spatial memory, Stroop-like colour-word subtests, and errors on antisaccade and memory-guided saccadic eye-movement trials for older and younger adults were tested. Two types of errors in the antisaccade task were identified: short latency prosaccade errors that were immediately corrected and longer latency uncorrected prosaccade errors. The age groups did not differ on percentages of either corrected or uncorrected errors, but the latency and time to correct prosaccade errors were shorter for younger than older adults. Uncorrected prosaccade errors correlated significantly with spatial memory accuracy and errors on the colour-word subtests, but neither of these neuropsychological indices correlated with corrected prosaccade errors. These findings suggest that uncorrected prosaccade errors may be a result of cognitive factors involving a failure to maintain the goal of the antisaccade task in working memory. In contrast, corrected errors may be a consequence of a fixation system involving an initial failure to inhibit a reflexive prosaccade but with active goal maintenance enabling correction to take place. |
Miyoung Kwon; Chaithanya Ramachandra; PremNandhini Satgunam; Bartlett W. Mel; Eli Peli; Bosco S. Tjan Contour enhancement benefits older adults with simulated central field loss Journal Article In: Optometry and Vision Science, vol. 89, no. 9, pp. 1374–1384, 2012. @article{Kwon2012, PURPOSE: Age-related macular degeneration is the leading cause of vision loss among Americans aged >65 years. Currently, no effective treatment can reverse the central vision loss associated with most age-related macular degeneration. Digital image-processing techniques have been developed to improve image visibility for peripheral vision; however, both the selection and efficacy of such methods are limited. Progress has been difficult for two reasons: the exact nature of image enhancement that might benefit peripheral vision is not well understood, and efficient methods for testing such techniques have been elusive. The current study aims to develop both an effective image enhancement technique for peripheral vision and an efficient means for validating the technique. METHODS: We used a novel contour-detection algorithm to locate shape-defining edges in images based on natural-image statistics. We then enhanced the scene by locally boosting the luminance contrast along such contours. Using a gaze-contingent display, we simulated central visual field loss in normally sighted young (aged 18-30 years) and older adults (aged 58-88 years). Visual search performance was measured as a function of contour enhancement strength ["original" (unenhanced), "medium," and "high"]. For preference task, a separate group of subjects judged which image in a pair "would lead to better search performance." RESULTS: We found that although contour enhancement had no significant effect on search time and accuracy in young adults, Medium enhancement resulted in significantly shorter search time in older adults (about 13% reduction relative to original). Both age-groups preferred images with Medium enhancement over original (2-7 times). Furthermore, across age-groups, image content types, and enhancement strengths, there was a robust correlation between preference and performance. CONCLUSIONS: Our findings demonstrate a beneficial role of contour enhancement in peripheral vision for older adults. Our findings further suggest that task-specific preference judgments can be an efficient surrogate for performance testing. |
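The enhancement idea itself (boosting local luminance contrast only along shape-defining contours) can be sketched with generic building blocks. The snippet below uses a Canny edge detector and an unsharp-mask boost as stand-ins; it is not the authors' natural-image-statistics contour algorithm or gaze-contingent display code.

```python
# Illustrative contour-based contrast enhancement (generic stand-in algorithm).
import numpy as np
from scipy import ndimage
from skimage import data, feature, img_as_float

image = img_as_float(data.camera())              # example grayscale image in [0, 1]
edges = feature.canny(image, sigma=2.0)          # stand-in contour detector
edge_zone = ndimage.binary_dilation(edges, iterations=2)  # widen contours slightly

blurred = ndimage.gaussian_filter(image, sigma=2.0)
detail = image - blurred                         # local luminance contrast (unsharp detail)
strength = 1.5                                   # a "medium" vs "high" setting would vary this gain
enhanced = np.clip(image + strength * detail * edge_zone, 0.0, 1.0)
```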
Kaitlin E. W. Laidlaw; Evan F. Risko; Alan Kingstone A new look at social attention: Orienting to the eyes is not (entirely) under volitional control Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 5, pp. 1132–1143, 2012. @article{Laidlaw2012, People tend to look at other people's eyes, but whether this bias is automatic or volitional is unclear. To discriminate between these two possibilities, we used a "don't look" (DL) paradigm. Participants looked at a series of upright or inverted faces, and were asked either to freely view the faces or to avoid looking at the eyes, or as a control, the mouth. As previously demonstrated, participants showed a bias to attend to both eyes and mouths during free viewing. In the DL condition, participants told to avoid the eyes of upright faces were unable to fully suppress the tendency to fixate on the faces' eyes, whereas participants told to avoid the mouth of upright faces successfully eliminated their bias to overtly attend to that feature. When faces were inverted, participants were equally able to suppress looks to the eyes and mouth. Together, these results suggest that the tendency to look at the eyes reflects orienting that is both volitional and automatic, and that the engagement of holistic or configural face processing mechanisms during upright face viewing has an influence in guiding gaze automatically to the eyes. |
Monique Lamers; Wilbert Spooren Tracking referents in discourse Journal Article In: Dutch Journal of Applied Linguistics, vol. 1, no. 1, pp. 59–79, 2012. @article{Lamers2012, This reading study registered eye movements to investigate the influence of different discourse constructional factors on anaphor resolution in written discourse. More specifically, the study focused on how the interplay between the proximity of a possible referent to the anaphor and the amount of elaboration influences the time course of the different processes involved in anaphor resolution. Results at the anaphoric expression and the area immediately following the anaphoric expression reveal an effect of elaboration, but only in total reading times and second-pass reading times. No effects were found at the reinstated referent. These results indicate that the difference in saliency between two possible referents influences anaphor resolution almost directly. We discuss these findings in relation to the time course of different processes in anaphor resolution, such as bonding and resolution, in combination with a reading strategy in which readers are satisfied with a superficial interpretation. |
Elke B. Lange; Christian Starzynski; Ralf Engbert Capture of the gaze does not capture the mind Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 6, pp. 1168–1182, 2012. @article{Lange2012, Sudden visual changes attract our gaze, and related eye movement control requires attentional resources. Attention is a limited resource that is also involved in working memory, for instance in memory encoding. As a consequence, theory suggests that gaze capture could impair the buildup of memory representations due to an attentional resource bottleneck. Here we developed an experimental design combining a serial memory task (verbal or spatial) and concurrent gaze capture by a distractor (of high or low similarity to the relevant item). The results cannot be explained by a general resource bottleneck. Specifically, we observed that capture by the low-similar distractor resulted in delayed and reduced saccade rates to relevant items in both memory tasks. However, while spatial memory performance decreased, verbal memory remained unaffected. In contrast, the high-similar distractor led to capture and memory loss for both tasks. Our results lend support to the view that gaze capture leads to activation of irrelevant representations in working memory that compete for selection at recall. Activation of irrelevant spatial representations distracts spatial recall, whereas activation of irrelevant verbal features impairs verbal memory performance. |
Chao Hsuan Liu; Ovid J. L. Tzeng; Daisy L. Hung; Philip Tseng; Chi-Hung Juan Investigation of bistable perception with the "silhouette spinner": Sit still, spin the dancer with your will Journal Article In: Vision Research, vol. 60, pp. 34–39, 2012. @article{Liu2012, Many studies have used static and non-biologically related stimuli to investigate bistable perception and found that the percept is usually dominated by their intrinsic nature with some influence of voluntary control from the viewer. Here we used a dynamic stimulus of a rotating human body, the silhouette spinner illusion, to investigate how the viewers' intentions may affect their percepts. In two experiments, we manipulated observer intention (active or passive), fixation position (body or feet), and spinning velocity (fast, medium, or slow). Our results showed that the normalized alternating rate between two bistable percepts was greater when (1) participants actively attempted to switch percepts, (2) participants fixated at the spinner's feet rather than the body, inducing as many as 25 switches of the bistable percepts within 1 min, and (3) participants watched the spinner at high velocity. These results suggest that a dynamic biologically-bistable percept can be quickly alternated by the viewers' intention. Furthermore, the higher alternating rate in the feet condition compared to the body condition suggests a role for biological meaningfulness in determining bistable percepts, where 'biologically plausible' interpretations are favored by the visual system. |
Wei Liu; Chengkun Liu; Damin Zhuang; Zhong Qi Liu; Xiugan Yuan Comparison of expert and novice eye movement behaviors during landing flight Journal Article In: Advanced Materials Research, vol. 383-390, pp. 2556–2560, 2012. @article{Liu2012a, Objective: To study expert and novice eye movement patterns during simulated landing flight, in order to provide a reference for evaluating pilots' flight performance and training. Methods: The subjects were divided into two groups, expert and novice, according to their flight simulation experience. Eye movement data were recorded while they performed a landing task, and expert and novice flight performance data and eye movement data were compared. Results: Experts and novices differed not only in flight performance but also in eye movement patterns. Expert performance was better than novice performance. Experts had shorter fixation times, more fixation points, faster scan velocity, greater scan frequency, and a wider scan area than novices. The expert eye movement pattern was also associated with lower mental workload than the novice pattern. Conclusion: Flight performance is related to eye movement patterns, and an effective eye movement pattern is related to good flight performance. Analysis of eye movement indices can be used to evaluate pilots' flight performance and to provide a reference for flight training. |
Sam London; Christopher W. Bishop; Lee M. Miller Spatial attention modulates the precedence effect Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 6, pp. 1371–1379, 2012. @article{London2012, Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such interference. In a rapid phenomenon known as the precedence effect, reflections are perceptually fused with the veridical primary sound. The brain can also use spatial attention to highlight a target sound at the expense of distracters. Although attention has been shown to modulate many auditory perceptual phenomena, rarely does it alter how acoustic energy is first parsed into objects, as with the precedence effect. This brief report suggests that both endogenous (voluntary) and exogenous (stimulus-driven) spatial attention have a profound influence on the precedence effect depending on where they are oriented. Moreover, we observed that both types of attention could enhance perceptual fusion while only exogenous attention could hinder it. These results demonstrate that attention, by altering how auditory objects are formed, guides the basic perceptual organization of our acoustic environment. |
P. Christiaan Klink; Anna Oleksiak; Martin J. Lankheet; Richard J. A. Wezel Intermittent stimulus presentation stabilizes neuronal responses in macaque area MT Journal Article In: Journal of Neurophysiology, vol. 108, no. 8, pp. 2101–2114, 2012. @article{Klink2012, Repeated stimulation impacts neuronal responses. Here we show how response characteristics of sensory neurons in macaque visual cortex are influenced by the duration of the interruptions during intermittent stimulus presentation. Besides effects on response magnitude consistent with neuronal adaptation, the response variability was also systematically influenced. Spike rate variability in motion-sensitive area MT decreased when interruption durations were systematically increased from 250 to 2,000 ms. Activity fluctuations between subsequent trials and Fano factors over full response sequences were both lower with longer interruptions, while spike timing patterns became more regular. These variability changes partially depended on the response magnitude, but another significant effect that was uncorrelated with adaptation-induced changes in response magnitude was also present. Reduced response variability was furthermore accompanied by changes in spike-field coherence, pointing to the possibility that reduced spiking variability results from interactions in the local cortical network. While neuronal response stabilization may be a general effect of repeated sensory stimulation, we discuss its potential link with the phenomenon of perceptual stabilization of ambiguous stimuli as a result of interrupted presentation. |
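For readers unfamiliar with the trial-to-trial variability measure mentioned in the abstract above, the following minimal sketch shows the standard Fano factor computation (variance divided by mean of spike counts across repeated trials). The spike-count values are invented for illustration and are not data from the study.

```python
# Fano factor across repeated trials; a lower value indicates more stable responses.
import numpy as np

def fano_factor(spike_counts):
    """Fano factor = variance / mean of spike counts across repeated trials."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical spike counts at two interruption durations (illustrative only).
counts_250ms = [12, 18, 9, 15, 21, 11]
counts_2000ms = [14, 15, 13, 16, 14, 15]
print(fano_factor(counts_250ms), fano_factor(counts_2000ms))
```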
Lisa Kloft; Benedikt Reuter; Jayalakshmi Viswanathan; Norbert Kathmann; Jason J. S. Barton Response selection in prosaccades, antisaccades, and other volitional saccades Journal Article In: Experimental Brain Research, vol. 222, pp. 345–353, 2012. @article{Kloft2012, Saccades made to the opposite side of a visual stimulus (antisaccades) and to central cues (simple volitional saccades) both require active response selection but whether the mechanisms of response selection differ between these tasks is unclear. Response selection can be assessed by increasing the number of response alternatives: this leads to increased reaction times when response selection is more demanding. We compared the reaction times of prosaccades, antisaccades, saccades cued by a central arrow, and saccades cued by a central number, in blocks of either two or six possible responses. In the two-response blocks, reaction times were fastest for prosaccades and antisaccades, and slowest for arrow-cued and number-cued saccades. Increasing response alternatives from two to six caused a paradoxical reduction in reaction times of prosaccades, had no effect on arrow-cued saccades, and led to a large increase in reaction times of number-cued saccades. For antisaccade reaction times, the effect of increasing response alternatives was intermediate, greater than that for arrow-cued saccades but less than that for number-cued saccades. We suggest that this pattern of results may reflect two components of saccadic processing: (a) response triggering, which is more rapid with a peripheral stimulus as in the prosaccade and antisaccade tasks and (b) response selection, which is more demanding for the antisaccade and number-cued saccade tasks, and more automatic when there is direct stimulus-response mapping as with prosaccades, or over-learned symbols as with arrow-cued saccades. |
Pia Knoeferle; Helene Kreysa Can speaker gaze modulate syntactic structuring and thematic role assignment during spoken sentence comprehension? Journal Article In: Frontiers in Psychology, vol. 3, pp. 538, 2012. @article{Knoeferle2012, During comprehension, a listener can rapidly follow a frontally seated speaker's gaze to an object before its mention, a behavior which can shorten latencies in speeded sentence verification. However, the robustness of gaze-following, its interaction with core comprehension processes such as syntactic structuring, and the persistence of its effects are unclear. In two "visual-world" eye-tracking experiments participants watched a video of a speaker, seated at an angle, describing transitive (non-depicted) actions between two of three Second Life characters on a computer screen. Sentences were in German and had either subject(NP1)-verb-object(NP2) or object(NP1)-verb-subject(NP2) structure; the speaker either shifted gaze to the NP2 character or was obscured. Several seconds later, participants verified either the sentence referents or their role relations. When participants had seen the speaker's gaze shift, they anticipated the NP2 character before its mention and earlier than when the speaker was obscured. This effect was more pronounced for SVO than OVS sentences in both tasks. Interactions of speaker gaze and sentence structure were more pervasive in role-relations verification: participants verified the role relations faster for SVO than OVS sentences, and faster when they had seen the speaker shift gaze than when the speaker was obscured. When sentence and template role-relations matched, gaze-following even eliminated the SVO-OVS response-time differences. Thus, gaze-following is robust even when the speaker is seated at an angle to the listener; it varies depending on the syntactic structure and thematic role relations conveyed by a sentence; and its effects can extend to delayed post-sentence comprehension processes. These results suggest that speaker gaze effects contribute pervasively to visual attention and comprehension processes and should thus be accommodated by accounts of situated language comprehension. |
Makoto Kobayashi; Atsuhiko Sugiyama In: Internal Medicine, vol. 51, no. 15, pp. 2025–2029, 2012. @article{Kobayashi2012, A 72-year-old man presented with dizziness and left hand muscle atrophy. Magnetic resonance imaging revealed a spinal cord cavity and descent of the cerebellar tonsils. His diagnosis was Chiari I malformation with syringomyelia. No cerebellar signs were observed on physical examination. The cause of dizziness was investigated using a video-based eye movement tracker, which revealed a downward smooth pursuit velocity gain significantly below normal when expressed relative to the horizontal pursuit velocity gain. Vestibulocerebellar damage can cause mild downward pursuit deficit. The downward to horizontal smooth pursuit velocity gain ratio may be a more sensitive means of detecting vestibulocerebellar damage early. |
Ellen M. Kok; Anique B. H. Bruin; Simon G. F. Robben; Jeroen J. G. Merriënboer Looking in the same manner but seeing it differently: Bottom-up and expertise effects in radiology Journal Article In: Applied Cognitive Psychology, vol. 26, no. 6, pp. 854–862, 2012. @article{Kok2012, Models of expertise differences in radiology often do not take into account visual differences between diseases. This study investigates the bottom-up effects of three types of images on viewing patterns of students, residents and radiologists: focal diseases (localized abnormality), diffuse diseases (distributed abnormality) and images showing no abnormalities (normal). Participants inspected conventional chest radiographs while their eye movements were recorded. Regardless of expertise, in focal diseases, participants fixated relatively long at specific locations, whereas in diffuse diseases, fixations were more dispersed and shorter. Moreover, for students, dispersion of fixations was higher on diffuse compared with normal images, whereas for residents and radiologists, dispersion was highest on normal images. Despite this difference, students showed relatively high performance on normal images but low performance on focal and diffuse images. Viewing patterns were strongly influenced by bottom-up stimulus effects. Although viewing behavior of students was similar to that of radiologists, they lack knowledge that helps them diagnose the disease correctly. |
Mark Rose Lewis; Michael C. Mensink Prereading questions and online text comprehension Journal Article In: Discourse Processes, vol. 49, no. 5, pp. 367–390, 2012. @article{Lewis2012, Prereading questions can be an effective tool for directing students' learning. However, it is not always clear what the online effects of a set of prereading questions will be. In two experiments, this study investigated whether readers direct additional attention to and learn more from sentences that are potentially relevant to a set of prereading questions. Eye-tracking data indicated that participants directed additional attention (as indicated by first-pass reinspection and lookback duration) to sentences that were potentially relevant to the prereading questions they had received. Participants also learned more information from these sentences (as indicated by free recall rates). Judgment data suggested that the featural similarity (both lexical and semantic) of a sentence to a prereading question can be a strong indicator that a sentence will be deemed potentially relevant by readers. Results are discussed with respect to an account of instructional effects in which featural similarity drives early attentional allocation through memory-based processes. |
Xingshan Li; Wenchan Zhao; Alexander Pollatsek Dividing lines at the word boundary position helps reading in Chinese Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 5, pp. 929–934, 2012. @article{Li2012, Unlike in English, the Chinese printing and writing systems usually do not respect a word boundary when they split lines; thus, characters belonging to a word can be on two different lines. In this study, we examined whether dividing a word across two lines interferes with Chinese reading and found that reading times were shorter when characters belonging to a word were on a single line rather than on adjacent lines. Eye movement data indicated that gaze durations in a region around the word boundary were longer and fixations were closer to the beginnings and ends of the lines when words were split across lines. These results suggest that words are processed as a whole in Chinese reading, so that word boundaries should be respected when deciding how to split lines in the Chinese writing system. They also suggest that the length of return sweeps in reading can be cognitively guided. |
Wei-Kuang Liang; Chi-Hung Juan Modulation of motor control in saccadic behaviors by TMS over the posterior parietal cortex Journal Article In: Journal of Neurophysiology, vol. 108, no. 3, pp. 741–752, 2012. @article{Liang2012, The right posterior parietal cortex (rPPC) has been found to be critical in shaping visual selection and distractor-induced saccade curvature in the context of predictive as well as nonpredictive visual cues by means of transcranial magnetic stimulation (TMS) interference. However, the dynamic details of how distractor-induced saccade curvatures are affected by rPPC TMS have not yet been investigated. This study aimed to elucidate the key dynamic properties that cause saccades to curve away from distractors with different degrees of curvature in various TMS and target predictability conditions. Stochastic optimal feedback control theory was used to model the dynamics of the TMS saccade data. This allowed estimation of torques, which was used to identify the critical dynamic mechanisms producing saccade curvature. The critical mechanisms of distractor-induced saccade curvatures were found to be the motor commands and torques in the transverse direction. When an unpredictable saccade target occurred with rPPC TMS, there was an initial period of greater distractor-induced torque toward the side opposite the distractor in the transverse direction, immediately followed by a relatively long period of recovery torque that brought the deviated trace back toward the target. The results imply that the mechanisms of distractor-induced saccade curvature may be comprised of two mechanisms: the first causing the initial deviation and the second bringing the deviated trace back toward the target. The pattern of the initial torque in the transverse direction revealed the former mechanism. Conversely, the later mechanism could be well explained as a consequence of the control policy in this model. To summarize, rPPC TMS increased the initial torque away from the distractor as well as the recovery torque toward the target. |
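As background for the modelling approach named in the abstract above, here is a generic discrete-time formulation of stochastic optimal feedback control in the linear-quadratic-Gaussian tradition commonly used for eye and limb movements. This is a sketch of the framework only; the state vector, noise terms, and cost weights are placeholders, not the specific parameterization fitted by the authors.

```latex
\begin{aligned}
\mathbf{x}_{t+1} &= A\,\mathbf{x}_t + B\,\mathbf{u}_t + \boldsymbol{\xi}_t,
 & \mathbf{y}_t &= H\,\mathbf{x}_t + \boldsymbol{\omega}_t,\\
\hat{\mathbf{x}}_{t+1} &= A\,\hat{\mathbf{x}}_t + B\,\mathbf{u}_t
 + K_t\,(\mathbf{y}_t - H\,\hat{\mathbf{x}}_t),
 & \mathbf{u}_t &= -L_t\,\hat{\mathbf{x}}_t,\\
J &= \mathbb{E}\!\left[\sum_{t=1}^{T}\mathbf{x}_t^{\top} Q_t\,\mathbf{x}_t
 + \mathbf{u}_t^{\top} R\,\mathbf{u}_t\right].
\end{aligned}
```

Here the feedback gains L_t and estimator gains K_t are derived from the dynamics (A, B, H), the noise covariances, and the cost weights (Q_t, R); torque-like quantities of the kind discussed above can then be read out of the estimated motor commands u_t.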
Sue Lord; Neil Archibald; Urs P. Mosimann; David J. Burn; Lynn Rochester Dorsal rather than ventral visual pathways discriminate freezing status in Parkinson's disease Journal Article In: Parkinsonism and Related Disorders, vol. 18, no. 10, pp. 1094–1096, 2012. @article{Lord2012, Background: Although visuospatial deficits have been linked with freezing of gait (FOG) in Parkinson's disease (PD), the specific effects of dorsal and ventral visual pathway dysfunction on FOG are not well understood. Method: We assessed visuospatial function in FOG using an angle discrimination test (dorsal visual pathway bias) and an overlapping figure test (ventral visual pathway bias), and recorded overall response time, mean fixation duration, and dwell time. Covariate analysis was conducted controlling for disease duration, motor severity, contrast sensitivity, and attention, with Bonferroni adjustments for multiple comparisons. Results: Twenty-seven people with FOG, 27 people without FOG, and 24 controls were assessed. Average fixation duration during angle discrimination distinguished freezing status [F(1, 43) = 4.77, p < 0.05] (one-way ANCOVA). Conclusion: Results indicate a preferential dysfunction of dorsal occipito-parietal pathways in FOG, independent of disease severity, attentional deficit, and contrast sensitivity. |
Jean Lorenceau Cursive writing with smooth pursuit eye movements Journal Article In: Current Biology, vol. 22, no. 16, pp. 1506–1509, 2012. @article{Lorenceau2012, The eyes never cease to move: ballistic saccades quickly turn the gaze toward peripheral targets, whereas smooth pursuit maintains moving targets on the fovea where visual acuity is best. Despite the oculomotor system being endowed with exquisite motor abilities, any attempt to generate smooth eye movements against a static background results in saccadic eye movements [1, 2]. Although exceptions to this rule have been reported [3-5], volitional control over smooth eye movements is at best rudimentary. Here, I introduce a novel, temporally modulated visual display, which, although static, sustains smooth eye movements in arbitrary directions. After brief training, participants gain volitional control over smooth pursuit eye movements and can generate digits, letters, words, or drawings at will. For persons deprived of limb movement, this offers a fast, creative, and personal means of linguistic and emotional expression. |
Matthew W. Lowder; Peter C. Gordon The pistol that injured the cowboy: Difficulty with inanimate subject-verb integration is reduced by structural separation Journal Article In: Journal of Memory and Language, vol. 66, no. 4, pp. 819–832, 2012. @article{Lowder2012, Previous work has suggested that the difficulty normally associated with processing an object-extracted relative clause (ORC) compared to a subject-extracted relative clause (SRC) is increased when the head noun phrase (NP1) is animate and the embedded noun phrase (NP2) is inanimate, compared to the reverse animacy configuration. Two eye-tracking experiments were conducted to determine whether the apparent effects of NP animacy on the ORC-SRC asymmetry reflect distinct processes of interpretation that operate at NP2 and NP1. Experiment 1 revealed a localized difficulty interpreting the embedded action verb when the preceding NP2 was inanimate as compared to animate, but this difficulty in subject-verb integration did not extend to the broader region of words in the RC and matrix verb where difficulty was observed in processing ORCs as compared to SRCs. Experiment 2 demonstrated that the difficulty associated with integrating an inanimate NP with an action verb is reduced when the two appear in separate clauses, as in the case of an SRC. |
Casimir J. H. Ludwig; Simon Farrell; Lucy A. Ellis; Tom E. Hardwicke; Iain D. Gilchrist Context-gated statistical learning and its role in visual-saccadic decisions Journal Article In: Journal of Experimental Psychology: General, vol. 141, no. 1, pp. 150–169, 2012. @article{Ludwig2012, Adaptive behavior in a nonstationary world requires humans to learn and track the statistics of the environment. We examined the mechanisms of adaptation in a nonstationary environment in the context of visual-saccadic inhibition of return (IOR). IOR is adapted to the likelihood that return locations will be refixated in the near future. We examined 2 potential learning mechanisms underlying adaptation: (a) a local tracking or priming mechanism that facilitates behavior that is consistent with recent experience and (b) a mechanism that supports retrieval of knowledge of the environmental statistics based on the contextual features of the environment. Participants generated sequences of 2 saccadic eye movements in conditions where the probability that the 2nd saccade was directed back to the previously fixated location varied from low (.17) to high (.50). In some conditions, the contingency was signaled by a contextual cue (the shape of the movement cue). Adaptation occurred in the absence of contextual signals but was more pronounced in the presence of contextual cues. Adaptation even occurred when different contingencies were randomly intermixed, showing the parallel formation of multiple associations between context and statistics. These findings are accounted for by an evidence accumulation framework in which the resting baseline of decision alternatives is adjusted on a trial-by-trial basis. This baseline tracks the subjective prior beliefs about the behavioral relevance of the different alternatives and is updated on the basis of the history of recent events and the contextual features of the current environment. |
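The "evidence accumulation framework in which the resting baseline of decision alternatives is adjusted on a trial-by-trial basis" described in the abstract above can be sketched as a simple race model whose starting points are updated after each trial, separately for each contextual cue. Everything below (accumulator form, parameter values, delta-rule update, context labels) is an illustrative assumption, not the authors' fitted model.

```python
# Sketch: two racing accumulators with context-specific resting baselines that
# are nudged toward recently experienced outcomes on every trial.
import numpy as np

rng = np.random.default_rng(0)

def run_trial(baseline, drift=0.15, noise=0.3, threshold=1.0, max_steps=500):
    """Race two accumulators (e.g., return vs. non-return saccade) to threshold."""
    x = baseline.copy()
    for t in range(1, max_steps + 1):
        x += drift + rng.normal(0.0, noise, size=2)   # noisy evidence accumulation
        if (x >= threshold).any():
            break
    return int(np.argmax(x)), t                        # chosen alternative, latency

# One pair of resting baselines per contextual cue (values are arbitrary).
baselines = {"low_return_prob": np.array([0.30, 0.10]),
             "high_return_prob": np.array([0.20, 0.20])}
LEARNING_RATE = 0.05

def update_baseline(context, outcome, asymptote=0.4):
    """Delta-rule update: raise the baseline of the alternative that just occurred."""
    target = np.zeros(2)
    target[outcome] = asymptote
    baselines[context] += LEARNING_RATE * (target - baselines[context])

choice, latency = run_trial(baselines["high_return_prob"])
update_baseline("high_return_prob", outcome=choice)
```

The key point carried over from the abstract is that the baseline, not the accumulation rate, tracks the history of recent events and the current contextual features.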
André Krügel; Françoise Vitu; Ralf Engbert Fixation positions after skipping saccades: A single space makes a large difference Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 8, pp. 1556–1561, 2012. @article{Kruegel2012, During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krügel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading. |
Christophe C. Le Dantec; Elizabeth E. Melton; Aaron R. Seitz A triple dissociation between learning of target, distractors, and spatial contexts Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–12, 2012. @article{LeDantec2012a, When we perform any task, we engage a diverse set of processes. These processes can be optimized with learning. While there exists substantial research that probes specific aspects of learning, there is a scarcity of research regarding interactions between different types of learning. Here, we investigate possible interactions between Perceptual Learning (PL) and Contextual Learning (CL), two types of implicit learning that have garnered much attention in the psychological sciences and that often co-occur in natural settings. PL increases sensitivity to features of task targets and distractors and is thought to involve improvements in low-level perceptual processing. CL regards learning of regularities in the environment (such as spatial relations between objects) and is consistent with improvements in higher level perceptual processes. Surprisingly, we found CL, PL for target features, and PL for distractor features to be independent. This triple dissociation demonstrates how different learning processes may operate in parallel as tasks are mastered. |
Christophe C. Le Dantec; Aaron R. Seitz High resolution, high capacity, spatial specificity in perceptual learning Journal Article In: Frontiers in Psychology, vol. 3, pp. 222, 2012. @article{LeDantec2012, Research of perceptual learning has received significant interest due to findings that training on perceptual tasks can yield learning effects that are specific to the stimulus features of that task. However, recent studies have demonstrated that while training a single stimulus at a single location can yield a high-degree of stimulus specificity, training multiple features, or at multiple locations can reveal a broad transfer of learning to untrained features or stimulus locations. We devised a high resolution, high capacity, perceptual learning procedure with the goal of testing whether spatial specificity can be found in cases where observers are highly trained to discriminate stimuli in many different locations in the visual field. We found a surprising degree of location specific learning, where performance was significantly better when target stimuli were presented at 1 of the 24 trained locations compared to when they were placed in 1 of the 12 untrained locations. This result is particularly impressive given that untrained locations were within a couple degrees of visual angle of those that were trained. Given the large number of trained locations, the fact that the trained and untrained locations were interspersed, and the high-degree of spatial precision of the learning, we suggest that these results are difficult to account for using attention or decision strategies and instead suggest that learning may have taken place for each location separately in retinotopically organized visual cortex. |
Anders Ledberg; Anna Montagnini; Richard Coppola; Steven L. Bressler Reduced variability of ongoing and evoked cortical activity leads to improved behavioral performance Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e43166, 2012. @article{Ledberg2012, Sensory responses of the brain are known to be highly variable, but the origin and functional relevance of this variability have long remained enigmatic. Using the variable foreperiod of a visual discrimination task to assess variability in the primate cerebral cortex, we report that visual evoked response variability is not only tied to variability in ongoing cortical activity, but also predicts mean response time. We used cortical local field potentials, simultaneously recorded from widespread cortical areas, to gauge both ongoing and visually evoked activity. Trial-to-trial variability of sensory evoked responses was strongly modulated by foreperiod duration and correlated both with the cortical variability before stimulus onset as well as with response times. In a separate set of experiments we probed the relation between small saccadic eye movements, foreperiod duration and manual response times. The rate of eye movements was modulated by foreperiod duration and eye position variability was positively correlated with response times. Our results indicate that when the time of a sensory stimulus is predictable, reduction in cortical variability before the stimulus can improve normal behavioral function that depends on the stimulus. |
Kyoung-Min Lee; Kyung-Ha Ahn; Edward L. Keller Saccade generation by the frontal eye fields in Rhesus monkeys is separable from visual detection and bottom-up attention shift Journal Article In: PLoS ONE, vol. 7, no. 6, pp. e39886, 2012. @article{Lee2012a, The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area's role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area's functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory. |
R. J. Lee; H. E. Smithson Context-dependent judgments of color that might allow color constancy in scenes with multiple regions of illumination Journal Article In: Journal of the Optical Society of America A, vol. 29, no. 2, pp. A247–A257, 2012. @article{Lee2012, For a color-constant observer, a change in the spectral composition of the illumination is accompanied by a corresponding change in the chromaticity associated with an achromatic percept. However, maintaining color constancy for different regions of illumination within a scene implies the maintenance of multiple perceptual references. We investigated the features of a scene that enable the maintenance of separate perceptual references for two displaced but overlapping chromaticity distributions. The time-averaged, retinotopically localized stimulus was the primary determinant of color appearance judgments. However, spatial separation of test samples additionally served as a symbolic cue that allowed observers to maintain two separate perceptual references. |
Elmar H. Pinkhardt; Reinhart Jurgens; Dorothée Lulé; Johanna Heimrath; Albert C. Ludolph; Wolfgang Becker; Jan Kassubek Eye movement impairments in Parkinson's disease: Possible role of extradopaminergic mechanisms Journal Article In: BMC Neurology, vol. 12, no. 5, pp. 1–8, 2012. @article{Pinkhardt2012, Background: The basal ganglia (BG) are thought to play an important role in the control of eye movements. Accordingly, the broad variety of subtle oculomotor alterations that has been described in Parkinson's disease (PD) is generally attributed to dysfunction of the BG dopaminergic system. However, the present study suggests that dopamine substitution is much less effective in improving oculomotor performance than it is in restoring skeletomotor abilities. Methods: We investigated reactive, visually guided saccades (RS), smooth pursuit eye movements (SPEM), and rapidly left-right alternating voluntary gaze shifts (AVGS) by video-oculography in 34 PD patients receiving oral dopaminergic medication (PD-DA), 14 patients with deep brain stimulation of the nucleus subthalamicus (DBS-STN), and 23 control subjects (CTL); in addition, we performed a thorough review of the recent literature on therapeutic effects on oculomotor performance in PD. By switching deep brain stimulation off and on in the DBS-STN patients, we achieved swift changes between their therapeutic states without the delays of dopamine withdrawal. Participants also underwent neuropsychological testing. Results: Patients exhibited the well-known deficits such as increased saccade latency, reduced SPEM gain, and reduced frequency and amplitude of AVGS. Across patients, none of the investigated oculomotor parameters correlated with UPDRS III, whereas there was a negative correlation between SPEM gain and susceptibility to interference (Stroop score). Of the observed deficiencies, DBS-STN slightly improved AVGS frequency but improved neither AVGS amplitude nor SPEM or RS performance. Conclusions: We conclude that the impairment of SPEM in PD results from a cortical, conceivably non-dopaminergic dysfunction, whereas patients' difficulty in rapidly executing AVGS might be related to their BG dysfunction. |
Ebrahim Pishyareh; Mehdi Tehrani-Doost; Javad Mahmoudi-Gharaei; Anahita Khorrami; Mitra Joudi; Mehrnoosh Ahmadi Attentional bias towards emotional scenes in boys with attention deficit hyperactivity disorder Journal Article In: Iranian Journal of Psychiatry, vol. 7, no. 2, pp. 93–96, 2012. @article{Pishyareh2012, OBJECTIVE: Children with attention-deficit/hyperactivity disorder (ADHD) react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate the visual orientation of children with ADHD towards paired emotional scenes. METHOD: Thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented with paired emotional and neutral scenes in the following four categories: pleasant-neutral; pleasant-unpleasant; unpleasant-neutral; and neutral-neutral. Meanwhile, their visual orientations towards these pictures were evaluated using an eye tracking system. The number and duration of first fixation and the duration of first gaze were compared between the two groups using MANOVA. The performance of each group in the different categories was also analyzed using the Friedman test. RESULTS: With regard to duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, ADHD children spent less time on pleasant pictures compared to the normal group while they were looking at pleasant-neutral and unpleasant-pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were looking at unpleasant-neutral pairs (P<0.01). CONCLUSION: Based on the findings of this study, it could be concluded that children with ADHD attend to unpleasant conditions more than normal children do, which leads to their emotional reactivity. |
Irina Pivneva; Caroline Palmer; Debra Titone Inhibitory control and L2 proficiency modulate bilingual language production: Evidence from spontaneous monologue and dialogue speech Journal Article In: Frontiers in Psychology, vol. 3, pp. 57, 2012. @article{Pivneva2012, Bilingual language production requires that speakers recruit inhibitory control (IC) to optimally balance the activation of more than one linguistic system when they produce speech. Moreover, the amount of IC necessary to maintain an optimal balance is likely to vary across individuals as a function of second language (L2) proficiency and inhibitory capacity, as well as the demands of a particular communicative situation. Here, we investigate how these factors relate to bilingual language production across monologue and dialogue spontaneous speech. In these tasks, 42 English–French and French–English bilinguals produced spontaneous speech in their first language (L1) and their L2, with and without a conversational partner. Participants also completed a separate battery that assessed L2 proficiency and inhibitory capacity. The results showed that L2 vs. L1 production was generally more effortful, as was dialogue vs. monologue speech production although the clarity of what was produced was higher for dialogues vs. monologues. As well, language production effort significantly varied as a function of individual differences in L2 proficiency and inhibitory capacity. Taken together, the overall pattern of findings suggests that both increased L2 proficiency and inhibitory capacity relate to efficient language production during spontaneous monologue and dialogue speech. |
Michael Plöchl; José P. Ossandón; Peter König Combining EEG and eye tracking: Identification, characterization, and correction of eye movement artifacts in electroencephalographic data Journal Article In: Frontiers in Human Neuroscience, vol. 6, pp. 278, 2012. @article{Ploechl2012, Eye movements introduce large artifacts to electroencephalographic recordings (EEG) and thus render data analysis difficult or even impossible. Trials contaminated by eye movement and blink artifacts have to be discarded, hence in standard EEG paradigms subjects are required to fixate on the screen. To overcome this restriction, several correction methods including regression and blind source separation have been proposed. Yet, no automated standard procedure has been established. By simultaneously recording eye movements and 64-channel EEG during a guided eye movement paradigm, we investigate and review the properties of eye movement artifacts, including corneo-retinal dipole changes, saccadic spike potentials, and eyelid artifacts, and study their interrelations during different types of eye and eyelid movements. In concordance with earlier studies, our results confirm that these artifacts arise from different independent sources and that, depending on electrode site, gaze direction, and choice of reference, these sources contribute differently to the measured signal. We assess the respective implications for artifact correction methods and therefore compare the performance of two prominent approaches, namely linear regression and independent component analysis (ICA). We show and discuss that due to the independence of eye artifact sources, regression-based correction methods inevitably over- or under-correct individual artifact components, while ICA is in principle suited to address such mixtures of different types of artifacts. Finally, we propose an algorithm which uses eye tracker information to objectively identify eye-artifact-related ICA components (ICs) in an automated manner. In the data presented here, the algorithm performed very similarly to human experts when the experts were given both the topographies of the ICs and their respective activations in a large number of trials. Moreover, it performed more reliably and was almost twice as effective as human experts when they had to base their decision on IC topographies only. Furthermore, a receiver operating characteristic (ROC) analysis demonstrated an optimal balance of false positives and false negatives at an area under the curve (AUC) of more than 0.99. Removing the automatically detected ICs from the data resulted in removal or substantial suppression of ocular artifacts, including microsaccadic spike potentials, while the relevant neural signal remained unaffected. In conclusion, the present work aims at a better understanding of individual eye movement artifacts, their interrelations, and the respective implications for eye artifact correction. Additionally, the proposed ICA procedure provides a tool for optimized detection and correction of eye movement-related artifact components. |
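The core idea of the procedure described above (use eye-tracker-defined saccade and fixation intervals to flag ICA components whose activity is tied to eye movements, then remove them) can be sketched in a few lines. This is an assumed re-implementation of the general approach, not the authors' published code; in particular, the variance-ratio criterion and the 1.1 threshold below should be treated as illustrative, tunable choices.

```python
# Sketch: flag ICA components that are systematically more active during
# eye-tracker-defined saccade intervals than during fixation intervals,
# then back-project the EEG with those components removed.
import numpy as np

def flag_ocular_components(ic_activations, saccade_mask, fixation_mask, ratio=1.1):
    """ic_activations: (n_components, n_samples) ICA source time courses.
    saccade_mask, fixation_mask: boolean arrays (n_samples,) from the eye tracker.
    Returns indices of components with saccade/fixation variance ratio > `ratio`."""
    var_sacc = ic_activations[:, saccade_mask].var(axis=1)
    var_fix = ic_activations[:, fixation_mask].var(axis=1)
    return np.where(var_sacc / var_fix > ratio)[0]

def remove_components(eeg, unmixing, mixing, bad_ics):
    """Back-project EEG (n_channels, n_samples) with the flagged components zeroed."""
    sources = unmixing @ eeg        # unmixing: (n_components, n_channels)
    sources[bad_ics, :] = 0.0
    return mixing @ sources         # mixing: (n_channels, n_components)
```

In practice the interval masks would come from the eye tracker's saccade and fixation event detection, and the unmixing/mixing matrices from whichever ICA implementation was applied to the EEG.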
Patrick Plummer; Keith Rayner Effects of parafoveal word length and orthographic features on initial fixation landing positions in reading Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 950–963, 2012. @article{Plummer2012, Previous research has demonstrated that readers use word length and word boundary information in targeting saccades into upcoming words while reading. Previous studies have also revealed that the initial landing positions for fixations on words are affected by parafoveal processing. In the present study, we examined the effects of word length and orthographic legality on targeting saccades into parafoveal words. Long (8-9 letters) and short (4-5 letters) target words, which were matched on lexical frequency and initial letter trigram, were paired and embedded into identical sentence frames. The gaze-contingent boundary paradigm (Rayner, 1975) was used to manipulate the parafoveal information available to the reader before direct fixation on the target word. The parafoveal preview was either identical to the target word or was a visually similar nonword. The nonword previews contained orthographically legal or orthographically illegal initial letters. The results showed that orthographic preprocessing of the word to the right of fixation affected eye movement targeting, regardless of word length. Additionally, the lexical status of an upcoming saccade target in the parafovea generally did not influence preprocessing. |
Hideyuki Matsumoto; Yasuo Terao; Toshiaki Furubayashi; Akihiro Yugeta; Hideki Fukuda; Masaki Emoto; Ritsuko Hanajima; Yoshikazu Ugawa Basal ganglia dysfunction reduces saccade amplitude during visual scanning in Parkinson's disease Journal Article In: Basal Ganglia, vol. 2, no. 2, pp. 73–78, 2012. @article{Matsumoto2012, For Parkinson's disease (PD), we have reported that small saccades restrict visual scanning (Matsumoto H, Terao Y, Furubayashi T et al. Small saccades restrict visual scanning area in Parkinson's disease. Mov Disord 2011;26:1619-26), possibly resulting in disturbances of visual attention in PD. However, it remains unknown why saccade amplitude is reduced during visual scanning. The aim of this paper is to study whether the small saccade amplitude during visual scanning results from basal ganglia (BG) dysfunction. This study examined 18 PD patients. Saccade amplitude during the viewing of visual images was recorded, as were saccade amplitudes in two oculomotor tasks: visually guided saccades (VGS) and memory-guided saccades (MGS). We analyzed the correlation between the saccade amplitude during visual scanning and the saccade amplitude during the oculomotor tasks. The saccade amplitude during visual scanning was reduced compared with that of normal subjects. Similarly, the saccade amplitudes in both VGS and MGS were also reduced. However, the saccade amplitude during visual scanning always correlated with MGS amplitude, whereas it was hardly related to VGS amplitude. Our results indicate that BG dysfunction might reduce the saccade amplitude during visual scanning in this disorder. |
Justin T. Maxfield; Gregory J. Zelinsky Searching through the hierarchy: How level of target categorization affects visual search Journal Article In: Visual Cognition, vol. 20, no. 10, pp. 1153–1163, 2012. @article{Maxfield2012, Does the same basic-level advantage commonly observed in the categorization literature also hold for targets in a search task? We answered this question by first conducting a category verification task to define a set of categories showing a standard basic-level advantage, which we then used as stimuli in a search experiment. Participants were cued with a picture preview of the target or its category name at either superordinate, basic, or subordinate levels, then shown a target-present/absent search display. Although search guidance and target verification were best with pictorial cues, the effectiveness of the categorical cues depended on the hierarchical level. Search guidance was best for the specific subordinate-level cues, whereas target verification showed a standard basic-level advantage. These findings demonstrate different hierarchical advantages for guidance and verification in categorical search. We interpret these results as evidence for a common target representation underlying categorical search guidance and verification. |
Michael B. McCamy; Jorge Otero-Millan; Stephen L. Macknik; Yan Yang; Xoana G. Troncoso; Steven M. Baer; Sharon M. Crook; Susana Martinez-Conde Microsaccadic efficacy and contribution to foveal and peripheral vision Journal Article In: Journal of Neuroscience, vol. 32, no. 27, pp. 9194–9204, 2012. @article{McCamy2012, Our eyes move constantly, even when we try to fixate our gaze. Fixational eye movements prevent and restore visual loss during fixation, yet the relative impact of each type of fixational eye movement remains controversial. For over five decades, the debate has focused on microsaccades, the fastest and largest fixational eye movements. Some recent studies have concluded that microsaccades counteract visual fading during fixation. Other studies have disputed this idea, contending that microsaccades play no significant role in vision. The disagreement stems from the lack of methods to determine the precise effects of microsaccades on vision versus those of other eye movements, as well as a lack of evidence that microsaccades are relevant to foveal vision. Here we developed a novel generalized method to determine the precise quantified contribution and efficacy of human microsaccades to restoring visibility compared with other eye movements. Our results indicate that microsaccades are the greatest eye movement contributor to the restoration of both foveal and peripheral vision during fixation. Our method to calculate the efficacy and contribution of microsaccades to perception can determine the strength of connection between any two physiological and/or perceptual events, providing a novel and powerful estimate of causal influence; thus, we anticipate wide-ranging applications in neuroscience and beyond. |
Rachel McDonnell; Martin Breidt; Heinrich H. Bülthoff Render me real? Investigating the effect of render style on the perception of animated virtual humans Journal Article In: ACM Transactions on Graphics, vol. 31, no. 4, pp. 1–11, 2012. @article{McDonnell2012, The realistic depiction of lifelike virtual humans has been the goal of many movie makers in the last decade. Recently, films such as Tron: Legacy and The Curious Case of Benjamin Button have produced highly realistic characters. In the real-time domain, there is also a need to deliver realistic virtual characters, with the increase in popularity of interactive drama video games (such as L.A. Noire™ or Heavy Rain™). There have been mixed reactions from audiences to lifelike characters used in movies and games, with some saying that the increased realism highlights subtle imperfections, which can be disturbing. Some developers opt for a stylized rendering (such as cartoon-shading) to avoid a negative reaction [Thompson 2004]. In this paper, we investigate some of the consequences of choosing realistic or stylized rendering in order to provide guidelines for developers for creating appealing virtual characters. We conducted a series of psychophysical experiments to determine whether render style affects how virtual humans are perceived. Motion capture with synchronized eye-tracked data was used throughout to animate custom-made virtual model replicas of the captured actors. |
Robert D. McIntosh; Antimo Buonocore Dissociated effects of distractors on saccades and manual aiming Journal Article In: Experimental Brain Research, vol. 220, no. 3-4, pp. 201–211, 2012. @article{McIntosh2012, The remote distractor effect (RDE) is a robust phenomenon whereby target-directed saccades are delayed by the appearance of a distractor. This effect persists even when the target location is perfectly predictable. The RDE has been studied extensively in the oculomotor domain but it is unknown whether it generalises to other spatially oriented responses. In three experiments, we tested whether the RDE generalises to manual aiming. Experiment 1 required participants to move their hand or eyes to predictable targets presented alone or accompanied by a distractor in the opposite hemifield. The RDE was observed for the eyes but not for the hand. Experiment 2 replicated this dissociation in a more naturalistic task in which eye movements were not constrained during manual aiming. Experiment 3 confirmed the lack of manual RDE across a wider range of distractor delays (0, 50, 100, and 150 ms). Our data imply that the RDE is specific to the oculomotor system, at least for non-foveal distractors. We suggest that the oculomotor RDE reflects competitive interactions between target and distractor representations in the superior colliculus, which are not necessarily shared by manual aiming. |
James M. McQueen; Falk Huettig Changing only the probability that spoken words will be distorted changes how they are recognized Journal Article In: The Journal of the Acoustical Society of America, vol. 131, no. 1, pp. 509–517, 2012. @article{McQueen2012, An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated on onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes. © 2012 Acoustical Society of America. |
Eugene McSorley; Rachel McCloy; Clare Lyne The spatial impact of visual distractors on saccade latency Journal Article In: Vision Research, vol. 60, pp. 61–72, 2012. @article{McSorley2012, Remote transient changes in the environment, such as the onset of visual distractors, impact on the execution of target directed saccadic eye movements. Studies that have examined the latency of the saccade response have shown conflicting results. When there was an element of target selection, saccade latency increased as the distance between distractor and target increased. In contrast, when target selection is minimized by restricting the target to appear on one axis position, latency has been found to be slowest when the distractor is shown at fixation and reduces as it moves away from this position, rather than from the target. Here we report four experiments examining saccade latency as target and distractor positions are varied. We find support for both a dependence of saccade latency on distractor distance from target and from fixation: saccade latency was longer when distractor is shown close to fixation and even longer still when shown in an opposite location (180°) to the target. We suggest that this is due to inhibitory interactions between the distractor, fixation and the target interfering with fixation disengagement and target selection. |
Jennifer Malsert; Nathalie Guyader; Alan Chauvin; Christian Marendaz In: Cognitive Neuroscience, vol. 3, no. 2, pp. 105–111, 2012. @article{Malsert2012, Instructing participants to "identify a target" dramatically reduces saccadic reaction times in prosaccade tasks (PS). However, it has been recently shown that this effect disappears in antisaccade tasks (AS). The instruction effect observed in PS may result from top-down processes, mediated by pathways connecting the prefrontal cortex (PFC) to the superior colliculus. In AS, the PFC's prior involvement is in competition with the instruction process, annulling its effect. This study aims to discover whether the instruction effect persists in mixed paradigms. According to Dyckman's fMRI study (2007), the difficulty of mixed tasks leads to PFC involvement. The antisaccade-related PFC activation observed on comparison of blocked AS and PS therefore disappears when the two are compared in mixed paradigms. However, we continued to observe the instruction effect for both PS and AS. We therefore posit different types of PFC activation: phasic during blocked AS, and tonic during mixed saccadic experiments. |
Jennifer Malsert; Nathalie Guyader; Alan Chauvin; Mircea Polosan; Emmanuel Poulet; David Szekely; Thierry Bougerol; Christian Marendaz Antisaccades as a follow-up tool in major depressive disorder therapies: A pilot study Journal Article In: Psychiatry Research, vol. 200, no. 2-3, pp. 1051–1053, 2012. @article{Malsert2012a, Eight patients with major depression, included in a double-blind study, performed an antisaccade task. Results suggested a link between antisaccade performance and clinical scale scores in patients who responded to therapy. Moreover, error rates may well predict treatment response from the day of inclusion, thus serving as a state marker for mood disorders. |
Marco Marelli; Claudio Luzzatti Frequency effects in the processing of Italian nominal compounds: Modulation of headedness and semantic transparency Journal Article In: Journal of Memory and Language, vol. 66, no. 4, pp. 644–664, 2012. @article{Marelli2012, There is a general debate as to whether constituent representations are accessed in compound processing. The present study addresses this issue, exploiting the properties of Italian compounds to test the role of headedness and semantic transparency in constituent access. In a first experiment, a lexical decision task was run on nominal compounds. Significant interactions between constituent-frequencies, headedness and semantic transparency emerged, indicating facilitatory frequency effects for transparent and head-final compounds, thus highlighting the importance of the semantic and structural properties of the compounds in lexical access. In a second experiment, converging evidence was sought in an eye-tracking study. The compounds were embedded into sentence contexts, and fixation durations were measured. The results did in fact confirm the effect observed in the first experiment. The results are consistent with a multi-route model of compound processing, but also indicate the importance of a semantic route dedicated to the conceptual combination of constituent meanings. |
Pamela J. Marsh; Gemma Luckett; Tamara A. Russell; Max Coltheart; Melissa J. Green Effects of facial emotion recognition remediation on visual scanning of novel face stimuli Journal Article In: Schizophrenia Research, vol. 141, no. 2-3, pp. 234–240, 2012. @article{Marsh2012, Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion that were not used in METT training, changes that were generally consistent with the information about important features conveyed by the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral facial expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. |
Kathleen M. Masserang; Alexander Pollatsek Transposed letter effects in prefixed words: Implications for morphological decomposition Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 4, pp. 476–495, 2012. @article{Masserang2012, A crucial issue in word encoding is whether morphemes are involved in early stages. One paradigm that tests for this employs the transposed letter (TL) effect: the difference in the times to process a word (misfile) when it is preceded by a transposed letter (TL) prime (mifsile) versus a substitute letter (SL) prime (mintile), and examines whether the TL effect is smaller when the two adjacent letters cross a morpheme boundary. The evidence from prior studies is not consistent. Experiments 1 and 2 employed a parafoveal preview paradigm in which the transposed letters either crossed the prefix-stem boundary or did not, and found a clear TL effect regardless of whether the two letters crossed the morpheme boundary. Experiment 3 replicated this finding employing a masked priming lexical-decision paradigm. It thus appears that morphemes are not involved in early processes in English that are sensitive to letter order. There is some evidence for morphemic modulation of the TL effect in other languages; thus, the properties of the language may modulate when morphemes influence early letter position encoding. |
Sebastiaan Mathôt; Jan Theeuwes It's all about the transient: Intra-saccadic onset stimuli do not capture attention Journal Article In: Journal of Eye Movement Research, vol. 5, no. 2, pp. 1–12, 2012. @article{Mathot2012, An abrupt onset stimulus was presented while the participants' eyes were in motion. Because of saccadic suppression, participants did not perceive the visual transient that normally accompanies the sudden appearance of a stimulus. In contrast to the typical finding that the presentation of an abrupt onset captures attention and interferes with the participants' responses, we found that an intra-saccadic abrupt onset does not capture attention: It has no effect beyond that of increasing the set-size of the search array by one item. This finding favours the local transient account of attentional capture over the novel object hypothesis. |
Ahmed M. Megreya; Markus Bindemann; Catriona Havard; A. Mike Burton Identity-lineup location influences target selection: Evidence from eye movements Journal Article In: Journal of Police and Criminal Psychology, vol. 27, no. 2, pp. 167–178, 2012. @article{Megreya2012, Eyewitnesses often have to recognize the perpetrators of an observed crime from identity lineups. In the construction of these lineups, a decision must be made concerning where a suspect should be placed, but whether location in a lineup affects the identification of a perpetrator has received little attention. This study explored this problem with a face-matching task, in which observers decided if pairs of faces depict the same person or two different people (Experiment 1), and with a lineup task in which the presence of a target had to be detected in an identity parade of five faces (Experiment 2). In addition, this study also explored if high accuracy is related to a perceptual pop-out effect, whereby the target is detected rapidly among the lineup. In both experiments, observers' eye movements revealed that location determines the order in which people were viewed, whereby faces on the left side were consistently viewed first. This location effect was reflected also in observers' responses, so that a foil face on the left side of a lineup display was more likely to be misidentified as the target. However, identification accuracy was not related to a pop-out effect. The implications of these findings are discussed. |
Ben Meijering; Hedderik Rijn; Niels A. Taatgen; Rineke Verbrugge What eye movements can tell about theory of mind in a strategic game Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e45961, 2012. @article{Meijering2012, This study investigates strategies in reasoning about mental states of others, a process that requires theory of mind. It is a first step in studying the cognitive basis of such reasoning, as strategies affect tradeoffs between cognitive resources. Participants were presented with a two-player game that required reasoning about the mental states of the opponent. Game theory literature discerns two candidate strategies that participants could use in this game: either forward reasoning or backward reasoning. Forward reasoning proceeds from the first decision point to the last, whereas backward reasoning proceeds in the opposite direction. Backward reasoning is the only optimal strategy, because the optimal outcome is known at each decision point. Nevertheless, we argue that participants prefer forward reasoning because it is similar to causal reasoning. Causal reasoning, in turn, is prevalent in human reasoning. Eye movements were measured to discern between forward and backward progressions of fixations. The observed fixation sequences corresponded best with forward reasoning. Early in games, the probability of observing a forward progression of fixations is higher than the probability of observing a backward progression. Later in games, the probabilities of forward and backward progressions are similar, which seems to imply that participants were either applying backward reasoning or jumping back to previous decision points while applying forward reasoning. Thus, the game-theoretical favorite strategy, backward reasoning, does seem to exist in human reasoning. However, participants preferred the more familiar, practiced, and prevalent strategy: forward reasoning. |
David Melcher; Alessio Fracasso Remapping of the line motion illusion across eye movements Journal Article In: Experimental Brain Research, vol. 218, no. 4, pp. 503–514, 2012. @article{Melcher2012, Although motion processing in the brain has been classically studied in terms of retinotopically defined receptive fields, recent evidence suggests that motion perception can occur in a spatiotopic reference frame. We investigated the underlying mechanisms of spatiotopic motion perception by examining the role of saccade metrics as well as the capacity of trans-saccadic motion. To this end, we used the line motion illusion (LMI), in which a straight line briefly shown after a high contrast stimulus (inducer) is perceived as expanding away from the inducer position. This illusion provides an interesting test of spatiotopic motion because the neural correlates of this phenomenon have been found early in the visual cortex and the effect does not require focused attention. We measured the strength of LMI both with stable fixation and when participants were asked to perform a 10° saccade during the blank ISI between the inducer and the line. A strong motion illusion was found across saccades in spatiotopic coordinates. When the inducer was presented near in time to the saccade cue, saccadic latencies were longer, saccade amplitudes were shorter, and the strength of reported LMI was consistently reduced. We also measured the capacity of the trans-saccadic LMI by varying the number of inducers. In contrast to a visual-spatial memory task, we found that the LMI was largely eliminated by saccades when two or more inducers were displayed. Together, these results suggest that motion perceived in non-retinotopic coordinates depends on an active, saccade-dependent remapping process with a strictly limited capacity. |
Tamaryn Menneer; Michael J. Stroud; Kyle R. Cave; Xingshan Li; Hayward J. Godwin; Simon P. Liversedge; Nick Donnelly Search for two categories of target produces fewer fixations to target-color items Journal Article In: Journal of Experimental Psychology: Applied, vol. 18, no. 4, pp. 404–418, 2012. @article{Menneer2012, Searching simultaneously for metal threats (guns and knives) and improvised explosive devices (IEDs) in X-ray images is less effective than 2 independent single-target searches, 1 for metal threats and 1 for IEDs. The goals of this study were to (a) replicate this dual-target cost for categorical targets and to determine whether the cost remains when X-ray images overlap, (b) determine the role of attentional guidance in this dual-target cost by measuring eye movements, and (c) determine the effect of practice on guidance. Untrained participants conducted 5,376 trials of visual search of X-ray images, each specializing in single-target search for metal threats, single-target search for IEDs, or dual-target search for both. In dual-target search, only 1 target (metal threat or IED) at most appeared on any 1 trial. Eye movements, response time, and accuracy were compared across single-target and dual-target searches. Results showed a dual-target cost in response time, accuracy, and guidance, with fewer fixations to target-color objects and disproportionately more to non-target-color objects, compared with single-target search. Such reduction in guidance explains why targets are missed in dual-target search, which was particularly noticeable when objects overlapped. After extensive practice, accuracy, response time, and guidance remained better in single-target search than in dual-target search. The results indicate that, when 2 different target representations are required for search, both representations cannot be maintained as accurately as in separate single-target searches. They suggest that baggage X-ray security screeners should specialize in one type of threat, or be trained to conduct 2 independent searches, 1 for each threat item. |
Adam Palanica; Roxane J. Itier Attention capture by direct gaze is robust to context and task demands Journal Article In: Journal of Nonverbal Behavior, vol. 36, no. 2, pp. 123–134, 2012. @article{Palanica2012, Eye-tracking was used to investigate whether gaze direction would influence the visual scanning of faces, when presented in the context of a full character, in different social settings, and with different task demands. Participants viewed individual computer agents against either a blank background or a bar scene setting, during both a free-viewing task and an attractiveness rating task for each character. Faces with a direct gaze were viewed longer than faces with an averted gaze regardless of body context, social settings, and task demands. Additionally, participants evaluated characters with a direct gaze as more attractive than characters with an averted gaze. These results, obtained with pictures of computer agents rather than real people, suggest that direct gaze is a powerful attention grabbing stimulus that is robust to background context or task demands. |
Aspasia E. Paltoglou; Peter Neri Attentional control of sensory tuning in human visual perception Journal Article In: Journal of Neurophysiology, vol. 107, no. 5, pp. 1260–1274, 2012. @article{Paltoglou2012, Attention is known to affect the response properties of sensory neurons in visual cortex. These effects have been traditionally classified into two categories: 1) changes in the gain (overall amplitude) of the response; and 2) changes in the tuning (selectivity) of the response. We performed an extensive series of behavioral measurements using psychophysical reverse correlation to understand whether/how these neuronal changes are reflected at the level of our perceptual experience. This question has been addressed before, but by different laboratories using different attentional manipulations and stimuli/tasks that are not directly comparable, making it difficult to extract a comprehensive and coherent picture from existing literature. Our results demonstrate that the effect of attention on response gain (not necessarily associated with tuning change) is relatively aspecific: it occurred across all the conditions we tested, including attention directed to a feature orthogonal to the primary feature for the assigned task. Sensory tuning, however, was affected primarily by feature-based attention and only to a limited extent by spatially directed attention, in line with existing evidence from the electrophysiological and behavioral literature. |
Manuel Perea Revisiting Huey: On the importance of the upper part of words during reading Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1148–1153, 2012. @article{Perea2012, Recent research has shown that the upper part of words enjoys an advantage over the lower part of words in the recognition of isolated words. The goal of the present article was to examine how removing the upper/lower part of the words influences eye movement control during silent normal reading. The participants' eye movements were monitored when reading intact sentences and when reading sentences in which the upper or the lower portion of the text was deleted. Results showed a greater reading cost (longer fixations) when the upper part of the text was removed than when the lower part of the text was removed (i.e., it influenced when to move the eyes). However, there was little influence on the initial landing position on a target word (i.e., on the decision as to where to move the eyes). In addition, lexical-processing difficulty (as inferred from the magnitude of the word frequency effect on a target word) was affected by text degradation. The implications of these findings for models of visual-word recognition and reading are discussed. |
Manuel Perea; Pablo Gomez Subtle increases in interletter spacing facilitate the encoding of words during normal reading Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e47568, 2012. @article{Perea2012a, BACKGROUND: Several recent studies have revealed that words presented with a small increase in interletter spacing are identified faster than words presented with the default interletter spacing (i.e., w a t e r faster than water). Modeling work has shown that this advantage occurs at an early encoding level. Given the implications of this finding for the ease of reading in the new digital era, here we examined whether the beneficial effect of small increases in interletter spacing can be generalized to a normal reading situation. METHODOLOGY: We conducted an experiment in which the participant's eyes were monitored when reading sentences varying in interletter spacing: i) sentences presented with the default (0.0) interletter spacing; ii) sentences presented with a +1.0 interletter spacing; and iii) sentences presented with a +1.5 interletter spacing. PRINCIPAL FINDINGS: Results showed shorter fixation durations as an inverse function of interletter spacing (i.e., fixation durations were shortest with +1.5 spacing and longest with the default spacing). CONCLUSIONS: Subtle increases in interletter spacing facilitate the encoding of the fixated word during normal reading. Thus, interletter spacing is a parameter that may affect the ease of reading, and it could be adjustable in future implementations of e-book readers. |
Juan A. Pérez; Stefano Passini Avoiding minorities: Social invisibility Journal Article In: European Journal of Social Psychology, vol. 42, no. 7, pp. 864–874, 2012. @article{Perez2012, Three experiments examined how self-consciousness has an impact on the visual exploration of a social field. The main hypothesis was that merely a photograph of people can trigger a dynamic process of social visual interaction such that minority images are avoided when people are in a state of self-reflective consciousness. In all three experiments, pairs of pictures—one with characters of social minorities and one with characters of social majorities—were shown to the participants. By means of eye-tracking technology, the results of Experiment 1 (n = 20) confirmed the hypothesis that in the reflective consciousness condition, people look more at the majority than minority characters. The results of Experiment 2 (n = 89) confirmed the hypothesis that reflective consciousness also induces avoiding reciprocal visual interaction with minorities. Finally, by manipulating the visual interaction (direct vs. non-direct) with the photos of minority and majority characters, the results of Experiment 3 (n = 56) confirmed the hypothesis that direct visual interaction with minority characters is perceived as being longer and more aversive. The overall conclusion is that self-reflective consciousness leads people to avoid visual interaction with social minorities, consigning them to social invisibility. |
Carolyn J. Perry; Mazyar Fallah Color improves speed of processing but not perception in a motion illusion Journal Article In: Frontiers in Psychology, vol. 3, pp. 92, 2012. @article{Perry2012, When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing and thereby reduce direction repulsion. We found that the addition of color differences neither improved direction discrimination nor reduced direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that these performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment in which we varied the presentation time to determine the duration needed to successfully perform the task with and without the color difference. As expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and on the conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination, and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored. |
Yoni Pertzov; Mia Yuan Dong; Muy Cheng Peich; Masud Husain Forgetting what was where: The fragility of object-location binding Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e48214, 2012. @article{Pertzov2012, Although we frequently take advantage of memory for objects' locations in everyday life, how an object's identity is bound correctly to its location in memory remains unclear. Here we examine how information about object identity, location and, crucially, object-location associations are differentially susceptible to forgetting over variable retention intervals and memory load. In our task, participants relocated objects to their remembered locations using a touchscreen. When participants mislocalized objects, their reports were clustered around the locations of other objects in the array, rather than occurring randomly. These 'swap' errors could not be attributed to a simple failure to remember either the identity or the location of the objects, but rather appeared to arise from a failure to bind object identity and location in memory. Moreover, such binding failures contributed significantly to the decline in localization performance over retention time. We conclude that when objects are forgotten they do not disappear completely from memory; rather, it is the links between identity and location that are prone to be broken over time. |
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen Measuring and modeling attentional dwell time Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1029–1046, 2012. @article{Petersen2012, Attentional dwell time (AD) refers to our inability to perceive spatially separate events when they occur in rapid succession. In the standard AD paradigm, subjects must identify two target stimuli presented briefly at different peripheral locations with a varied stimulus onset asynchrony (SOA). The AD effect is seen as a long-lasting impediment in reporting the second target, culminating at SOAs of 200–500 ms. Here, we present the first quantitative computational model of the effect: a theory of temporal visual attention. The model is based on the neural theory of visual attention (Bundesen, Habekost, & Kyllingsbæk, Psychological Review, 112, 291–328, 2005) and introduces the novel assumption that a stimulus retained in visual short-term memory takes up visual processing resources used to encode stimuli into memory. Resources are thus locked and cannot process subsequent stimuli until the stimulus in memory has been recoded, which explains the long-lasting AD effect. The model is used to explain results from two experiments providing detailed individual data from both a standard AD paradigm and an extension with varied exposure duration of the target stimuli. Finally, we discuss new predictions by the model. |
Mary A. Peterson; Laura Cacciamani; Morgan D. Barense; Paige E. Scalf The perirhinal cortex modulates V2 activity in response to the agreement between part familiarity and configuration familiarity Journal Article In: Hippocampus, vol. 22, no. 10, pp. 1965–1977, 2012. @article{Peterson2012, Research has demonstrated that the perirhinal cortex (PRC) represents complex object-level feature configurations, and participates in familiarity versus novelty discrimination. Barense et al. [(in press) Cerebral Cortex, 22:11, doi:10.1093/cercor/bhr347] postulated that, in addition, the PRC modulates part familiarity responses in lower-level visual areas. We used fMRI to measure activation in the PRC and V2 in response to silhouettes presented peripherally while participants maintained central fixation and performed an object recognition task. There were three types of silhouettes: Familiar Configurations portrayed real-world objects; Part-Rearranged Novel Configurations created by spatially rearranging the parts of the familiar configurations; and Control Novel Configurations in which both the configuration and the ensemble of parts comprising it were novel. For right visual field (RVF) presentation, BOLD responses revealed a significant linear trend in bilateral BA 35 of the PRC (highest activation for Familiar Configurations, lowest for Part-Rearranged Novel Configurations, with Control Novel Configurations in between). For left visual field (LVF) presentation, a significant linear trend was found in a different area (bilateral BA 38, temporal pole) in the opposite direction (Part-Rearranged Novel Configurations highest, Familiar Configurations lowest). These data confirm that the PRC is sensitive to the agreement in familiarity between the configuration level and the part level. As predicted, V2 activation mimicked that of the PRC: for RVF presentation, activity in V2 was significantly higher in the left hemisphere for Familiar Configurations than for Part-Rearranged Novel Configurations, and for LVF presentation, the opposite effect was found in right hemisphere V2. We attribute these patterns in V2 to feedback from the PRC because receptive fields in V2 encompass parts but not configurations. These results reveal two new aspects of PRC function: (1) it is sensitive to the congruency between the familiarity of object configurations and the parts comprising those configurations and (2) it likely modulates familiarity responses in visual area V2. |
Matthew F. Peterson; Miguel P. Eckstein Looking just below the eyes is optimal across face recognition tasks Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 48, pp. E3314–E3323, 2012. @article{Peterson2012a, When viewing a human face, people often look toward the eyes. Maintaining good eye contact carries significant social value and allows for the extraction of information about gaze direction. When identifying faces, humans also look toward the eyes, but it is unclear whether this behavior is solely a byproduct of the socially important eye movement behavior or whether it has functional importance in basic perceptual tasks. Here, we propose that gaze behavior while determining a person's identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in these evolutionarily important perceptual tasks. We show that humans move their eyes to locations that maximize perceptual performance determining the identity, gender, and emotional state of a face. These optimal fixation points, which differ moderately across tasks, are predicted correctly by a Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea toward the visual periphery (foveated ideal observer). Neither a model that disregards the foveated nature of the visual system and makes fixations on the local region with maximal information, nor a model that makes center-of-gravity fixations correctly predict human eye movements. Extension of the foveated ideal observer framework to a large database of real-world faces shows that the optimality of these strategies generalizes across the population. These results suggest that the human visual system optimizes face recognition performance through guidance of eye movements not only toward but, more precisely, just below the eyes. |
Matthieu Philippe; Anne-Emmanuelle Priot; Phillippe Fuchs; Corinne Roumes Vergence tracking: A tool to assess oculomotor performance in stereoscopic displays Journal Article In: Journal of Eye Movement Research, vol. 5, no. 2, pp. 1–8, 2012. @article{Philippe2012, Oculomotor conflict induced between the accommodative and vergence components in stereoscopic displays represents an unnatural viewing condition. There is now some evidence that stereoscopic viewing may induce discomfort and changes in oculomotor parameters. The present study sought to measure oculomotor performance during stereoscopic viewing. Using a 3D stereo setup and an eye-tracker, vergence responses were measured during 20-min exposure to a virtual visual target oscillating in depth, which participants had to track. The results showed a significant decline in the amplitude of the in-depth oscillatory vergence response over time. We propose that eye-tracking provides a useful tool to objectively assess the time-varying alterations of the vergence system when using stereoscopic displays. |
Catherine I. Phillips; Christopher R. Sears; Penny M. Pexman An embodied semantic processing effect on eye gaze during sentence reading Journal Article In: Language and Cognition, vol. 4, no. 2, pp. 99–114, 2012. @article{Phillips2012a, The present research examines the effects of body-object interaction (BOI) on eye gaze behaviour in a reading task. BOI measures perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g. cat) and a set of low BOI words (e.g. sun) were selected, matched on imageability and concreteness (as well as other lexical and semantic variables). Facilitatory BOI effects were observed: gaze durations and total fixation durations were shorter for high BOI words, and participants made fewer regressions to high BOI words. The results provide evidence of a BOI effect on non-manual responses and in a situation that taps normal reading processes. We discuss how the results (a) suggest that stored motor information (as measured by BOI ratings) is relevant to lexical semantics, and (b) are consistent with an embodied view of cognition (Wilson 2002). |
Jessica M. Phillips; Stefan Everling Neural activity in the macaque putamen associated with saccades and behavioral outcome Journal Article In: PLoS ONE, vol. 7, no. 12, pp. e51596, 2012. @article{Phillips2012, It is now widely accepted that the basal ganglia nuclei form segregated, parallel loops with neocortical areas. The prevalent view is that the putamen is part of the motor loop, which receives inputs from sensorimotor areas, whereas the caudate, which receives inputs from frontal cortical eye fields and projects via the substantia nigra pars reticulata to the superior colliculus, belongs to the oculomotor loop. Tracer studies in monkeys and functional neuroimaging studies in human subjects, however, also suggest a potential role for the putamen in oculomotor control. To investigate the role of the putamen in saccadic eye movements, we recorded single neuron activity in the caudal putamen of two rhesus monkeys while they alternated between short blocks of pro- and anti-saccades. In each trial, the instruction cue was provided after the onset of the peripheral stimulus, thus the monkeys could either generate an immediate response to the stimulus based on the internal representation of the rule from the previous trial, or alternatively, could await the visual rule-instruction cue to guide their saccadic response. We found that a subset of putamen neurons showed saccade-related activity, that the preparatory mode (internally- versus externally-cued) influenced the expression of task-selectivity in roughly one third of the task-modulated neurons, and further that a large proportion of neurons encoded the outcome of the saccade. These results suggest that the caudal putamen may be part of the neural network for goal-directed saccades, wherein the monitoring of saccadic eye movements, context and performance feedback may be processed together to ensure optimal behavioural performance and outcomes are achieved during ongoing behaviour. |
Muriel T. N. Panouillères; Sebastiaan F. W. Neggers; Tjerk P. Gutteling; Roméo Salemme; Stefan Stigchel; Josef N. Geest; Maarten A. Frens; Denis Pélisson Transcranial magnetic stimulation and motor plasticity in human lateral cerebellum: Dual effect on saccadic adaptation Journal Article In: Human Brain Mapping, vol. 33, no. 7, pp. 1512–1525, 2012. @article{Panouilleres2012, The cerebellum is a key area for movement control and sensory-motor plasticity. Its medial part is considered the exclusive cerebellar center controlling the accuracy and adaptive calibration of saccadic eye movements. However, the contribution of other zones situated in its lateral part is unknown. We addressed this question in healthy adult volunteers by using magnetic resonance imaging (MRI)-guided transcranial magnetic stimulation (TMS). The double-step target paradigm was used to adaptively lengthen or shorten saccades. TMS pulses over the right hemisphere of the cerebellum were delivered at 0, 30, or 60 ms after saccade detection in separate recording sessions. The effects on saccadic adaptation were assessed relative to a fourth session in which TMS was applied to the vertex as a control site. First, TMS applied upon saccade detection before the adaptation phase reduced saccade accuracy. Second, TMS applied during the adaptation phase had a dual effect on saccadic plasticity: adaptation after-effects revealed a potentiation of the adaptive lengthening and a depression of the adaptive shortening of saccades. For the first time, we demonstrate that TMS of the lateral cerebellum can influence plasticity mechanisms underlying motor performance. These findings also provide the first evidence that the human cerebellar hemispheres are involved in the control of saccade accuracy and in saccadic adaptation, with possibly different neuronal populations involved in adaptive lengthening and shortening. Overall, these results require a reappraisal of current models of the cerebellar contribution to oculomotor plasticity. |
Muriel T. N. Panouillères; Roméo Salemme; Christian Urquizar; Denis Pélisson Effect of saccadic adaptation on sequences of saccades Journal Article In: Journal of Eye Movement Research, vol. 5, no. 1, pp. 1–13, 2012. @article{Panouilleres2012a, The accuracy of saccadic eye movements is maintained by adaptation mechanisms. The adaptive lengthening and shortening of reactive and voluntary saccades rely on partially separate neural substrates. Although in daily life we mostly perform sequences of saccades, the effect of saccadic adaptation has mainly been evaluated on single saccades. Here, sequences of two saccades were recorded before and after adaptation of rightward saccades. In 4 separate sessions, reactive and voluntary saccades were adaptively shortened or lengthened. We found that the second saccade of the sequence always remained accurate and compensated for the adaptive changes in the size of the first, rightward saccade. This finding suggests that the adaptation loci lie upstream of the site where the efference copy involved in sequence planning originates. |
Benjamin A. Parris; Sarah Bate; Scott D. Brown; Timothy L. Hodgson Facilitating goal-oriented behaviour in the Stroop task: When executive control is influenced by automatic processing Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e46994, 2012. @article{Parris2012, A portion of Stroop interference is thought to arise from a failure to maintain goal-oriented behaviour (or goal neglect). The aim of the present study was to investigate whether goal-relevant primes could enhance goal maintenance and reduce the Stroop interference effect. Here it is shown that primes related to the goal of responding quickly in the Stroop task (e.g. fast, quick, hurry) substantially reduced Stroop interference by reducing reaction times to incongruent trials but increasing reaction times to congruent and neutral trials. No effects of the primes were observed on errors. The effects on incongruent, congruent and neutral trials are explained in terms of the influence of the primes on goal maintenance. The results show that goal priming can facilitate goal-oriented behaviour and indicate that automatic processing can modulate executive control. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Eye movements reveal effects of visual content on eye guidance and lexical access during reading Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e41766, 2012. @article{Paterson2012, Background: Normal reading requires eye guidance and activation of lexical representations so that words in text can be identified accurately. However, little is known about how the visual content of text supports eye guidance and lexical activation, and thereby enables normal reading to take place. Methods and Findings: To investigate this issue, we examined eye movement performance when reading sentences displayed as normal and when the spatial frequency content of text was filtered to contain just one of 5 types of visual content: very coarse, coarse, medium, fine, and very fine. The effect of each type of visual content specifically on lexical activation was assessed using a target word of either high or low lexical frequency embedded in each sentence. Results: No type of visual content produced normal eye movement performance, but eye movement performance was closest to normal for medium and fine visual content. However, effects of lexical frequency emerged early in the eye movement record for coarse, medium, fine, and very fine visual content, and were observed in total reading times for target words for all types of visual content. Conclusion: These findings suggest that while the orchestration of multiple scales of visual content is required for normal eye guidance during reading, a broad range of visual content can activate processes of word identification independently. Implications for understanding the role of visual content in reading are discussed. |