All EyeLink Publications
Listed below are all 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024). You can search the publication library using keywords such as "visual search", "smooth pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking research grouped by area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2011 |
Kirsten A. Dalrymple; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone Opening a window on attention: Documenting and simulating recovery from simultanagnosia Journal Article In: Cortex, vol. 47, no. 7, pp. 787–799, 2011. @article{Dalrymple2011, Simultanagnosia is a disorder of visual attention: the inability to see more than one object at one time. Some hypothesize that this is due to a constriction of the visual "window" of attention. Little is known about how simultanagnosics explore complex stimuli and how their behaviour changes with recovery. We monitored the eye movements of simultanagnosic patient SL to see how she scans social scenes shortly after onset of simultanagnosia (Time 1) and after some recovery (Time 2). At Time 1 SL had an abnormally low proportion of fixations to the eyes of the people in the scenes. She made a significantly larger proportion of fixations to the eyes at Time 2. We hypothesized that this change was related to an expansion of her restricted window of attention. Previously we simulated SL's behaviour in healthy subjects by having them view stimuli through a restricted viewing window. We used this simulation paradigm here to test our expanding window hypothesis. Subjects viewing social scenes through a larger window allocated more fixations to the eyes of people in the scenes than subjects viewing scenes through a smaller window, supporting our hypothesis. Recovery in simultanagnosia may be related to the expansion of the restricted attentional window that characterizes the disorder. |
Sangita Dandekar; Claudio M. Privitera; Thom Carney; Stanley A. Klein Neural saccadic response estimation during natural viewing Journal Article In: Journal of Neurophysiology, vol. 107, no. 4, pp. 1776–1790, 2011. @article{Dandekar2011, Studying neural activity during natural viewing conditions is not often attempted. Isolating the neural response of a single saccade is necessary to study neural activity during natural viewing; however, the close temporal spacing of saccades that occurs during natural viewing makes it difficult to determine the response to a single saccade. Herein, a general linear model (GLM) approach is applied to estimate the EEG neural saccadic response for different segments of the saccadic main sequence separately. It is determined that, in visual search conditions, neural responses estimated by conventional event-related averaging are significantly and systematically distorted relative to GLM estimates due to the close temporal spacing of saccades during visual search. Before the GLM is applied, analyses are applied that demonstrate that saccades during visual search with intersaccadic spacings as low as 100-150 ms do not exhibit significant refractory effects. Therefore, saccades displaying different intersaccadic spacings during visual search can be modeled using the same regressor in a GLM. With the use of the GLM approach, neural responses were separately estimated for five different ranges of saccade amplitudes during visual search. Occipital responses time locked to the onsets of saccades during visual search were found to account for, on average, 79 percent of the variance of EEG activity in a window 90-200 ms after the onsets of saccades for all five saccade amplitude ranges that spanned a range of 0.2-6.0 degrees. A GLM approach was also used to examine the lateralized ocular artifacts associated with saccades.
Possible extensions of the methods presented here to account for the superposition of microsaccades in event-related EEG studies conducted in nominal fixation conditions are discussed. |
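The GLM approach described in the Dandekar et al. abstract — recovering a single-event neural response despite closely spaced, overlapping saccadic responses — can be illustrated with a minimal least-squares deconvolution sketch. This is a generic toy illustration, not the authors' code: the kernel shape, event onsets, and noise level are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-saccade response kernel (what we want to recover)
kernel = np.exp(-np.arange(30) / 8.0) * np.sin(np.arange(30) / 3.0)

# Closely spaced event onsets -> individual responses overlap in the recording,
# so simple event-locked averaging would distort the estimate
onsets = [10, 22, 31, 55, 63, 90]
n = 160
signal = np.zeros(n)
for t in onsets:
    signal[t:t + len(kernel)] += kernel[: n - t]
signal += 0.01 * rng.standard_normal(n)

# Design matrix: one column per kernel lag, with a 1 wherever an event
# occurred that many samples earlier (all events share one regressor set)
X = np.zeros((n, len(kernel)))
for t in onsets:
    for lag in range(len(kernel)):
        if t + lag < n:
            X[t + lag, lag] += 1.0

# Ordinary least squares recovers the kernel despite the overlap
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(np.max(np.abs(beta - kernel)))  # small residual estimation error
```

The same idea scales to real event-related EEG by stacking one such regressor set per event type (e.g., per saccade-amplitude bin).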
J. Rhys Davies; Tom C. A. Freeman Simultaneous adaptation to non-collinear retinal motion and smooth pursuit eye movement Journal Article In: Vision Research, vol. 51, no. 14, pp. 1637–1647, 2011. @article{Davies2011, Simultaneously adapting to retinal motion and non-collinear pursuit eye movement produces a motion aftereffect (MAE) that moves in a different direction to either of the individual adapting motions. Mack, Hill and Kahn (1989, Perception, 18, 649-655) suggested that the MAE was determined by the perceived motion experienced during adaptation. We tested the perceived-motion hypothesis by having observers report perceived direction during simultaneous adaptation. For both central and peripheral retinal motion adaptation, perceived direction did not predict the direction of subsequent MAE. To explain the findings we propose that the MAE is based on the vector sum of two components, one corresponding to a retinal MAE opposite to the adapting retinal motion and the other corresponding to an extra-retina MAE opposite to the eye movement. A vector model of this component hypothesis showed that the MAE directions reported in our experiments were the result of an extra-retinal component that was substantially larger in magnitude than the retinal component when the adapting retinal motion was positioned centrally. However, when retinal adaptation was peripheral, the model suggested the magnitude of the components should be about the same. These predictions were tested in a final experiment that used a magnitude estimation technique. Contrary to the predictions, the results showed no interaction between type of adaptation (retinal or pursuit) and the location of adapting retinal motion. Possible reasons for the failure of component hypothesis to fully explain the data are discussed. |
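The vector-sum component hypothesis in the Davies & Freeman abstract — a retinal aftereffect component opposite the adapting retinal motion plus an extra-retinal component opposite the pursuit eye movement — can be sketched numerically. The function name, weight parameters, and example directions below are assumptions for illustration, not the authors' model code.

```python
import numpy as np

def predicted_mae(retinal_dir_deg, pursuit_dir_deg, w_retinal, w_extraretinal):
    """Vector-sum prediction of motion-aftereffect (MAE) direction.

    Each adapting motion contributes an aftereffect component pointing
    opposite to it; the predicted MAE direction is the weighted vector sum.
    Directions are in degrees; the weights are hypothetical gain parameters.
    """
    def unit(deg):
        r = np.deg2rad(deg)
        return np.array([np.cos(r), np.sin(r)])

    mae = -w_retinal * unit(retinal_dir_deg) - w_extraretinal * unit(pursuit_dir_deg)
    return np.rad2deg(np.arctan2(mae[1], mae[0])) % 360

# Retinal motion rightward (0 deg), pursuit upward (90 deg), equal weights:
# components point left (180) and down (270), so the sum points down-left.
print(predicted_mae(0, 90, 1.0, 1.0))  # ~225 degrees
```

Increasing the extra-retinal weight pulls the predicted direction toward the pure eye-movement aftereffect, which is the kind of asymmetry the abstract reports for centrally presented adapting motion.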
Minglei Chen; Hwawei Ko Exploring the eye-movement patterns as Chinese children read texts: A developmental perspective Journal Article In: Journal of Research in Reading, vol. 34, no. 2, pp. 232–246, 2011. @article{Chen2011, This study was to investigate Chinese children's eye patterns while reading different text genres from a developmental perspective. Eye movements were recorded while children in the second through sixth grades read two expository texts and two narrative texts. Across passages, overall word frequency was not significantly different between the two genres. Results showed that all children had longer fixation durations for low-frequency words. They also had longer fixation durations on content words. These results indicate that children adopted a word-based processing strategy like skilled readers do. However, only older children's rereading times were affected by genre. Overall, eye-movement patterns of older children reported in this study are in accordance with those of skilled Chinese readers, but younger children are more likely to be responsive to word characteristics than text level when reading a Chinese text. |
Ying Chen; Patrick Byrne; J. Douglas Crawford Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach Journal Article In: Neuropsychologia, vol. 49, no. 1, pp. 49–60, 2011. @article{Chen2011a, Allocentric cues can be used to encode locations in visuospatial memory, but it is not known how and when these representations are converted into egocentric commands for behaviour. Here, we tested the influence of different memory intervals on reach performance toward targets defined in either egocentric or allocentric coordinates, and then compared this to performance in a task where subjects were implicitly free to choose when to convert from allocentric to egocentric representations. Reach and eye positions were measured using Optotrak and Eyelink Systems, respectively, in fourteen subjects. Our results confirm that egocentric representations degrade over a delay of several seconds, whereas allocentric representations remained relatively stable over the same time scale. Moreover, when subjects were free to choose, they converted allocentric representations into egocentric representations as soon as possible, despite the apparent cost in reach precision in our experimental paradigm. This suggests that humans convert allocentric representations into egocentric commands at the first opportunity, perhaps to optimize motor noise and movement timing in real-world conditions. |
Hui-Yan Chiau; Philip Tseng; Jia-Han Su; Ovid J. L. Tzeng; Daisy L. Hung; Neil G. Muggleton; Chi-Hung Juan Trial type probability modulates the cost of antisaccades Journal Article In: Journal of Neurophysiology, vol. 106, no. 2, pp. 515–526, 2011. @article{Chiau2011, The antisaccade task, where eye movements are made away from a target, has been used to investigate the flexibility of cognitive control of behavior. Antisaccades usually have longer saccade latencies than prosaccades, the so-called antisaccade cost. Recent studies have shown that this antisaccade cost can be modulated by event probability. This may mean that the antisaccade cost can be reduced, or even reversed, if the probability of surrounding events favors the execution of antisaccades. The probabilities of prosaccades and antisaccades were systematically manipulated by changing the proportion of a certain type of trial in an interleaved pro/antisaccades task. We aimed to disentangle the intertwined relationship between trial type probabilities and the antisaccade cost with the ultimate goal of elucidating how probabilities of trial types modulate human flexible behaviors, as well as the characteristics of such modulation effects. To this end, we examined whether implicit trial type probability can influence saccade latencies and also manipulated the difficulty of cue discriminability to see how effects of trial type probability would change when the demand on visual perceptual analysis was high or low. A mixed-effects model was applied to the analysis to dissect the factors contributing to the modulation effects of trial type probabilities. Our results suggest that the trial type probability is one robust determinant of antisaccade cost. These findings highlight the importance of implicit probability in the flexibility of cognitive control of behavior. |
Jan Churan; Daniel Guitton; Christopher C. Pack Context dependence of receptive field remapping in superior colliculus Journal Article In: Journal of Neurophysiology, vol. 106, no. 4, pp. 1862–1874, 2011. @article{Churan2011, Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available. |
Laetitia Cirilli; Philippe Timary; Philippe Lefèvre; Marcus Missal Individual differences in impulsivity predict anticipatory eye movements Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e26699, 2011. @article{Cirilli2011, Impulsivity is the tendency to act without forethought. It is a personality trait commonly used in the diagnosis of many psychiatric diseases. In clinical practice, impulsivity is estimated using written questionnaires. However, answers to questions might be subject to personal biases and misinterpretations. In order to alleviate this problem, eye movements could be used to study differences in decision processes related to impulsivity. Therefore, we investigated correlations between impulsivity scores obtained with a questionnaire in healthy subjects and characteristics of their anticipatory eye movements in a simple smooth pursuit task. Healthy subjects were asked to answer the UPPS questionnaire (Urgency Premeditation Perseverance and Sensation seeking Impulsive Behavior scale), which distinguishes four independent dimensions of impulsivity: Urgency, lack of Premeditation, lack of Perseverance, and Sensation seeking. The same subjects took part in an oculomotor task that consisted of pursuing a target that moved in a predictable direction. This task reliably evoked anticipatory saccades and smooth eye movements. We found that eye movement characteristics such as latency and velocity were significantly correlated with UPPS scores. The specific correlations between distinct UPPS factors and oculomotor anticipation parameters support the validity of the UPPS construct and corroborate neurobiological explanations for impulsivity. We suggest that the oculomotor approach of impulsivity put forth in the present study could help bridge the gap between psychiatry and physiology. |
R. Contreras; Jamshid Ghajar; S. Bahar; M. Suh Effect of cognitive load on eye-target synchronization during smooth pursuit eye movement Journal Article In: Brain Research, vol. 1398, pp. 55–63, 2011. @article{Contreras2011, In mild traumatic brain injury (mTBI), the fiber tracts that connect the frontal cortex with the cerebellum may suffer shear damage, leading to attention deficits and performance variability. This damage also disrupts the enhancement of eye-target synchronization that can be affected by cognitive load when subjects are tested using a concurrent eye-tracking test and word-recall test. We investigated the effect of cognitive load on eye-target synchronization in normal and mTBI patients using the nonlinear dynamical technique of stochastic phase synchronization. Results demonstrate that eye-target synchronization was negatively affected by cognitive load in mTBI subjects. In contrast, eye-target synchronization improved under intermediate cognitive load in young (≤ 40 years old) normal subjects. |
Jennifer E. Corbett; Marisa Carrasco Visual performance fields: Frames of reference Journal Article In: PLoS ONE, vol. 6, no. 9, pp. e24470, 2011. @article{Corbett2011, Performance in most visual discrimination tasks is better along the horizontal than the vertical meridian (Horizontal-Vertical Anisotropy, HVA), and along the lower than the upper vertical meridian (Vertical Meridian Asymmetry, VMA), with intermediate performance at intercardinal locations. As these inhomogeneities are prevalent throughout visual tasks, it is important to understand the perceptual consequences of dissociating spatial reference frames. In all studies of performance fields so far, allocentric environmental references and egocentric observer reference frames were aligned. Here we quantified the effects of manipulating head-centric and retinotopic coordinates on the shape of visual performance fields. When observers viewed briefly presented radial arrays of Gabors and discriminated the tilt of a target relative to homogeneously oriented distractors, performance fields shifted with head tilt (Experiment 1), and fixation (Experiment 2). These results show that performance fields shift in-line with egocentric referents, corresponding to the retinal location of the stimulus. |
Julien Cotti; Gustavo Rohenkohl; Mark Stokes; Anna C. Nobre; Jennifer T. Coull Functionally dissociating temporal and motor components of response preparation in left intraparietal sulcus Journal Article In: NeuroImage, vol. 54, no. 2, pp. 1221–1230, 2011. @article{Cotti2011, To optimise speed and accuracy of motor behaviour, we can prepare not only the type of movement to be made but also the time at which it will be executed. Previous cued reaction-time paradigms have shown that anticipating the moment in time at which this response will be made ("temporal orienting") or selectively preparing the motor effector with which an imminent response will be made (motor intention or "motor orienting") recruits similar regions of left intraparietal sulcus (IPS), raising the possibility that these two preparatory processes are inextricably co-activated. We used a factorial design to independently cue motor and temporal components of response preparation within the same experimental paradigm. By differentially cueing either ocular or manual response systems, rather than spatially lateralised responses within just one of these systems, potential spatial confounds were removed. We demonstrated that temporal and motor orienting were behaviourally dissociable, each capable of improving performance alone. Crucially, fMRI data revealed that temporal orienting activated the left IPS even if the motor effector that would be used to execute the response was unpredictable. Moreover, temporal orienting activated left IPS whether the target required a saccadic or manual response, and whether this response was left- or right-sided, thus confirming the ubiquity of left IPS activation for temporal orienting. Finally, a small region of left IPS was also activated by motor orienting for manual, though not saccadic, responses. 
Despite their functional independence therefore, temporal orienting and manual motor orienting nevertheless engage partially overlapping regions of left IPS, possibly reflecting their shared ontogenetic roots. |
Julien Cotti; Jean-Louis Vercher; Alain Guillaume Hand-eye coordination relies on extra-retinal signals: Evidence from reactive saccade adaptation Journal Article In: Behavioural Brain Research, vol. 218, no. 1, pp. 248–252, 2011. @article{Cotti2011a, Execution of a saccadic eye movement towards the goal of a hand pointing movement improves the accuracy of this hand movement. Still controversial is the role of extra-retinal signals, i.e. efference copy of the saccadic command and/or ocular proprioception, in the definition of the hand pointing target. We report here that hand pointing movements produced without visual feedback, with accompanying saccades and towards a target extinguished at saccade onset, were modified after gain change of reactive saccades through saccadic adaptation. As we have previously shown that the adaptation of reactive saccades does not influence the target representations that are common to the eye and the hand motor sub-systems (Cotti J, Guillaume A, Alahyane N, Pelisson D, Vercher JL. Adaptation of voluntary saccades, but not of reactive saccades. Transfers to hand pointing movements. J Neurophysiol 2007;98:602-12), the results of the present study demonstrate that extra-retinal signals participate in defining the target of hand pointing movements. |
Reinier Cozijn; Edwin Commandeur; Wietske Vonk; Leo G. M. Noordman The time course of the use of implicit causality information in the processing of pronouns: A visual world paradigm study Journal Article In: Journal of Memory and Language, vol. 64, no. 4, pp. 381–403, 2011. @article{Cozijn2011, Several theoretical accounts have been proposed with respect to the issue of how quickly the implicit causality verb bias affects the understanding of sentences such as "John beat Pete at the tennis match, because he had played very well." They can be considered as instances of two viewpoints: the focusing and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task to resolve the ambiguous pronoun. To exclude strategic processing, in the second experiment, the task was to answer simple comprehension questions and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account. |
Trevor J. Crawford; Elisabeth Parker; Ivonne Solis-Trapala; Jenny Mayes Is the relationship of prosaccade reaction times and antisaccade errors mediated by working memory? Journal Article In: Experimental Brain Research, vol. 208, no. 3, pp. 385–397, 2011. @article{Crawford2011, The mechanisms that control eye movements in the antisaccade task are not fully understood. One influential theory claims that the generation of antisaccades is dependent on the capacity of working memory. Previous research also suggests that antisaccades are influenced by the relative processing speeds of the exogenous and endogenous saccadic pathways. However, the relationship between these factors is unclear, in particular whether or not the effect of the relative speed of the pro and antisaccade pathways is mediated by working memory. The present study contrasted the performance of healthy individuals with high and low working memory in the antisaccade and prosaccade tasks. Path analyses revealed that antisaccade errors were strongly predicted by the mean reaction times of prosaccades and that this relationship was not mediated by differences in working memory. These data suggest that antisaccade errors are directly related to the speed of saccadic programming. These findings are discussed in terms of a race competition model of antisaccade control. |
Sarah C. Creel; Melanie A. Tumlin On-line acoustic and semantic interpretation of talker information Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 264–285, 2011. @article{Creel2011, Recent work demonstrates that listeners utilize talker-specific information in the speech signal to inform real-time language processing. However, there are multiple representational levels at which this may take place. Listeners might use acoustic cues in the speech signal to access the talker's identity and information about what they tend to talk about, which then immediately constrains processing. Alternatively, or simultaneously, listeners might compare the signal to acoustically-detailed representations of words, without awareness of the talker's identity. In a series of eye-tracked comprehension experiments, we explore the circumstances under which listeners utilize talker-specific information. Experiments 1 and 2 demonstrate talker-specific recognition benefits for newly-learned words both in isolation (Experiment 1) and with preceding context (Experiment 2), but suggest that listeners do not strongly semantically associate talkers with referents. Experiment 3 demonstrates that listeners can recognize talkers rapidly, almost as soon as acoustic information is available, and can associate talkers with multiple arbitrary referents. Experiment 4 demonstrates that if talker identity is highly diagnostic on each trial, listeners readily associate talkers with specific referents, but do not seem to make such associations when diagnostic value is low. Implications for speech processing, talker processing, and learning are discussed. |
Sebastian J. Crutch; Manja Lehmann; Nikos Gorgoraptis; Diego Kaski; Natalie Ryan; Masud Husain; Elizabeth K. Warrington Abnormal visual phenomena in posterior cortical atrophy Journal Article In: Neurocase, vol. 17, no. 2, pp. 160–177, 2011. @article{Crutch2011, Individuals with posterior cortical atrophy (PCA) report a host of unusual and poorly explained visual disturbances. This preliminary report describes a single patient (CRO), and documents and investigates abnormally prolonged colour afterimages (concurrent and prolonged perception of colours complementary to the colour of an observed stimulus), perceived motion of static stimuli, and better reading of small than large letters. We also evaluate CRO's visual and vestibular functions in an effort to understand the origin of her experience of room tilt illusion, a disturbing phenomenon not previously observed in individuals with cortical degenerative disease. These visual symptoms are set in the context of a 4-year longitudinal neuropsychological and neuroimaging investigation of CRO's visual and other cognitive skills. We hypothesise that prolonged colour after-images are attributable to relative sparing of V1 inhibitory interneurons; perceived motion of static stimuli reflects weak magnocellular function; better reading of small than large letters indicates a reduced effective field of vision; and room tilt illusion effects are caused by disordered integration of visual and vestibular information. This study contributes to the growing characterisation of PCA whose atypical early visual symptoms are often heterogeneous and frequently under-recognised. |
Jie Cui; Jorge Otero-Millan; Stephen L. Macknik; Mac King; Susana Martinez-Conde Social misdirection fails to enhance a magic illusion Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 103, 2011. @article{Cui2011, Visual, multisensory and cognitive illusions in magic performances provide new windows into the psychological and neural principles of perception, attention, and cognition. We investigated a magic effect consisting of a coin “vanish” (i.e., the perceptual disappearance of a coin after a simulated toss from hand to hand). Previous research has shown that magicians can use joint attention cues such as their own gaze direction to strengthen the observers' perception of magic. Here we presented naïve observers with videos including real and simulated coin tosses to determine if joint attention might enhance the illusory perception of simulated coin tosses. The observers' eye positions were measured, and their perceptual responses simultaneously recorded via button press. To control for the magician's use of joint attention cues, we occluded his head in half of the trials. We found that subjects did not direct their gaze at the magician's face at the time of the coin toss, whether the face was visible or occluded, and that the presence of the magician's face did not enhance the illusion. Thus, our results show that joint attention is not necessary for the perception of this effect. We conclude that social misdirection is redundant and possibly detracting to this very robust sleight-of-hand illusion. We further determined that subjects required multiple trials to effectively distinguish real from simulated tosses; thus the illusion was resilient to repeated viewing. |
Katharine B. Porter; Gideon P. Caplovitz; Peter J. Kohler; Christina M. Ackerman; Peter U. Tse Rotational and translational motion interact independently with form Journal Article In: Vision Research, vol. 51, no. 23-24, pp. 2478–2487, 2011. @article{Porter2011, Do the mechanisms that underlie the perception of translational and rotational object motion show evidence of independent processing? By probing the perceived speed of translating and/or rotating objects, we find that an object's form contributes in independent ways to the processing of translational and rotational motion: In the context of translational motion, it has been shown that the more elongated an object is along its direction of motion, the faster it is perceived to translate; in the context of rotational motion, it has been shown that the sharper the maxima of curvature along an object's contour, the faster it appears to rotate. Here we demonstrate that such rotational form-motion interactions are due solely to the rotational component of combined rotational and translational motion. We conclude that the perception of rotational motion relies on form-motion interactions that are independent of the processing underlying translational motion. |
Elsie Premereur; Wim Vanduffel; Peter Janssen Functional heterogeneity of macaque lateral intraparietal neurons Journal Article In: Journal of Neuroscience, vol. 31, no. 34, pp. 12307–12317, 2011. @article{Premereur2011, The macaque lateral intraparietal area (LIP) has been implicated in many cognitive processes, ranging from saccade planning and spatial attention to timing and categorization. Importantly, different research groups have used different criteria for including LIP neurons in their studies. While some research groups have selected LIP neurons based on the presence of memory-delay activity, other research groups have used other criteria such as visual, presaccadic, and/or memory activity. We recorded from LIP neurons that were selected based on spatially selective saccadic activity but regardless of memory-delay activity in macaque monkeys. To test anticipatory climbing activity, we used a delayed visually guided saccade task with a unimodal schedule of go-times, for which the conditional probability that the go-signal will occur rises monotonically as a function of time. A subpopulation of LIP neurons showed anticipatory activity that mimicked the subjective hazard rate of the go-signal when the animal was planning a saccade toward the receptive field. A large subgroup of LIP neurons, however, did not modulate their firing rates according to the subjective hazard function. These non-anticipatory neurons were strongly influenced by salient visual stimuli appearing in their receptive field, but less so by the direction of the impending saccade. Thus, LIP contains a heterogeneous population of neurons related to saccade planning or visual salience, and these neurons are spatially intermixed. Our results suggest that between-study differences in neuronal selection may have contributed significantly to the findings of different research groups with respect to the functional role of area LIP. |
Kerstin Preuschoff; Bernard Marius Hart; Wolfgang Einhäuser Pupil dilation signals surprise: Evidence for noradrenaline's role in decision making Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 115, 2011. @article{Preuschoff2011, Our decisions are guided by the rewards we expect. These expectations are often based on incomplete knowledge and are thus subject to uncertainty. While the neurophysiology of expected rewards is well understood, less is known about the physiology of uncertainty. We hypothesize that uncertainty, or more specifically errors in judging uncertainty, are reflected in pupil dilation, a marker that has frequently been associated with decision-making, but so far has remained largely elusive to quantitative models. To test this hypothesis, we measure pupil dilation while observers perform an auditory gambling task. This task dissociates two key decision variables – uncertainty and reward – and their errors from each other and from the act of the decision itself. We first demonstrate that the pupil does not signal expected reward or uncertainty per se, but instead signals surprise, that is, errors in judging uncertainty. While this general finding is independent of the precise quantification of these decision variables, we then analyze this effect with respect to a specific mathematical model of uncertainty and surprise, namely risk and risk prediction error. Using this quantification, we find that pupil dilation and risk prediction error are indeed highly correlated. Under the assumption of a tight link between noradrenaline (NA) and pupil size under constant illumination, our data may be interpreted as empirical evidence for the hypothesis that NA plays the same role for uncertainty as dopamine does for reward, namely the encoding of error signals. |
Rodrigo Quian Quiroga; Carlos Pedreira How do we see art: An eye-tracker study Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 98, 2011. @article{Quiroga2011, We describe the pattern of fixations of subjects looking at figurative and abstract paintings from different artists (Molina, Mondrian, Rembrandt, della Francesca) and at modified versions in which different aspects of these art pieces were altered with simple digital manipulations. We show that the fixations of the subjects followed some general common principles (e.g., being attracted to saliency regions) but with a large variability for the figurative paintings, according to the subject's personal appreciation and knowledge. In particular, we found different gazing patterns depending on whether the subject saw the original or the modified version of the painting first. We conclude that the study of gazing patterns obtained by using the eye-tracker technology gives a useful approach to quantify how subjects observe art. |
Stefan Rach; Adele Diederich; Hans Colonius On quantifying multisensory interaction effects in reaction time and detection rate Journal Article In: Psychological Research, vol. 75, no. 2, pp. 77–94, 2011. @article{Rach2011, Both mean reaction time (RT) and detection rate (DR) are important measures for assessing the amount of multisensory interaction occurring in crossmodal experiments, but they are often applied separately. Here we demonstrate that measuring multisensory performance using either RT or DR alone misses out on important information. We suggest an integration of RT and DR into a single measure of multisensory performance: the first index (MRE*) is based on an arithmetic combination of RT and DR, the second (MPE) is constructed from parameters derived from fitting a sequential sampling model to RT and DR data simultaneously. Our approach is illustrated by data from two audio-visual experiments. In the first, a redundant targets detection experiment using stimuli of different intensity, both measures yield similar patterns of results supporting the "principle of inverse effectiveness". The second experiment, introducing stimulus onset asynchrony and differing instructions (focused attention vs. redundant targets task), further supports the usefulness of both indices. Statistical properties of both measures are investigated via bootstrapping procedures. |
M. Raemaekers; Douwe P. Bergsma; Richard J. A. Wezel; G. J. Wildt; Albert V. Berg In: Journal of Neurophysiology, vol. 105, no. 2, pp. 872–882, 2011. @article{Raemaekers2011, Cerebral blindness is a loss of vision as a result of postchiasmatic damage to the visual pathways. Parts of the lost visual field can be restored through training. However, the neuronal mechanisms through which training effects occur are still unclear. We therefore assessed training-induced changes in brain function in eight patients with cerebral blindness. Visual fields were measured with perimetry and retinotopic maps were acquired with functional magnetic resonance imaging (fMRI) before and after vision restoration training. We assessed differences in hemodynamic responses between sessions that represented changes in amplitudes of neural responses and changes in receptive field locations and sizes. Perimetry results showed highly varied visual field recovery with shifts of the central visual field border ranging between 1 and 7°. fMRI results showed that, although retinotopic maps were mostly stable over sessions, there was a small shift of receptive field locations toward a higher eccentricity after training in addition to increases in receptive field sizes. In patients with bilateral brain activation, these effects were stronger in the affected than in the intact hemisphere. Changes in receptive field size and location could account for limited visual field recovery (± 1°), although it could not account for the large increases in visual field size that were observed in some patients. Furthermore, the retinotopic maps strongly matched perimetry measurements before training. These results are taken to indicate that local visual field enlargements are caused by receptive field changes in early visual cortex, whereas large-scale improvement cannot be explained by this mechanism. |
Keith Rayner; Timothy J. Slattery; Denis Drieghe; Simon P. Liversedge Eye movements and word skipping during reading: Effects of word length and predictability Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 2, pp. 514–528, 2011. @article{Rayner2011, Eye movements were monitored as subjects read sentences containing high- or low-predictable target words. The extent to which target words were predictable from prior context was varied: Half of the target words were predictable, and the other half were unpredictable. In addition, the length of the target word varied: The target words were short (4–6 letters), medium (7–9 letters), or long (10–12 letters). Length and predictability both yielded strong effects on the probability of skipping the target words and on the amount of time readers fixated the target words (when they were not skipped). However, there was no interaction in any of the measures examined for either skipping or fixation time. The results demonstrate that word predictability (due to contextual constraint) and word length have strong and independent influences on word skipping and fixation durations. Furthermore, because the long words extended beyond the word identification span, the data indicate that skipping can occur on the basis of partial information in relation to word identity. |
Fabian Schnier; Markus Lappe Differences in inter-saccadic adaptation transfer between inward and outward adaptation Journal Article In: Journal of Neurophysiology, vol. 106, no. 3, pp. 1399–1410, 2011. @article{Schnier2011, Saccadic adaptation is a mechanism to increase or decrease the amplitude gain of subsequent saccades, if a saccade is not on target. Recent research has shown that the mechanism of gain increasing, or outward adaptation, and the mechanism of gain decreasing, or inward adaptation, rely on partly different processes. We investigate how outward and inward adaptation of reactive saccades transfer to other types of saccades, namely scanning, overlap, memory-guided, and gap saccades. Previous research has shown that inward adaptation of reactive saccades transfers only partially to these other saccade types, suggesting differences in the control mechanisms between these saccade categories. We show that outward adaptation transfers stronger to scanning and overlap saccades than inward adaptation, and that the strength of transfer depends on the duration for which the saccade target is visible before saccade onset. Furthermore, we show that this transfer is mainly driven by an increase in saccade duration, which is apparent for all saccade categories. Inward adaptation, in contrast, is accompanied by a decrease in duration and in peak velocity, but only the peak velocity decrease transfers from reactive saccades to other saccade categories, i.e., saccadic duration remains constant or even increases for test saccades of the other categories. Our results, therefore, show that duration and peak velocity are independent parameters of saccadic adaptation and that they are differently involved in the transfer of adaptation between saccade categories. Furthermore, our results add evidence that inward and outward adaptation are different processes. |
Alexander C. Schütz Motion transparency: Depth ordering and smooth pursuit eye movements Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 1–19, 2011. @article{Schuetz2011, When two overlapping, transparent surfaces move in different directions, there is ambiguity with respect to the depth ordering of the surfaces. Little is known about the surface features that are used to resolve this ambiguity. Here, we investigated the influence of different surface features on the perceived depth order and the direction of smooth pursuit eye movements. Surfaces containing more dots, moving opposite to an adapted direction, moving at a slower speed, or moving in the same direction as the eyes were more likely to be seen in the back. Smooth pursuit eye movements showed an initial preference for surfaces containing more dots, moving in a non-adapted direction, moving at a faster speed, and being composed of larger dots. After 300 to 500 ms, smooth pursuit eye movements adjusted to perception and followed the surface whose direction had to be indicated. The differences between perceived depth order and initial pursuit preferences and the slow adjustment of pursuit indicate that perceived depth order is not determined solely by the eye movements. The common effect of dot number and motion adaptation suggests that global motion strength can induce a bias to perceive the stronger motion in the back. |
Alexander C. Schütz; David Souto Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements Journal Article In: Experimental Brain Research, vol. 209, no. 4, pp. 537–549, 2011. @article{Schuetz2011a, Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was either shifted in the direction of the horizontally moving target (forward step), against it (backward step), or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, by an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity was increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors. |
Jens Schwarzbach A simple framework (ASF) for behavioral and neuroimaging experiments based on the psychophysics toolbox for MATLAB Journal Article In: Behavior Research Methods, vol. 43, no. 4, pp. 1194–1201, 2011. @article{Schwarzbach2011, The cognitive neurosciences combine behavioral experiments with acquiring physiological data from different modalities, such as electroencephalography, magnetoencephalography, transcranial magnetic stimulation, and functional magnetic resonance imaging, all of which require excellent timing. A simple framework is proposed in which uni- and multimodal experiments can be conducted with minimal adjustments when one switches between modalities. The framework allows the beginner to quickly become productive and the expert to be flexible and not constrained by the tool by building on existing software such as MATLAB and the Psychophysics Toolbox, which already are serving a large community. The framework allows running standard experiments but also supports and facilitates exciting new possibilities for real-time neuroimaging and state-dependent stimulation. |
Christopher R. Sears; Kristin R. Newman; Jennifer D. Ference; Charmaine L. Thomas Attention to emotional images in previously depressed individuals: An eye-tracking study Journal Article In: Cognitive Therapy and Research, vol. 35, no. 6, pp. 517–528, 2011. @article{Sears2011, Depression and dysphoria are associated with attention and memory biases for emotional information (Williams et al. 1997; Yiend in Cogn Emot 24:3-47, 2010), which are postulated to reflect stable vulnerability factors for the development and recurrence of depression (Gotlib and Joormann in Annu Rev Clin Psychol 6:285-312, 2010). The present study looked for evidence of attention and memory biases in individuals with a self-reported history of depression, compared to individuals with dysphoria and individuals with no history of depression. Participants viewed sets of depression-related, anxiety-related, positive, and neutral images while their eye fixations were tracked and recorded. Incidental recognition of the images was assessed 7 days later. Consistent with previous studies (Kellough et al. in Behav Res Therapy 46:1238-1243, 2008; Sears et al. in Cogn Emot 24:1349-1368, 2010), dysphoric individuals spent significantly less time attending to positive images than never depressed individuals, and it was also found that previously depressed individuals exhibited the same attentional bias. Previously depressed individuals also attended to anxiety-related images more than never depressed individuals. A bias in the initial orienting of attention was observed, with previously depressed and dysphoric individuals orienting to depression-images more frequently than never depressed participants. The recognition memory data showed that previously depressed and dysphoric individuals had poorer memory than never depressed individuals, but there was no evidence of a memory bias for either group. Implications for cognitive models of depression and depression vulnerability are discussed. |
Rachael D. Rubin; Sarah Brown-Schmidt; Melissa C. Duff; Daniel Tranel; Neal J. Cohen How do I remember that I know you know that I know? Journal Article In: Psychological Science, vol. 22, no. 12, pp. 1574–1582, 2011. @article{Rubin2011, Communication is aided greatly when speakers and listeners take advantage of mutually shared knowledge (i.e., common ground). How such information is represented in memory is not well known. Using a neuropsychological-psycholinguistic approach to real-time language understanding, we investigated the ability to form and use common ground during conversation in memory-impaired participants with hippocampal amnesia. Analyses of amnesics' eye fixations as they interpreted their partner's utterances about a set of objects demonstrated successful use of common ground when the amnesics had immediate access to common-ground information, but dramatic failures when they did not. These findings indicate a clear role for declarative memory in maintenance of common-ground representations. Even when amnesics were successful, however, the eye movement record revealed subtle deficits in resolving potential ambiguity among competing intended referents; this finding suggests that declarative memory may be critical to more basic aspects of the on-line resolution of linguistic ambiguity. |
Adam J. Sachs; Paul S. Khayat; Robert Niebergall; Julio C. Martinez-Trujillo A metric-based analysis of the contribution of spike timing to contrast and motion direction coding by single neurons in macaque area MT Journal Article In: Brain Research, vol. 1368, pp. 163–184, 2011. @article{Sachs2011, Spike timing is thought to contribute to the coding of motion direction information by neurons in macaque area MT. Here, we examined whether spike timing also contributes to the coding of stimulus contrast. We applied a metric-based approach to spike trains fired by MT neurons in response to stimuli that varied in contrast, or direction. We assessed the performance of three metrics, Dspike and Dproduct (containing spike count and timing information), and the spike count metric Dcount. We analyzed responses elicited during the first 200 msec of stimulus presentation from 205 neurons. For both contrast and direction, the large majority of neurons showed the highest mutual information using Dspike, followed by Dproduct, and Dcount. This was corroborated by the performance of a theoretical observer model at discriminating contrast and direction using the three metrics. Our results demonstrate that spike timing can contribute to contrast coding in MT neurons, and support previous reports of its potential contribution to direction coding. Furthermore, they suggest that a combination of spike count with periodic and non-periodic spike timing information (contained in Dspike, but not in Dproduct and Dcount, which are insensitive to spike counts and timing, respectively) provides the largest coding advantage in spike trains fired by MT neurons during contrast and direction discrimination. |
Navid G. Sadeghi; Vani Pariyadath; Sameer Apte; David M. Eagleman; Erik P. Cook Neural correlates of subsecond time distortion in the middle temporal area of visual cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 23, no. 12, pp. 3829–3840, 2011. @article{Sadeghi2011, How does the brain represent the passage of time at the subsecond scale? Although different conceptual models for time perception have been proposed, its neurophysiological basis remains unknown. We took advantage of a visual duration illusion produced by stimulus novelty to link changes in cortical activity in monkeys with distortions of duration perception in humans. We found that human subjects perceived the duration of a subsecond motion pulse with a novel direction longer than a motion pulse with a repeated direction. Recording from monkeys viewing identical motion stimuli but performing a different behavioral task, we found that both the duration and amplitude of the neural response in the middle temporal area of visual cortex were positively correlated with the degree of novelty of the motion direction. In contrast to previous accounts that attribute distortions in duration perception to changes in the speed of a putative internal clock, our results suggest that the known adaptive properties of neural activity in visual cortex contribute to subsecond temporal distortions. |
Anne Pier Salverda; Gerry T. M. Altmann Attentional capture of objects referred to by spoken language Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1122–1133, 2011. @article{Salverda2011, Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention. |
Ardi Roelofs Attention, exposure duration, and gaze shifting in naming performance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 3, pp. 860–873, 2011. @article{Roelofs2011, Two experiments are reported in which the role of attribute exposure duration in naming performance was examined by tracking eye movements. Participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to manually indicate its direction. The color attribute (Experiment 1) or the complete color-word stimulus (Experiment 2) was removed from the screen 100 ms after stimulus onset. Compared with presentation until trial offset, removing the color attribute diminished Stroop interference, as well as facilitation effects in color naming latencies, whereas removing the complete stimulus diminished interference only. Attribute and stimulus removal reduced the latency of gaze shifting, which suggests decreased rather than increased attentional demand. These results provide evidence that limiting exposure duration contributes to attribute naming performance by diminishing the extent to which irrelevant attributes are processed, which reduces attentional demand. |
Martin Rolfs; Donatas Jonikaitis; Heiner Deubel; Patrick Cavanagh Predictive remapping of attention across eye movements Journal Article In: Nature Neuroscience, vol. 14, no. 2, pp. 252–258, 2011. @article{Rolfs2011, Many cells in retinotopic brain areas increase their activity when saccades (rapid eye movements) are about to bring stimuli into their receptive fields. Although previous work has attempted to look at the functional correlates of such predictive remapping, no study has explicitly tested for better attentional performance at the future retinal locations of attended targets. We found that, briefly before the eyes start moving, attention drawn to the targets of upcoming saccades also shifted to those retinal locations that the targets would cover once the eyes had moved, facilitating future movements. This suggests that presaccadic visual attention shifts serve to both improve presaccadic perceptual processing at the target locations and speed subsequent eye movements to their new postsaccadic locations. Predictive remapping of attention provides a sparse, efficient mechanism for keeping track of relevant parts of the scene when frequent rapid eye movements provoke retinal smear and temporal masking. |
Nicholas M. Ross; Linda J. Lanyon; Jaya Viswanathan; Dara S. Manoach; Jason J. S. Barton Human prosaccades and antisaccades under risk: Effects of penalties and rewards on visual selection and the value of actions Journal Article In: Neuroscience, vol. 196, pp. 168–177, 2011. @article{Ross2011, Monkey studies report greater activity in the lateral intraparietal area and more efficient saccades when targets coincide with the location of prior reward cues, even when cue location does not indicate which responses will be rewarded. This suggests that reward can modulate spatial attention and visual selection independent of the "action value" of the motor response. Our goal was first to determine whether reward modulated visual selection similarly in humans, and next, to discover whether reward and penalty differed in effect, if cue effects were greater for cognitively demanding antisaccades, and if financial consequences that were contingent on stimulus location had spatially selective effects. We found that motivational cues reduced all latencies, more for reward than penalty. There was an "inhibition-of-return"-like effect at the location of the cue, but unlike the results in monkeys, cue valence did not modify this effect in prosaccades, and the inhibition-of-return effect was slightly increased rather than decreased in antisaccades. When financial consequences were contingent on target location, locations without reward or penalty consequences lost the benefits seen in noncontingent trials, whereas locations with consequences maintained their gains. We conclude that unlike monkeys, humans show reward effects not on visual selection but on the value of actions. The human saccadic system has both the capacity to enhance responses to multiple locations simultaneously, and the flexibility to focus motivational enhancement only on locations with financial consequences. Reward is more effective than penalty, and both interact with the additional attentional demands of the antisaccade task. |
Raju P. Sapkota; Shahina Pardhan; Ian Linde The impact of extrafoveal information on visual short-term memory for object position Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 5, pp. 574–585, 2011. @article{Sapkota2011, The role of extrafoveal information in visual short-term memory has been investigated relatively little, and, in most existing studies, using verbalisable stimuli susceptible to the recruitment of long-term memory (LTM). In addition, little is known about the impact of extrafoveal information available pre- and posttarget foveation, as it is typical to provide extrafoveal information prior to the foveation of memory targets. In this study, two object-position recognition experiments were conducted (each with two conditions) to establish the impact of extrafoveal information provided before and after the foveation of memory targets. Stimuli comprised 1/f noise discs that minimised the recruitment of LTM by eliminating verbal and semantic cues. Overall, a greater hit rate was found where extrafoveal information was available; however, performance analyses in which extrafoveal information was considered relative to the temporal lag at which target stimuli were foveated reveal both costs and benefits. A beneficial effect arose only where extrafoveal information was provided after the target had been foveated, but not prior to target foveation. Findings are discussed in terms of recency and extrafoveal perception effects, incorporating a postfoveation object-file refresh mechanism. |
Paige E. Scalf; Chandramalika Basak; Diane M. Beck Attention does more than modulate suppressive interactions: Attending to multiple items Journal Article In: Experimental Brain Research, vol. 212, no. 2, pp. 293–304, 2011. @article{Scalf2011, Directing attention to a visual item enhances its representations, making it more likely to guide behavior (Corbetta et al. 1991). Attention is thought to produce this enhancement by biasing suppressive interactions among multiple items in visual cortex in favor of the attended item (e.g., Desimone and Duncan 1995; Reynolds and Heeger 2009). We ask whether target enhancement and modulation of suppressive interactions are in fact inextricably linked or whether they can be decoupled. In particular, we ask whether simultaneously directing attention to multiple items may be one means of dissociating the influence of attention-related enhancement from the effects of inter-item suppression. When multiple items are attended, suppressive interactions in visual cortex limit the effectiveness with which attention may act on their representations, presumably because "biasing" the interactions in favor of a single item is no longer possible (Scalf and Beck 2010). In this experiment, we directly investigate whether applying attention to multiple competing stimulus items has any influence on either their evoked signal or their suppressive interactions. Both BOLD signal evoked by the items in V4 and behavioral responses to those items were significantly compromised by simultaneous presentation relative to sequential presentation, indicating that when the items appeared at the same time, they interacted in a mutually suppressive manner that compromised their ability to guide behavior. Attention significantly enhanced signal in V4. The attentional status of the items, however, had no influence on the suppressive effects of simultaneous presentation. To our knowledge, these data are the first to explicitly decouple the effects of top-down attention from those of inter-item suppression. |
Patrick Schleifer; Karin Landerl Subitizing and counting in typical and atypical development Journal Article In: Developmental Science, vol. 14, no. 2, pp. 280–291, 2011. @article{Schleifer2011, Enumeration performance in standard dot counting paradigms was investigated for different age groups with typical and atypically poor development of arithmetic skills. Experiment 1 showed a high correspondence between response times and saccadic frequencies for four age groups with typical development. Age differences were more marked for the counting than the subitizing range. In Experiment 2 we found a discontinuity between subitizing and counting for dyscalculic children; however, their subitizing slopes were steeper than those of typically developing control groups, indicating a dysfunctional subitizing mechanism. Across both experiments a number of factors could be identified that affect enumeration in the subitizing and the counting range differentially. These differential patterns further support the assumption of two qualitatively different enumeration processes. |
Joseph Schmidt; Gregory J. Zelinsky Visual search guidance is best after a short delay Journal Article In: Vision Research, vol. 51, no. 6, pp. 535–545, 2011. @article{Schmidt2011, Search displays are typically presented immediately after a target cue, but in the real-world, delays often exist between target designation and search. Experiments 1 and 2 asked how search guidance changes with delay. Targets were cued using a picture or text label, each for 3000 ms, followed by a delay up to 9000 ms before the search display. Search stimuli were realistic objects, and guidance was quantified using multiple eye movement measures. Text-based cues showed a non-significant trend towards greater guidance following any delay relative to a no-delay condition. However, guidance from a pictorial cue increased sharply 300–600 ms after preview offset. Experiment 3 replicated this guidance enhancement using shorter preview durations while equating the time from cue onset to search onset, demonstrating that the guidance benefit is linked to preview offset rather than a more complete encoding of the target. Experiment 4 showed that enhanced guidance persists even with a mask flashed at preview offset, suggesting an explanation other than visual priming. We interpret our findings as evidence for the rapid consolidation of target information into a guiding representation, which attains its maximum effectiveness shortly after preview offset. |
Eva Reinisch; Alexandra Jesse; James M. McQueen Speaking rate from proximal and distal contexts is used during word segmentation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 3, pp. 978–996, 2011. @article{Reinisch2011, A series of eye-tracking and categorization experiments investigated the use of speaking-rate information in the segmentation of Dutch ambiguous-word sequences. Juncture phonemes with ambiguous durations (e.g., [s] in 'eens (s)peer,' "once (s)pear," [t] in 'nooit (t)rap,' "never staircase/quick") were perceived as longer and hence more often as word-initial when following a fast than a slow context sentence. Listeners used speaking-rate information as soon as it became available. Rate information from a context proximal to the juncture phoneme and from a more distal context was used during on-line word recognition, as reflected in listeners' eye movements. Stronger effects of distal context, however, were observed in the categorization task, which measures the off-line results of the word-recognition process. In categorization, the amount of rate context had the greatest influence on the use of rate information, but in eye tracking, the rate information's proximal location was the most important. These findings constrain accounts of how speaking rate modulates the interpretation of durational cues during word recognition by suggesting that rate estimates are used to evaluate upcoming phonetic information continuously during prelexical speech processing. |
Benedikt Reuter; David Möllers; Julia Bender; Asysa Schwehn; Juliane Ziemek; Jürgen Gallinat; Norbert Kathmann Volitional saccades and attentional mechanisms in schizophrenia patients and healthy control subjects Journal Article In: Psychophysiology, vol. 48, no. 10, pp. 1333–1339, 2011. @article{Reuter2011, Schizophrenia (SZ) patients showed increased volitional saccade latencies, suggesting deficient volitional initiation of action. Yet increased volitional saccade latencies may also result from deficits in attention shifts. To dissociate attention shifting and saccade initiation, we asked 25 SZ patients and 25 healthy subjects to make saccades toward newly appearing (onset) targets and toward the loci of disappearing (offset) targets. Similar onsets and offsets were also used as attention cues in a Posner-type manual task. As expected, onsets and offsets had similar effects on attention. In contrast, saccade latencies were considerably longer with offset compared to onset targets, reflecting additional time for volitional saccade initiation. Unexpectedly, SZ patients had normal saccade latencies. Presumably, the expected deficit was compensated by decreased fixation-related neural activity, which was induced by the disappearance of fixation stimuli. |
Helen J. Richards; Julie A. Hadwin; Valerie Benson; Michael J. Wenger; Nick Donnelly The influence of anxiety on processing capacity for threat detection Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 5, pp. 883–889, 2011. @article{Richards2011, In the present study, we explored the proposition that an individual's capacity for threat detection is related to his or her trait anxiety. Using a redundant signals paradigm with concurrent measurements of reaction times and eye movements, participants indicated the presence or absence of an emotional target face (angry or happy) in displays containing no targets, one target, or two targets. We used estimates of the orderings on the hazard functions of the RT distributions as measures of processing capacity (Townsend & Ashby, 1978; Wenger & Gibson, Journal of Experimental Psychology: Human Perception and Performance, 30, 708–719, 2004) to assess whether self-reported anxiety and the affective state of the face interacted with the level of perceptual load (i.e., the number of targets). Results indicated that anxiety was associated with fewer eye movements and increased processing capacity to detect multiple (vs. single) threatening faces. The data are consistent with anxiety influencing threat detection via a broadly tuned attentional mechanism (Eysenck, Derakshan, Santos, & Calvo, Emotion, 7, 336–353, 2007). |
Brian A. Richardson; Anusha Ratneswaran; James Lyons; Ramesh Balasubramaniam The time course of online trajectory corrections in memory-guided saccades Journal Article In: Experimental Brain Research, vol. 212, no. 3, pp. 457–469, 2011. @article{Richardson2011, Recent investigations have revealed the kinematics of horizontal saccades are less variable near the end of the trajectory than during the course of execution. Converging evidence indicates that oculomotor networks use online sensorimotor feedback to correct for initial trajectory errors. It is also known that oculomotor networks express saccadic corrections with decreased efficiency when responses are made toward memorized locations. The present research investigated whether repetitive motor timekeeping influences online feedback-based corrections in predictive saccades. Predictive saccades are a subclass of memory-guided saccades and are observed when one makes series of timed saccades. We hypothesized that cueing predictive saccades in a sequence would facilitate the expression of trajectory corrections. Seven participants produced a number of single unpaced, visually guided saccades, and also sequences of timed predictive saccades. Kinematic and trajectory variability were used to measure the expression of online saccadic corrections at a number of time indices in saccade trajectories. In particular, we estimated the minimum time required to implement feedback-based corrections, which was consistently 37 ms. Our observations demonstrate that motor commands in predictive memory-guided saccades can be parameterized by spatial working memory and retain the accuracy of online trajectory corrections typically associated with visually guided behavior. In contrast, untimed memory-guided saccades exhibited diminished kinematic evidence for online corrections. We conclude that motor timekeeping and sequencing contributed to efficient saccadic corrections. These results contribute to an evolving view of the interactions between motor planning and spatial working memory, as they relate to oculomotor control. |
Lily Riggs; Douglas A. McQuiggan; Adam K. Anderson; Jennifer D. Ryan Eye movement monitoring reveals differential influences of emotion on memory Journal Article In: Frontiers in Psychology, vol. 2, pp. 205, 2011. @article{Riggs2011, Research shows that memory for emotional aspects of an event may be enhanced at the cost of impaired memory for surrounding peripheral details. However, this has only been assessed directly via verbal reports which reveal the outcome of a long stream of processing but cannot shed light on how/when emotion may affect the retrieval process. In the present experiment, eye movement monitoring (EMM) was used as an indirect measure of memory as it can reveal aspects of online memory processing. For example, do emotions modulate the nature of memory representations or the speed with which such memories can be accessed? Participants viewed central negative and neutral scenes surrounded by three neutral objects and after a brief delay, memory was assessed indirectly via EMM and then directly via verbal reports. Consistent with the previous literature, emotion enhanced central and impaired peripheral memory as indexed by eye movement scanning and verbal reports. This suggests that eye movement scanning may contribute to and/or be related to conscious access of memory. However, the central/peripheral tradeoff effect was not observed in an early measure of eye movement behavior, i.e., participants were faster to orient to a critical region of change in the periphery irrespective of whether it was previously studied in a negative or neutral context. These findings demonstrate emotion's differential influences on different aspects of retrieval. In particular, emotion appears to affect the detail within, and/or the evaluation of, stored memory representations, but it may not affect the initial access to those representations. |
Lily Riggs; Douglas A. McQuiggan; Norman A. S. Farb; Adam K. Anderson; Jennifer D. Ryan The role of overt attention in emotion-modulated memory Journal Article In: Emotion, vol. 11, no. 4, pp. 776–785, 2011. @article{Riggs2011a, The presence of emotional stimuli results in a central/peripheral tradeoff effect in memory: memory for central details is enhanced at the cost of peripheral items. It has been assumed that emotion-modulated differences in memory are the result of differences in attention, but this has not been tested directly. The present experiment used eye movement monitoring as an index of overt attention allocation and mediation analysis to determine whether differences in attention were related to subsequent memory. Participants viewed negative and neutral scenes surrounded by three neutral objects and were then given a recognition memory test. The results revealed evidence in support of a central/peripheral tradeoff in both attention and memory. However, contrary with previous assumptions, whereas attention partially mediated emotion-enhanced memory for central pictures, it did not explain the entire relationship. Further, although centrally presented emotional stimuli led to decreased number of eye fixations toward the periphery, these differences in viewing did not contribute to emotion-impaired memory for specific details pertaining to the periphery. These findings suggest that the differential influence of negative emotion on central versus peripheral memory may result from other cognitive influences in addition to overt visual attention or on postencoding processes. |
Sarah Risse; Reinhold Kliegl Adult age differences in the perceptual span during reading Journal Article In: Psychology and Aging, vol. 26, no. 2, pp. 451–460, 2011. @article{Risse2011, Following up on research suggesting an age-related reduction in the rightward extent of the perceptual span during reading (Rayner, Castelhano, & Yang, 2009), we compared old and young adults in an N + 2-boundary paradigm in which a nonword preview of word N + 2 or word N + 2 itself is replaced by the target word once the eyes cross an invisible boundary located after word N. The intermediate word N + 1 was always three letters long. Gaze durations on word N + 2 were significantly shorter for identical than nonword N + 2 preview both for young and for old adults, with no significant difference in this preview benefit. Young adults, however, did modulate their gaze duration on word N more strongly than old adults in response to the difficulty of the parafoveal word N + 1. Taken together, the results suggest a dissociation of preview benefit and parafoveal-on-foveal effect. Results are discussed in terms of age-related decline in resilience towards distributed processing while simultaneously preserving the ability to integrate parafoveal information into foveal processing. As such, the present results relate to proposals of regulatory compensation strategies older adults use to secure an overall reading speed very similar to that of young adults. |
Yang Zhang; Ming Zhang Spatial working memory load impairs manual but not saccadic inhibition of return Journal Article In: Vision Research, vol. 51, no. 1, pp. 147–153, 2011. @article{Zhang2011, Although spatial working memory has been shown to play a central role in manual IOR (Castel, Pratt, & Craik, 2003), it is so far unclear whether spatial working memory is involved in saccadic IOR. The present study sought to address this question by using a dual task paradigm, in which the participants performed an IOR task while keeping a set of locations in spatial working memory. While manual IOR was eliminated, saccadic IOR was not affected by spatial working memory load. These findings suggest that saccadic IOR does not rely on spatial working memory to process inhibitory tagging. |
Huihui Zhou; Robert Desimone Feature-based attention in the Frontal Eye Field and area V4 during visual search Journal Article In: Neuron, vol. 70, no. 6, pp. 1205–1217, 2011. @article{Zhou2011, When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target. |
Shlomit Yuval-Greenberg; Leon Y. Deouell Scalp-recorded induced gamma-band responses to auditory stimulation and its correlations with saccadic muscle activity Journal Article In: Brain Topography, vol. 24, no. 1, pp. 30–39, 2011. @article{YuvalGreenberg2011, We previously showed that the transient broadband induced gamma-band response in EEG (iGBRtb) appearing around 200-300 ms following a visual stimulus reflects the contraction of extra-ocular muscles involved in the execution of saccades, rather than neural oscillations. Several previous studies reported induced gamma-band responses also following auditory stimulation. It is still an open question whether, similarly to visual paradigms, such auditory paradigms are also sensitive to the saccadic confound. In the current study we address this question using simultaneous eye-tracking and EEG recordings during an auditory oddball paradigm. Subjects were instructed to respond to a rare target defined by sound source location, while fixating on a central screen. Results show that, similar to what was found in visual paradigms, saccadic rate displayed typical temporal dynamics including a post-stimulus decrease followed by an increase. This increase was more moderate, had a longer latency, and was less consistent across subjects than was found in the visual case. Crucially, the temporal dynamics of the induced gamma response were similar to those of saccadic-rate modulation. This suggests that the auditory induced gamma-band responses recorded on the scalp may also be affected by saccadic muscle activity. |
C. Yu-Wai-Man; K. Petheram; A. W. Davidson; T. Williams; P. G. Griffiths A supranuclear disorder of ocular motility as a rare initial presentation of motor neurone disease Journal Article In: Neuro-Ophthalmology, vol. 35, no. 1, pp. 38–39, 2011. @article{YuWaiMan2011, A case is described of motor neurone disease presenting with an ocular motor disorder characterised by saccadic intrusions, impaired horizontal and vertical saccades, and apraxia of eyelid opening. The occurrence of eye movement abnormalities in motor neurone disease is discussed. |
Michael Zehetleitner; Michael Hegenloh; Hermann J. Müller Visually guided pointing movements are driven by the salience map Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 24–24, 2011. @article{Zehetleitner2011, Visual salience maps are assumed to mediate target selection decisions in a motor-unspecific manner; accordingly, modulations of salience influence yes/no target detection or left/right localization responses in manual key-press search tasks, as well as ocular or skeletal movements to the target. Although widely accepted, this core assumption is based on little psychophysical evidence. At least four modulations of salience are known to influence the speed of visual search for feature singletons: (i) feature contrast, (ii) cross-trial dimension sequence, (iii) semantic pre-cueing of the target dimension, and (iv) dimensional target redundancy. If salience also guides manual pointing movements, their initiation latencies (and durations) should be affected by the same four manipulations of salience. Four experiments, each examining one of these manipulations, revealed this to be the case. Thus, these effects are seen independently of the motor response required to signal the perceptual decision (e.g., directed manual pointing as well as simple yes/no detection responses). This supports the notion of a motor-unspecific salience map, which guides covert attention as well as overt eye and hand movements. |
Minnan Xu-Wilson; Jing Tian; Reza Shadmehr; David S. Zee TMS perturbs saccade trajectories and unmasks an internal feedback controller for saccades Journal Article In: Journal of Neuroscience, vol. 31, no. 32, pp. 11537–11546, 2011. @article{XuWilson2011, When we applied a single pulse of transcranial magnetic stimulation (TMS) to any part of the human head during a saccadic eye movement, the ongoing eye velocity was reduced as early as 45 ms after the TMS, and lasted ∼32 ms. The perturbation to the saccade trajectory was not due to a mechanical effect of the lid on the eye (e.g., from blinks). When the saccade involved coordinated movements of both the eyes and the lids, e.g., in vertical saccades, TMS produced a synchronized inhibition of the motor commands to both eye and lid muscles. The TMS-induced perturbation of the eye trajectory did not show habituation with repetition, and was present in both pro-saccades and anti-saccades. Despite the perturbation, the eye trajectory was corrected within the same saccade with compensatory motor commands that guided the eyes to the target. This within-saccade correction did not rely on visual input, suggesting that the brain monitored the oculomotor commands as the saccade unfolded, maintained a real-time estimate of the position of the eyes, and corrected for the perturbation. TMS disrupted saccades regardless of the location of the coil on the head, suggesting that the coil discharge engages a nonhabituating startle-like reflex system. This system affects ongoing motor commands upstream of the oculomotor neurons, possibly at the level of the superior colliculus or omnipause neurons. Therefore, a TMS pulse centrally perturbs saccadic motor commands, which are monitored possibly via efference copy and are corrected via internal feedback. |
Qing Yang; Zoï Kapoula Distinct control of initiation and metrics of memory-guided saccades and vergence by the FEF: A TMS study Journal Article In: PLoS ONE, vol. 6, no. 5, pp. e20322, 2011. @article{Yang2011, BACKGROUND: The initiation of memory-guided saccades is known to be controlled by the frontal eye field (FEF). Recent physiological studies showed the existence of an area close to the FEF that also controls vergence initiation and execution. This study explores the effect of transcranial magnetic stimulation (TMS) over the FEF on the control of memory-guided saccade-vergence eye movements. METHODOLOGY/PRINCIPAL FINDINGS: Subjects had to make an eye movement in dark towards a target flashed 1 sec earlier (memory delay); the location of the target relative to fixation point was such as to require either a vergence along the median plane, or a saccade, or a saccade with vergence; trials were interleaved. Single pulse TMS was applied on the left or right FEF; it was delivered at 100 ms after the end of the memory delay, i.e. extinction of the fixation LED that was the "go" signal. Twelve healthy subjects participated in the study. TMS of the left or right FEF prolonged the latency of all types of eye movements; the increase varied from 21 to 56 ms and was particularly strong for divergence movements. This indicates that the FEF is involved in the initiation of all types of memory-guided movement in 3D space. TMS of the FEF also altered accuracy, but only for leftward saccades combined with either convergence or divergence; intrasaccadic vergence also increased after TMS of the FEF. CONCLUSIONS/SIGNIFICANCE: The results suggest anisotropy in the quality of space memory and are discussed in the context of other known perceptual motor anisotropies. |
Shun-Nan Yang; Yu-Chi Tai; Hannu Laukkanen; James E. Sheedy Effects of ocular transverse chromatic aberration on peripheral word identification Journal Article In: Vision Research, vol. 51, no. 21-22, pp. 2273–2281, 2011. @article{Yang2011a, Transverse chromatic aberration (TCA) smears the retinal image of peripheral stimuli. We previously found that TCA significantly reduces the ability to recognize letters presented in the near fovea by degrading image quality and exacerbating the crowding effect from adjacent letters. The present study examined whether TCA has a significant effect on near foveal and peripheral word identification, and whether within-word orthographic facilitation interacts with the TCA effect to affect word identification. Subjects were briefly presented a 6- to 7-letter word of high or low frequency in each trial. Target words were generated with weak or strong horizontal color fringe to attenuate the TCA in the right periphery and exacerbate it in the left. The center of the target word was 1°, 2°, 4°, and 6° to the left or right of a fixation point. Subjects' eye position was monitored with an eye-tracker to ensure proper fixation before target presentation. They were required to report the identity of the target word as quickly and accurately as possible. Results show a significant effect of color fringe on the latency and accuracy of word recognition, indicating an effect of TCA. The observed TCA effect was more salient in the right periphery and was more strongly modulated by word frequency there. Individuals' subjective preference for color-fringed text was correlated with the TCA effect in the near periphery. Our results suggest that TCA significantly affects peripheral word identification, especially when the word is located in the right periphery. Contextual facilitation such as word frequency interacts with TCA to influence the accuracy and latency of word recognition. |
Victoria Yanulevskaya; Jan Bernard C. Marsman; Frans W. Cornelissen; Jan Mark Geusebroek An image statistics-based model for fixation prediction Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 94–104, 2011. @article{Yanulevskaya2011, The problem of predicting where people look at, or equivalently salient region detection, has been related to the statistics of several types of low-level image features. Among these features, contrast and edge information seem to have the highest correlation with the fixation locations. The contrast distribution of natural images can be adequately characterized using a two-parameter Weibull distribution. This distribution catches the structure of local contrast and edge frequency in a highly meaningful way. We exploit these observations and investigate whether the parameters of the Weibull distribution constitute a simple model for predicting where people fixate when viewing natural images. Using a set of images with associated eye movements, we assess the joint distribution of the Weibull parameters at fixated and non-fixated regions. Then, we build a simple classifier based on the log-likelihood ratio between these two joint distributions. Our results show that as few as two values per image region are already enough to achieve a performance comparable with the state-of-the-art in bottom-up saliency prediction. |
Bo Yao; Christoph Scheepers Contextual modulation of reading rate for direct versus indirect speech quotations Journal Article In: Cognition, vol. 121, no. 3, pp. 447–453, 2011. @article{Yao2011, In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former. |
Jian-Gao Yao; Xin Gao; Hong-Mei Yan; Chao-Yi Li Field of attention for instantaneous object recognition Journal Article In: PLoS ONE, vol. 6, no. 1, pp. e16343, 2011. @article{Yao2011a, Instantaneous object discrimination and categorization are fundamental cognitive capacities performed with the guidance of visual attention. Visual attention enables selection of a salient object within a limited area of the visual field, which we refer to as the "field of attention" (FA). Though there is some evidence concerning the spatial extent of object recognition, the following questions remain unanswered: (a) how large is the FA for rapid object categorization, (b) how accuracy of attention is distributed over the FA, and (c) how fast complex objects can be categorized when presented against backgrounds formed by natural scenes. |
Eiling Yee; Stacy Huffstetler; Sharon L. Thompson-Schill Function follows form: Activation of shape and function features during object identification Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 3, pp. 348–363, 2011. @article{Yee2011, Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function follows form. We used eye movements to explore whether activating one object's concept leads to the activation of others that share perceptual (shape) or abstract (function) features. Participants viewed 4-picture displays and clicked on the picture corresponding to a heard word. In critical trials, the conceptual representation of 1 of the objects in the display was similar in shape or function (i.e., its purpose) to the heard word. Importantly, this similarity was not apparent in the visual depictions (e.g., for the target Frisbee, the shape-related object was a triangular slice of pizza, a shape that a Frisbee cannot take); preferential fixations on the related object were therefore attributable to overlap of the conceptual representations on the relevant features. We observed relatedness effects for both shape and function, but shape effects occurred earlier than function effects. We discuss the implications of these findings for current accounts of the representation of semantic memory. |
Li-Hao Yeh; Ana I. Schwartz; Aaron L. Baule The impact of text-structure strategy instruction on the text recall and eye-movement patterns of second language English readers Journal Article In: Reading Psychology, vol. 32, no. 6, pp. 495–519, 2011. @article{Yeh2011, Previous studies have demonstrated the efficacy of the Text Structure Strategy for improving text recall. The strategy emphasizes the identification of text structure for encoding and recalling information. Traditionally, the efficacy of this strategy has been measured through free recall. The present study examined whether recall and eye-movement patterns of second language English readers would benefit from training on the strategy. Participants' free recall and eye-movement patterns were measured before and after training. There was a significant increase in recall at posttest and a change in eye-movement patterns, reflecting additional processing time of phrases and words signaling the text structure. |
Serap Yiğit-Elliott; John Palmer; Cathleen M. Moore Distinguishing blocking from attenuation in visual selective attention Journal Article In: Psychological Science, vol. 22, no. 6, pp. 771–780, 2011. @article{YigitElliott2011, Sensory information must be processed selectively in order to represent the world and guide behavior. How does such selection occur? Here we consider two alternative classes of selection mechanisms: In blocking, unattended stimuli are blocked entirely from access to downstream processes, and in attenuation, unattended stimuli are reduced in strength but if strong enough can still access downstream processes. Existing evidence as to whether blocking or attenuation is a more accurate model of human performance is mixed. Capitalizing on a general distinction between blocking and attenuation—blocking cannot be overcome by strong stimuli, whereas attenuation can—we measured how attention interacted with the strength of stimuli in two spatial selection paradigms, spatial filtering and spatial monitoring. The evidence was consistent with blocking for the filtering paradigm and with attenuation for the monitoring paradigm. This approach provides a general measure of the fate of unattended stimuli. |
Lynsey Wolter; Kristen Skovbroten Gorman; Michael K. Tanenhaus Scalar reference, contrast and discourse: Separating effects of linguistic discourse from availability of the referent Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 299–317, 2011. @article{Wolter2011, Listeners expect that a definite noun phrase with a pre-nominal scalar adjective (e.g., the big ...) will refer to an entity that is part of a set of objects contrasting on the scalar dimension, e.g., size (Sedivy, Tanenhaus, Chambers, & Carlson, 1999). Two visual world experiments demonstrate that uttering a referring expression with a scalar adjective makes all members of the relevant contrast set more salient in the discourse model, facilitating subsequent reference to other members of that contrast set. Moreover, this discourse effect is caused primarily by linguistic mention of a scalar adjective and not by the listener's prior visual or perceptual experience. These experiments demonstrate that language processing is sensitive to which information was introduced by linguistic mention, and that the visual world paradigm can be used to tease apart the separate contributions of visual and linguistic information to reference resolution. |
Jason H. Wong; Matthew S. Peterson The interaction between memorized objects and abrupt onsets in oculomotor capture Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 6, pp. 1768–1779, 2011. @article{Wong2011, Recent evidence has been found for a source of task-irrelevant oculomotor capture (defined as when a salient event draws the eyes away from a primary task) that originates from working memory. An object memorized for a nonsearch task can capture the eyes during search. Here, an experiment was conducted that generated interactions between the presence of a memorized object (a colored disk) with the abrupt onset of a new object during visual search. The goal was to compare memory-driven oculomotor capture to oculomotor capture caused by an abrupt onset. This has implications for saccade programming theories, which have little to say about saccades that are influenced by object working memory. Results showed that memorized objects capture the eyes at nearly the same rate as abrupt onsets. When the abrupt onset and a memorized color coincide in the same object, this combination leads to even greater oculomotor capture. Finally, latencies support the competitive integration model: Shorter saccade latencies were found when the memorized color combined with the onset captured the eyes, as compared to either color or onset only. Longer latencies were also found when the color and onset occurred in the same display but were spatially separated. |
Z. V. J. Woodhead; S. L. E. Brownsett; N. S. Dhanjal; C. Beckmann; Richard J. S. Wise The visual word form system in context Journal Article In: Journal of Neuroscience, vol. 31, no. 1, pp. 193–199, 2011. @article{Woodhead2011, According to the “modular” hypothesis, reading is a serial feedforward process, with part of left ventral occipitotemporal cortex the earliest component tuned to familiar orthographic stimuli. Beyond this region, the model predicts no response to arrays of false font in reading-related neural pathways. An alternative “connectionist” hypothesis proposes that reading depends on interactions between feedforward projections from visual cortex and feedback projections from phonological and semantic systems, with no visual component exclusive to orthographic stimuli. This is compatible with automatic processing of false font throughout visual and heteromodal sensory pathways that support reading, in which responses to words may be greater than, but not exclusive of, responses to false font. This functional imaging study investigated these alternative hypotheses by using narrative texts and equivalent arrays of false font and varying the hemifield of presentation using rapid serial visual presentation. The “null” baseline comprised a decision on visually presented numbers. Preferential activity for narratives relative to false font, insensitive to hemifield of presentation, was distributed along the ventral left temporal lobe and along the extent of both superior temporal sulci. Throughout this system, activity during the false font conditions was significantly greater than during the number task, with activity specific to the number task confined to the intraparietal sulci. Therefore, both words and false font are extensively processed along the same temporal neocortical pathways, separate from the more dorsal pathways that process numbers. These results are incompatible with a serial, feedforward model of reading. |
Jessica M. Wright; Adam P. Morris; Bart Krekelberg Weighted integration of visual position information Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 11–11, 2011. @article{Wright2011, The ability to localize visual objects is a fundamental component of human behavior and requires the integration of position information from object components. The retinal eccentricity of a stimulus and the locus of spatial attention can affect object localization, but it is unclear whether these factors alter the global localization of the object, the localization of object components, or both. We used psychophysical methods in humans to quantify behavioral responses in a centroid estimation task. Subjects located the centroid of briefly presented random dot patterns (RDPs). A peripheral cue was used to bias attention toward one side of the display. We found that although subjects were able to localize centroid positions reliably, they typically had a bias toward the fovea and a shift toward the locus of attention. We compared quantitative models that explain these effects either as biased global localization of the RDPs or as anisotropic integration of weighted dot component positions. A model that allowed retinal eccentricity and spatial attention to alter the weights assigned to individual dot positions best explained subjects' performance. These results show that global position perception depends on both the retinal eccentricity of stimulus components and their positions relative to the current locus of attention. |
Eckart Zimmermann; David C. Burr; M. Concetta Morrone Spatiotopic visual maps revealed by saccadic adaptation in humans Journal Article In: Current Biology, vol. 21, no. 16, pp. 1380–1384, 2011. @article{Zimmermann2011, Saccadic adaptation [1] is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals. |
Marc Zirnsak; R. G. K. Gerhards; Roozbeh Kiani; Markus Lappe; Fred H. Hamker Anticipatory saccade target processing and the presaccadic transfer of visual features Journal Article In: Journal of Neuroscience, vol. 31, no. 49, pp. 17887–17891, 2011. @article{Zirnsak2011, As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007). |
Hamed Zivari Adab; Rufin Vogels Practicing coarse orientation discrimination improves orientation signals in macaque cortical area V4 Journal Article In: Current Biology, vol. 21, no. 19, pp. 1661–1666, 2011. @article{ZivariAdab2011, Practice improves the performance in visual tasks, but mechanisms underlying this adult brain plasticity are unclear. Single-cell studies reported no [1], weak [2], or moderate [3, 4] perceptual learning-related changes in macaque visual areas V1 and V4, whereas none were found in middle temporal (MT) [5]. These conflicting results and modeling of human (e.g., [6, 7]) and monkey data [8] suggested that changes in the readout of visual cortical signals underlie perceptual learning, rather than changes in these signals. In the V4 learning studies, monkeys discriminated small differences in orientation, whereas in the MT study, the animals discriminated opponent motion directions. Analogous to the latter study, we trained monkeys to discriminate static orthogonal orientations masked by noise. V4 neurons showed robust increases in their capacity to discriminate the trained orientations during the course of the training. This effect was observed during discrimination and passive fixation but specifically for the trained orientations. The improvement in neural discrimination was due to decreased response variability and an increase of the difference between the mean responses for the two trained orientations. These findings demonstrate that perceptual learning in a coarse discrimination task indeed can change the response properties of a cortical sensory area. |
Helmut Leder; Michael Forster; Gernot Gerger The glasses stereotype revisited: Effects of eyeglasses on perception, recognition, and impression of faces Journal Article In: Swiss Journal of Psychology, vol. 70, no. 4, pp. 211–222, 2011. @article{Leder2011, In face perception, besides physiognomic changes, accessories like eyeglasses can influence facial appearance. According to a stereotype, people who wear glasses are more intelligent, but less attractive. In a series of four experiments, we showed how full-rim and rimless glasses, differing with respect to the amount of face they cover, affect face perception, recognition, distinctiveness, and the attribution of stereotypes. Eyeglasses generally directed observers' gaze to the eye regions; rimless glasses made faces appear less distinctive and resulted in reduced distinctiveness in matching and in recognition tasks. Moreover, the stereotype was confirmed but depended on the kind of glasses—rimless glasses yielded an increase in perceived trustworthiness, but not a decrease in attractiveness. Thus, glasses affect how we perceive the faces of the people wearing them and, in accordance with an old stereotype, they can lower how attractive, but increase how intelligent and trustworthy people wearing them appear. These effects depend on the kind of glasses worn. |
Eun Ju Lee; Gusang Kwon; Aekyoung Lee; Jamshid Ghajar; Minah Suh Individual differences in working memory capacity determine the effects of oculomotor task load on concurrent word recall performance Journal Article In: Brain Research, vol. 1399, pp. 59–65, 2011. @article{Lee2011, In this study, the interaction between individual differences in working memory capacity, which were assessed by the Korean version of the California Verbal Learning Test (K-CVLT), and the effects of oculomotor task load on word recall performance is examined in a dual-task experiment. We hypothesized that varying levels of oculomotor task load should result in different demands on cognitive resources. The verbal working memory task used in this study involved a brief exposure to seven words to be remembered, followed by a 30-second delay during which the subject carried out an oculomotor task. Then, memory performance was assessed by having the subjects recall as many words as possible. Forty healthy normal subjects with no vision-related problems carried out four separate dual-tasks over four consecutive days of participation, wherein word recall performances were tested under unpredictable random SPEM (smooth pursuit eye movement), predictive SPEM, fixation, and eyes-closed conditions. The word recall performance of subjects with low K-CVLT scores was significantly enhanced under predictive SPEM conditions as opposed to the fixation and eyes-closed conditions, but performance was reduced under the random SPEM condition, thus reflecting an inverted-U relationship between the oculomotor task load and word recall performance. Subjects with high K-CVLT scores evidenced steady word recall performances, regardless of the type of oculomotor task performed. The concurrent oculomotor performance measured by velocity error did not differ significantly among the K-CVLT groups. However, the high-scoring subjects evidenced smaller phase errors under predictive SPEM conditions than did the low-scoring subjects; this suggests that different resource allocation strategies may be adopted, depending on individuals' working memory capacity. |
Jiyeon Lee; Cynthia K. Thompson Real-time production of unergative and unaccusative sentences in normal and agrammatic speakers: An eyetracking study Journal Article In: Aphasiology, vol. 25, no. 6-7, pp. 813–825, 2011. @article{Lee2011a, Background: Speakers with agrammatic aphasia have greater difficulty producing unaccusative (float) compared to unergative (bark) verbs (Kegl, 1995; Lee & Thompson, 2004; Thompson, 2003), putatively because the former involve movement of the theme to the subject position from the post-verbal position, and are therefore more complex than the latter (Burzio, 1986; Perlmutter, 1978). However, it is unclear if and how sentence production processes are affected by the linguistic distinction between these two types of verbs in normal and impaired speakers. Aims: This study examined real-time production of sentences with unergative (the black dog is barking) vs unaccusative (the black tube is floating) verbs in healthy young speakers and individuals with agrammatic aphasia, using eyetracking. Methods & Procedures: Participants' eye movements and speech were recorded while they produced a sentence using computer displayed written stimuli (e.g., black, dog, is barking). Outcomes & Results: Both groups of speakers produced numerically fewer unaccusative sentences than unergative sentences. However, the eye movement data revealed significant differences in fixations between the adjective (black) vs the noun (tube) when producing unaccusatives, but not when producing unergatives for both groups. Interestingly, whereas healthy speakers showed this difference during speech, speakers with agrammatism showed this difference prior to speech onset. Conclusions: These findings suggest that the human sentence production system differentially processes unaccusatives vs unergatives. This distinction is preserved in individuals with agrammatism; however, the time course of sentence planning appears to differ from healthy speakers (Lee & Thompson, 2010). |
Carly J. Leonard; Steven J. Luck The role of magnocellular signals in oculomotor attentional capture Journal Article In: Journal of Vision, vol. 11, no. 13, pp. 1–12, 2011. @article{Leonard2011, While it is known that salient distractors often capture covert and overt attention, it is unclear whether salience signals that stem from magnocellular visual input have a more dominant role in oculomotor capture than those that result from parvocellular input. Because of the direct anatomical connections between the magnocellular pathway and the superior colliculus, salience signals generated from the magnocellular pathway may produce greater oculomotor capture than those from the parvocellular pathway, which could be potentially harder to overcome with "top-down," goal-directed guidance. Although previous research has addressed this with regard to magnocellular transients, in the current research, we investigated whether a static singleton distractor defined along a dimension visible to the magnocellular pathway would also produce enhanced oculomotor capture. In two experiments, we addressed this possibility by comparing a parvo-biased singleton condition, in which the distractor was defined by isoluminant chromatic color contrast, with a magno + parvo singleton condition, in which the distractor also differed in luminance from the surrounding objects. In both experiments, magno + parvo singletons elicited faster eye movements than parvo-only singletons, presumably reflecting faster information transmission in the magnocellular pathway, but magno + parvo singletons were not significantly more likely to produce oculomotor capture. Thus, although magnocellular salience signals are available more rapidly, they have no sizable advantage over parvocellular salience signals in controlling oculomotor orienting when all stimuli have a common onset. |
Benjamin D. Lester; Paul Dassonville Attentional control settings modulate susceptibility to the induced Roelofs effect Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 5, pp. 1398–1406, 2011. @article{Lester2011, When a visible frame is offset laterally from an observer's objective midline, the subjective midline is pulled toward the frame's center, causing the frame and any enclosed targets to be misperceived as being shifted somewhat in the opposite direction. This illusion, the Roelofs effect, is driven by environmental (bottom-up) visual cues, but whether it can be modulated by top-down (e.g., task-relevant) information is unknown. Here, we used an attentional manipulation (i.e., the color-contingency effect) to test whether attentional filtering can modulate the magnitude of the illusion. When observers were required to report the location of a colored target, presented within an array of differently colored distractors, there was a greater effect of the illusion when the Roelofs-inducing frame was the same color as the target. These results indicate that feature-based attentional processes can modulate the impact of contextual information on an observer's perception of space. |
Steffen Klingenhoefer; F. Bremmer Saccadic suppression of displacement in face of saccade adaptation Journal Article In: Vision Research, vol. 51, pp. 881–889, 2011. @article{Klingenhoefer2011, Saccades challenge visual perception since they induce large shifts of the image on the retina. Nevertheless, we perceive the outer world as being stable. The saccadic system also can rapidly adapt to changes in the environment (saccadic adaptation). In such a case, a dissociation is introduced between a driving visual signal (the original saccade target) and a motor output (the adapted saccade vector). The question arises, how saccadic adaptation interferes with perceptual visual stability. In order to answer this question, we engaged human subjects in a saccade adaptation paradigm and interspersed trials in which the saccade target was displaced perisaccadically to a random position. In these trials subjects had to report on their perception of displacements of the saccade target. Subjects were tested in two conditions. In the 'blank' condition, the saccade target was briefly blanked after the end of the saccade. In the 'no-blank' condition the target was permanently visible. Confirming previous findings, the visual system was rather insensitive to displacements of the saccade target in an unadapted state, an effect termed saccadic suppression of displacement (SSD). In all adaptation conditions, we found spatial perception to correlate with the adaptive changes in saccade landing site. In contrast, small changes in saccade amplitude that occurred on a trial-by-trial basis did not correlate with perception. In the 'no-blank' condition we observed a prominent increase in suppression strength during backward adaptation. We discuss our findings in the context of existing theories on transsaccadic perceptual stability and its neural basis. |
Lisa Kloft; Eva Kischkel; Norbert Kathmann; Benedikt Reuter Evidence for a deficit in volitional action generation in patients with obsessive-compulsive disorder Journal Article In: Psychophysiology, vol. 48, no. 6, pp. 755–761, 2011. @article{Kloft2011, Obsessive-compulsive disorder (OCD) patients show deficits in tasks of executive functioning like the antisaccade (AS) task. These deficits suggest problems in response inhibition or volitional saccade generation. Thirty patients (15 nonmedicated) and 30 healthy subjects performed antisaccades and simple volitional saccades (SVS), that is, centrally cued saccades. In SVS, two aspects of volitional saccade generation were disentangled: response selection and initiation. Latencies of OCD patients were increased in volitional saccades independent of response selection demands. AS performance did not differ. Across groups, latencies in AS were faster than in SVS. Medicated patients did not differ from nonmedicated patients. In sum, response initiation is deficient in OCD patients, which may reflect a general problem in volitional action generation. This deficit did not affect antisaccade performance, possibly due to a lower volitional demand in that task. |
Christian Kluge; Markus Bauer; Alexander P. Leff; Hans-Jochen Heinze; Raymond J. Dolan; Jon Driver Plasticity of human auditory-evoked fields induced by shock conditioning and contingency reversal Journal Article In: Proceedings of the National Academy of Sciences, vol. 108, no. 30, pp. 12545–12550, 2011. @article{Kluge2011, We used magnetoencephalography (MEG) to assess plasticity of human auditory cortex induced by classical conditioning and contingency reversal. Participants listened to random sequences of high or low tones. A first baseline phase presented these without further associations. In phase 2, one of the frequencies (CS(+)) was paired with shock on half its occurrences, whereas the other frequency (CS(-)) was not. In phase 3, the contingency assigning CS(+) and CS(-) was reversed. Conditioned pupil dilation was observed in phase 2 but extinguished in phase 3. MEG revealed that, during phase-2 initial conditioning, the P1m, N1m, and P2m auditory components, measured from sensors over auditory temporal cortex, came to distinguish between CS(+) and CS(-). After contingency reversal in phase 3, the later P2m component rapidly reversed its selectivity (unlike the pupil response) but the earlier P1m did not, whereas N1m showed some new learning but not reversal. These results confirm plasticity of human auditory responses due to classical conditioning, but go further in revealing distinct constraints on different levels of the auditory hierarchy. The later P2m component can reverse affiliation immediately in accord with an updated expectancy after contingency reversal, whereas the earlier auditory components cannot. These findings indicate distinct cognitive and emotional influences on auditory processing. |
Jonas Knöll; Paola Binda; M. Concetta Morrone; Frank Bremmer Spatiotemporal profile of peri-saccadic contrast sensitivity Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 1–12, 2011. @article{Knoell2011, Sensitivity to luminance contrast is reduced just before and during saccades (saccadic suppression), whereas sensitivity to color contrast is unimpaired peri-saccadically and enhanced post-saccadically. The exact spatiotemporal map of these perceptual effects is as yet unknown. Here, we measured detection thresholds for briefly flashed Gaussian blobs modulated in either luminance or chromatic contrast, displayed at a range of eccentricities. Sensitivity to luminance contrast was reduced peri-saccadically by a scaling factor, which was almost constant across retinal space. Saccadic suppression followed a similar time course across all tested eccentricities and was maximal shortly after the saccade onset. Sensitivity to chromatic contrast was enhanced post-saccadically at all tested locations. The enhancement was not specifically linked to the execution of saccades, as it was also observed following a displacement of retinal images comparable to that caused by a saccade. We conclude that luminance and chromatic contrast sensitivities are subject to distinct modulations at the time of saccades, resulting from independent neural processes. |
Stephan Koenig; Harald Lachnit Curved saccade trajectories reveal conflicting predictions in associative learning Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1164–1177, 2011. @article{Koenig2011, We report how the trajectories of saccadic eye movements are affected by memory interference acquired during associative learning. Human participants learned to perform saccadic choice responses based on the presentation of arbitrary central cues A, B, AC, BC, AX, BY, X, and Y that were trained to predict the appearance of a peripheral target stimulus at 1 of 3 possible locations, right (R), mid (M), or left (L), in the upper hemifield. We analyzed as measures of associative learning the frequency, latency, and curvature of saccades elicited by the cues and directed at the trained locations in anticipation of the targets. Participants were trained on two concurrent discrimination problems A+R, AC+R, AX+M, X+M and B+L, BC+L, BY+M, Y+M. From a connectionist perspective, cues were predicted to acquire associative links connecting the cues to the trained outcomes in memory. Model simulations based on the learning rule of the Rescorla and Wagner (1972) model revealed that for some cues, the prediction of the correct target location was challenged by the interfering prediction of an incorrect location. We observed that saccades directed at the correct location in anticipation of the target curved away from the location that was predicted by the interfering association. Furthermore, changes in curvature during training corresponded to predicted changes in associative memory. We propose that this curvature was caused by the inhibition of the incorrect prediction, as previously has been suggested with the concept of distractor inhibition (Sheliga, Riggio, & Rizzolatti, 1994; Tipper, Howard, & Houghton, 2000). The paradigm provides a new method to examine memory interference during associative learning. |
Gert Kootstra; Bart de Boer; Lambert R. B. Schomaker Predicting eye fixations on complex visual stimuli using local symmetry Journal Article In: Cognitive Computation, vol. 3, pp. 223–240, 2011. @article{Kootstra2011, Most bottom-up models that predict human eye fixations are based on contrast features. The saliency model of Itti, Koch and Niebur is an example of such contrast-saliency models. Although the model has been successfully compared to human eye fixations, we show that it lacks preciseness in the prediction of fixations on mirror-symmetrical forms. The contrast model gives high response at the borders, whereas human observers consistently look at the symmetrical center of these forms. We propose a saliency model that predicts eye fixations using local mirror symmetry. To test the model, we performed an eye-tracking experiment with participants viewing complex photographic images and compared the data with our symmetry model and the contrast model. The results show that our symmetry model predicts human eye fixations significantly better on a wide variety of images including many that are not selected for their symmetrical content. Moreover, our results show that especially early fixations are on highly symmetrical areas of the images. We conclude that symmetry is a strong predictor of human eye fixations and that it can be used as a predictor of the order of fixation. |
Christof Körner Eye movements reveal distinct search and reasoning processes in comprehension of complex graphs Journal Article In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 893–905, 2011. @article{Koerner2011, Hierarchical graphs (e.g. file system browsers, family trees) represent objects (e.g. files, folders) as graph nodes, and relations (subfolder relations) between them as lines. In three experiments, participants viewed such graphs and carried out tasks that either required search for two target nodes (Experiment 1A), reasoning about their relation (Experiment 1B), or both (Experiment 2). We recorded eye movements and used the number of fixations in different phases to identify distinct stages of comprehension. Search in graphs proceeded like search in standard visual search tasks and was mostly unaffected by graph properties. Reasoning occurred typically in a separate stage at the end of comprehension and was affected by intersecting graph lines. The alignment of nodes, together with linguistic factors, may also affect comprehension. Overall, there was good evidence to suggest that participants read graphs in a sequential manner, and that this is an economical approach to comprehension. |
Victor Kuperman; Julie A. Van Dyke Effects of individual differences in verbal skills on eye-movement patterns during sentence reading Journal Article In: Journal of Memory and Language, vol. 65, no. 1, pp. 42–73, 2011. @article{Kuperman2011, This study is a large-scale exploration of the influence that individual reading skills exert on eye-movement behavior in sentence reading. Seventy-one non-college-bound 16- to 24-year-old speakers of English completed a battery of 18 verbal and cognitive skill assessments, and read a series of sentences as their eye movements were monitored. Statistical analyses were performed to establish which tests of reading ability were predictive of eye-movement patterns across this population and how strong the effects were. We found that individual scores in rapid automatized naming and word identification tests (i) were the only participant variables with reliable predictive power throughout the time course of reading; (ii) elicited effects that superseded in magnitude the effects of established predictors like word length or frequency; and (iii) strongly modulated the influence of word length and frequency on fixation times. We discuss implications of our findings for testing reading ability, as well as for research on eye movements in reading. |
Eric Lambert; Denis Alamargot; Denis Larocque; Gilles Caporossi Dynamics of the spelling process during a copy task: Effects of regularity and frequency Journal Article In: Canadian Journal of Experimental Psychology, vol. 65, no. 3, pp. 141–150, 2011. @article{Lambert2011, This study investigated the time course of spelling, and its influence on graphomotor execution, in a successive word copy task. According to the cascade model, these two processes may be engaged either sequentially or in parallel, depending on the cognitive demands of spelling. In this experiment, adults were asked to copy a series of words varying in frequency and spelling regularity. A combined analysis of eye and pen movements revealed periods where spelling occurred in parallel with graphomotor execution, but concerned different processing units. The extent of this parallel processing depended on the words' orthographic characteristics. Results also highlighted the specificity of word recognition for copying purposes compared with recognition for reading tasks. The results confirm the validity of the cascade model and clarify the nature of the dependence between spelling and graphomotor processes. |
Martijn J. M. Lamers; Ardi Roelofs Attention and gaze shifting in dual-task and go/no-go performance with vocal responding Journal Article In: Acta Psychologica, vol. 137, no. 3, pp. 261–268, 2011. @article{Lamers2011, Evidence from go/no-go performance on the Eriksen flanker task with manual responding suggests that individuals gaze at stimuli just as long as needed to identify them (e.g., Sanders, 1998). In contrast, evidence from dual-task performance with vocal responding suggests that gaze shifts occur after response selection (e.g., Roelofs, 2008a). This difference in results may be due to the nature of the task situation (go/no-go vs. dual task) or the response modality (manual vs. vocal). We examined this by having participants vocally respond to congruent and incongruent flanker stimuli and shift gaze to left- or right-pointing arrows. The arrows required a manual response (dual task) or determined whether the vocal response to the flanker stimuli had to be given or not (go/no-go). Vocal response and gaze shift latencies were longer on incongruent than congruent trials in both dual-task and go/no-go performance. The flanker effect was also present in the manual response latencies in dual-task performance. Ex-Gaussian analyses revealed that the flanker effect on the gaze shifts consisted of a shift of the entire latency distribution. These results suggest that gaze shifts occur after response selection in both dual-task and go/no-go performance with vocal responding. |
Wolf Gero Lange; Kathrin Heuer; Oliver Langner; Ger P. J. Keijsers; Eni S. Becker; Mike Rinck Face value: Eye movements and the evaluation of facial crowds in social anxiety Journal Article In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 42, no. 3, pp. 355–363, 2011. @article{Lange2011, Scientific evidence is equivocal on whether Social Anxiety Disorder (SAD) is characterized by a biased negative evaluation of (grouped) facial expressions, even though it is assumed that such a bias plays a crucial role in the maintenance of the disorder. To shed light on the underlying mechanisms of face evaluation in social anxiety, the eye movements of 22 highly socially anxious (SAs) and 21 non-anxious controls (NACs) were recorded while they rated the degree of friendliness of neutral-angry and smiling-angry face combinations. While the Crowd Rating Task data showed no significant differences between SAs and NACs, the resultant eye-movement patterns revealed that SAs, compared to NACs, looked away faster when the face first fixated was angry. Additionally, in SAs the proportion of fixated angry faces was significantly higher than for other expressions. Independent of social anxiety, these fixated angry faces were the best predictor of subsequent affect ratings for either group. Angry faces influence attentional processes such as eye movements in SAs and by doing so reflect biased evaluations. As these processes do not correlate with explicit ratings of faces, however, it remains unclear at what point implicit attentional behaviors lead to anxiety-prone behaviors and the maintenance of SAD. The relevance of these findings is discussed in the light of the current theories. |
Georgia Laretzaki; Sotiris Plainis; Ioannis Vrettos; Anna Chrisoulakis; Ioannis G. Pallikaris; Panos Bitsios Threat and trait anxiety affect stability of gaze fixation Journal Article In: Biological Psychology, vol. 86, no. 3, pp. 330–336, 2011. @article{Laretzaki2011, Threat accelerates early visual information processing, as shown by shorter P100 latencies of pattern Visual Evoked Potentials in subjects with low trait anxiety, but the opposite is true for highly anxious subjects. We sought to determine if, and how, threat and trait anxiety interact to affect stability of gaze fixation. We used video oculography to record gaze position in the presence and in the absence of a fixational stimulus, in a safe and a verbal threat condition, in subjects characterised for their trait anxiety. Trait anxiety significantly predicted fixational instability in the threat condition. An extreme tertile analysis revealed that fixation was less stable in the high anxiety group, especially under threat or in the absence of a stimulus. The effects of anxiety extend to perceptual and sensorimotor processes. These results have implications for the understanding of individual differences in oculomotor planning and visually guided behavior. |
Louisa Lavergne; Dorine Vergilino-Perez; Christelle Lemoine; Thérèse Collins; Karine Doré-Mazars Exploring and targeting saccades dissociated by saccadic adaptation Journal Article In: Brain Research, vol. 1415, pp. 47–55, 2011. @article{Lavergne2011, Saccadic adaptation maintains saccade accuracy and has been studied with targeting saccades, i.e. saccades that bring the gaze to a target, with the classical intra-saccadic step procedure in which the target systematically jumps to a new position during saccade execution. Post-saccadic visual feedback about the error between target position and the saccade landing position is crucial to establish and maintain adaptation. However, recent research focusing on two-saccade sequences has shown that exploring saccades, i.e. saccades that explore an object, resists this classical intra-saccadic step procedure but can be adapted by systematically changing the main parameter used for their coding: stimulus size. Here, we adapted an exploring saccade and a targeting saccade in two separate experiments, using the appropriate adaptation procedure, and we tested whether the adaptation induced on one saccade type transferred to the other. We showed that whereas classical targeting saccade adaptation does not transfer to exploring saccades, the reciprocal transfer (i.e., from exploring to targeting saccades) occurred when targeting saccades aimed for a spatially extended stimulus, but not when they aimed for an isolated target. These results show that, in addition to position errors, size errors can drive adaptation, and confirm that exploring vs. targeting a stimulus leads to two different motor planning modes. |
I. Fan Lin; Andrei Gorea Location and identity memory of saccade targets Journal Article In: Vision Research, vol. 51, no. 3, pp. 323–332, 2011. @article{Lin2011, While the memory of objects' identity and of their spatiotopic location may sustain transsaccadic spatial constancy, the memory of their retinotopic location may hamper it. Is it then true that saccades perturb retinotopic but not spatiotopic memory? We address this issue by assessing localization performances of the last and of the penultimate saccade target in a series of 2-6 saccades. Upon fixation, nine letter-pairs, eight black and one white, were displayed at 3° eccentricity around fixation within a 20°×20° grey frame, and subjects were instructed to saccade to the white letter-pair; the cycle was then repeated. Identical conditions were run with the eyes maintaining fixation throughout the trial but with the grey frame moving so as to mimic its retinal displacement when the eyes moved. At the end of a trial, subjects reported the identity and/or the location of the target in either retinotopic (relative to the current fixation dot) or frame-based (relative to the grey frame) coordinates. (In the context of this study, "frame-based" and "spatiotopic" are equivalent terms and are used interchangeably.) Saccades degraded the target's retinotopic location memory but not its frame-based location or its identity memory. Results are compatible with the notion that spatiotopic representation takes over retinotopic representation during eye movements, thereby contributing to the stability of the visual world as its projection jumps on our retina from saccade to saccade. |
Chia-Lun Liu; Philip Tseng; Hui-Yan Chiau; Wei-Kuang Liang; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan The location probability effects of saccade reaction times are modulated in the frontal eye fields but not in the supplementary eye field Journal Article In: Cerebral Cortex, vol. 21, no. 6, pp. 1416–1425, 2011. @article{Liu2011c, The visual system constantly utilizes regularities that are embedded in the environment and by doing so reduces the computational burden of processing visual information. Recent findings have demonstrated that probabilistic information can override attentional effects, such as the cost of making an eye movement away from a visual target (antisaccade cost). The neural substrates of such probability effects have been associated with activity in the superior colliculus (SC). Given the immense reciprocal connections to SC, it is plausible that this modulation originates from higher oculomotor regions, such as the frontal eye field (FEF) and the supplementary eye field (SEF). To test this possibility, the present study employed theta burst transcranial magnetic stimulation (TMS) to selectively interfere with FEF and SEF activity. We found that TMS disrupted the effect of location probability when TMS was applied over FEF. This was not observed in the SEF TMS condition. Together, these 2 experiments suggest that the FEF plays a critical role not only in initiating saccades but also in modulating the effects of location probability on saccade production. |
Donglai Liu; Yonghui Wang; Xiaolin Zhou Lexical- and perceptual-based object effects in the two-rectangle cueing paradigm Journal Article In: Acta Psychologica, vol. 138, no. 3, pp. 397–404, 2011. @article{Liu2011d, Previous studies demonstrate that attentional selection can be object-based, in which the object is defined in terms of Gestalt principles or lexical organizations. Here we investigate how attentional selection functions when the two types of objects are manipulated jointly. Experiment 1 replicated Li and Logan (2008) by showing that attentional shift between two Chinese characters is more efficient when they form a compound word than when they form a nonword. Experiment 2A presented characters either alone or within rectangles (Egly, Driver, & Rafal, 1994) and the characters in a rectangle formed either a word or a nonword. Experiment 2B differed from Experiment 2A in that the two characters forming a word were in different rectangles. Experiment 3A presented the two characters of a word either within a rectangle or in different rectangles. Experiment 3B used the same design as Experiment 3A but presented stimuli of different types in random orders, rather than in blocks as in Experiments 2A, 2B and 3A. In blocked presentation, detection responses to the target color on a character were faster when this character and the cue character formed a word than when they did not, and the size of this lexical-based object effect did not vary according to whether the two characters were presented alone or within or between rectangles. In random presentation, however, the lexical-based object effect was diminished when the two characters of a word were presented in different rectangles. Overall, these findings suggest that the processes that constrain attention deployment over conjoined objects can be strategically adjusted. |
Haoxue Liu; Guangming Ding; Weihua Zhao; Hui Wang; Kaizheng Liu; Ludan Shi Variation of drivers' visual features in long-tunnel entrance section on expressway Journal Article In: Journal of Transportation Safety and Security, vol. 3, no. 1, pp. 27–37, 2011. @article{Liu2011e, To avoid traffic accidents in long-tunnel entrance sections, the authors studied the variation of drivers' visual features based on real-road experiments on an expressway. Drivers' visual feature parameters were recorded in real time using EyeLink (an eye tracking system) during the driving test. Mathematical models of drivers' fixation duration, number of fixations, and saccade amplitude at the tunnel entrance were established based on BP Neural Network (Error Back Propagation Network) simulation. Results showed that fixation duration increased gradually as the vehicle moved closer to the tunnel entrance, whereas the number of fixations and saccade amplitude decreased. Meanwhile, drivers' fixations shifted from straight ahead to the right side, resulting in an increased number of fixations on the right side. After drivers entered the tunnel, fixation duration first decreased and then increased, while the number of fixations and saccade amplitude kept increasing. |
Taosheng Liu; Luke Hospadaruk; David C. Zhu; Justin L. Gardner Feature-specific attentional priority signals in human cortex Journal Article In: Journal of Neuroscience, vol. 31, no. 12, pp. 4484–4495, 2011. @article{Liu2011f, Humans can flexibly attend to a variety of stimulus dimensions, including spatial location and various features such as color and direction of motion. Although the locus of spatial attention has been hypothesized to be represented by priority maps encoded in several dorsal frontal and parietal areas, it is unknown how the brain represents attended features. Here we examined the distribution and organization of neural signals related to deployment of feature-based attention. Subjects viewed a compound stimulus containing two superimposed motion directions (or colors) and were instructed to perform an attention-demanding task on one of the directions (or colors). We found elevated and sustained functional magnetic resonance imaging response for the attention task compared with a neutral condition, without reliable differences in overall response amplitude between attending to different features. However, using multivoxel pattern analysis, we were able to decode the attended feature in both early visual areas (primary visual cortex to human motion complex hMT+) and frontal and parietal areas (e.g., intraparietal sulcus areas IPS1-IPS4 and frontal eye fields) that are commonly associated with spatial attention. Furthermore, analysis of the classifier weight maps showed that attending to motion and color evoked different patterns of activity, suggesting that different neuronal subpopulations in these regions are recruited for attending to different feature dimensions. Thus, our finding suggests that, rather than a purely spatial representation of priority, frontal and parietal cortical areas also contain multiplexed signals related to the priority of different nonspatial features. |
Taosheng Liu; Youyang Hou Global feature-based attention to orientation. Journal Article In: Journal of Vision, vol. 11, no. 10, pp. 1–8, 2011. @article{Liu2011a, Selective attention to motion direction can modulate the strength of direction-selective sensory responses regardless of their spatial locations. Although such spatially global modulation is thought to be a general property of feature-based attention, few studies have examined visual features other than motion. Here, we used an adaptation protocol combined with attentional instructions to assess whether attention to orientation, a prominent feature in early visual processing, also exhibits such spatially global modulation. We adapted observers to an orientation by cuing them to attend to the orientation in a compound grating that was presented at a peripheral location. We then assessed the size of the tilt aftereffect at three locations that were never stimulated by the adapter. Attending to orientation produced a tilt aftereffect in these locations, indicating that attention modulated orientation-selective mechanisms in locations remote from the adapter. Furthermore, there was no difference in the magnitude of the tilt aftereffect for test stimuli that were located at different distances and hemifields relative to the adapter. These results suggest that attention to orientation spreads uniformly across the visual field. Thus, spatially global modulation seems to be a general property of feature-based attention, and it provides a flexible mechanism to modulate feature salience across the visual field. |
Taosheng Liu; Irida Mance Constant spread of feature-based attention across the visual field Journal Article In: Vision Research, vol. 51, no. 1, pp. 26–33, 2011. @article{Liu2011b, Attending to a feature in one location can produce feature-specific modulation in a different location. This global feature-based attention effect has been demonstrated using two stimulus locations. Although the spread of feature-based attention is presumed to be constant across spatial locations, it has not been tested empirically. We examined the spread of feature-based attention by measuring attentional modulation of the motion aftereffect (MAE) at remote locations. Observers attended to one of two directions in a compound motion stimulus (adapter) and performed a speed-increment task. MAE was measured via a speed nulling procedure for a test stimulus at different distances from the adapter. In Experiment 1, the adapter was at fixation, while the test stimulus was located at different eccentricities. We also measured the magnitude of baseline MAE for each location in two control conditions that did not require feature-based selection necessitated by a compound stimulus. In Experiment 2, the adapter and test stimuli were all located in the periphery at the same eccentricity. Our results showed that attention induced MAE spread completely across the visual field, indicating a genuine global effect. These results add to our understanding of the deployment of feature-based attention and provide empirical constraints on theories of visual attention. |
Taosheng Liu; Timothy J. Pleskac Neural correlates of evidence accumulation in a perceptual decision task Journal Article In: Journal of Neurophysiology, vol. 106, no. 5, pp. 2383–2398, 2011. @article{Liu2011, Sequential sampling models provide a useful framework for understanding human decision making. A key component of these models is an evidence accumulation process in which information is accrued over time to a threshold, at which point a choice is made. Previous neurophysiological studies on perceptual decision making have suggested accumulation occurs only in sensorimotor areas involved in making the action for the choice. Here we investigated the neural correlates of evidence accumulation in the human brain using functional magnetic resonance imaging (fMRI) while manipulating the quality of sensory evidence, the response modality, and the foreknowledge of the response modality. We trained subjects to perform a random dot motion direction discrimination task by either moving their eyes or pressing buttons to make their responses. In addition, they were cued about the response modality either in advance of the stimulus or after a delay. We isolated fMRI responses for perceptual decisions in both independently defined sensorimotor areas and task-defined nonsensorimotor areas. We found neural signatures of evidence accumulation, a higher fMRI response on low coherence trials than high coherence trials, primarily in saccade-related sensorimotor areas (frontal eye field and intraparietal sulcus) and nonsensorimotor areas in anterior insula and inferior frontal sulcus. Critically, such neural signatures did not depend on response modality or foreknowledge. These results help establish human brain areas involved in evidence accumulation and suggest that the neural mechanism for evidence accumulation is not specific to effectors. Instead, the neural system might accumulate evidence for particular stimulus features relevant to a perceptual task. |
Sid Kouider; Vincent Berthet; Nathan Faivre Preference is biased by crowded facial expressions Journal Article In: Psychological Science, vol. 22, no. 2, pp. 184–189, 2011. @article{Kouider2011, Crowding occurs when nearby flankers impede the identification of a peripheral stimulus. Here, we studied whether crowded features containing inaccessible emotional information can nevertheless affect preference judgments. We relied on gaze-contingent crowding, a novel method allowing for constant perceptual unawareness through eye-tracking control, and we found that crowded facial expressions can bias evaluative judgments of neutral pictographs. Furthermore, this emotional bias was effective not only for static images of faces, but also for videos displaying dynamic facial expressions. In addition to showing an alternative approach for probing nonconscious cognition, this study reveals that crowded information, instead of being fully suppressed, can have important influences on decisions. |
Michael J. Koval; Stephen G. Lomber; Stefan Everling Prefrontal cortex deactivation in macaques alters activity in the superior colliculus and impairs voluntary control of saccades Journal Article In: Journal of Neuroscience, vol. 31, no. 23, pp. 8659–8668, 2011. @article{Koval2011, The cognitive control of action requires both the suppression of automatic responses to sudden stimuli and the generation of behavior specified by abstract instructions. Though patient, functional imaging, and neurophysiological studies have implicated the dorsolateral prefrontal cortex (dlPFC) in these abilities, the mechanism by which the dlPFC exerts this control remains unknown. Here we examined the functional interaction of the dlPFC with the saccade circuitry by deactivating area 46 of the dlPFC and measuring its effects on the activity of single superior colliculus neurons in monkeys performing a cognitive saccade task. Deactivation of the dlPFC reduced preparatory activity and increased stimulus-related activity in these neurons. These changes in neural activity were accompanied by marked decreases in task performance as evidenced by longer reaction times and more task errors. The results suggest that the dlPFC participates in the cognitive control of gaze by suppressing stimulus-evoked automatic saccade programs. |
Lianne C. Krab; Arja Goede-Bolder; Femke K. Aarsen; Henriëtte A. Moll; Chris I. De Zeeuw; Ype Elgersma; Josef N. Geest Motor learning in children with Neurofibromatosis Type I Journal Article In: Cerebellum, vol. 10, no. 1, pp. 14–21, 2011. @article{Krab2011, The aim of this study was to quantify the frequently observed problems in motor control in Neurofibromatosis type 1 (NF1) using three tasks on motor performance and motor learning. A group of 70 children with NF1 was compared to age-matched controls. As expected, NF1 children showed substantial problems in visuo-motor integration (Beery VMI). Prism-induced hand movement adaptation seemed to be mildly affected. However, no significant impairments in the accuracy of simple eye or hand movements were observed. Also, saccadic eye movement adaptation, a cerebellum dependent task, appeared normal. These results suggest that the motor problems of children with NF1 in daily life are unlikely to originate solely from impairments in motor learning. Our findings, therefore, do not support a general dysfunction of the cerebellum in children with NF1. |
Stefanie E. Kuchinsky; Kathryn Bock; David E. Irwin Reversing the hands of time: Changing the mapping from seeing to saying Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 3, pp. 748–756, 2011. @article{Kuchinsky2011, To describe a scene, speakers must map visual information to a linguistic plan. Eye movements capture features of this linkage in a tendency for speakers to fixate referents just before they are mentioned. The current experiment examined whether and how this pattern changes when speakers create atypical mappings. Eye movements were monitored as participants told the time from analog clocks. Half of the participants did this in the usual manner. For the other participants, the denotations of the clock hands were reversed, making the big hand the hour and the little hand the minute. Eye movements revealed that it was not the visual features or configuration of the hands that determined gaze patterns, but rather top-down control from upcoming referring expressions. Differences in eye-voice spans further suggested a process in which scene elements are relationally structured before a linguistic plan is executed. This provides evidence for structural rather than lexical incrementality in planning and supports a "seeing-for-saying" hypothesis in which the visual system is harnessed to the linguistic demands of an upcoming utterance. |
Gustav Kuhn; Lauren Tewson; Lea Morpurgo; Susannah F. Freebody; Anna S. Musil; Susan R. Leekam Developmental changes in the control of saccadic eye movements in response to directional eye gaze and arrows Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 10, pp. 1919–1929, 2011. @article{Kuhn2011a, We investigated developmental differences in oculomotor control between 10-year-old children and adults using a central interference task. In this task, the colour of a fixation point instructed participants to saccade either to the left or to the right. These saccade directions were either congruent or incongruent with two types of distractor cue: either the direction of eye gaze of a centrally presented schematic face, or the direction of arrows. Children had greater difficulties inhibiting the distractor cues than did adults, which revealed itself in longer saccade latencies for saccades that were incongruent with the distractor cues as well as more errors on these incongruent trials than on congruent trials. Counter to our prediction, in terms of saccade latencies, both children and adults had greater difficulties inhibiting the arrow than the eye gaze distractors. |
Gustav Kuhn; Jason Tipples Increased gaze following for fearful faces. It depends on what you're looking for! Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 1, pp. 89–95, 2011. @article{Kuhn2011, An oculomotor visual search task was used to investigate how participants follow the gaze of a nonpredictive and task-irrelevant distractor face, and the way in which this gaze following is influenced by the emotional expression (fearful vs. happy) as well as by participants' goal. Previous research has suggested that fearful faces should result in stronger cueing effects than happy faces. Our results demonstrated that the degree to which the emotional expression influenced this gaze following varied as a function of the search target. When searching for a threatening target, participants were more likely to look in the direction of eye gaze on a fearful compared to a happy face. However, when searching for a pleasant target, this stronger cueing effect for fearful faces disappeared. Therefore, gaze following is influenced by contextual factors such as the emotional expression, as well as the participant's goal. |