All EyeLink Publications
All of the 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below. You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2010 |
Gang Luo; Tyler W. Garaas; Marc Pomplun; Eli Peli Inconsistency between peri-saccadic mislocalization and compression: evidence for separate "what" and "where" visual systems Journal Article In: Journal of Vision, vol. 10, no. 12, pp. 1–8, 2010. @article{Luo2010, The view of two separate "what" and "where" visual systems is supported by compelling neurophysiological evidence. However, very little direct psychophysical evidence has been presented to suggest that the two functions can be separated in neurologically intact persons. Using a peri-saccadic perception paradigm in which bars of different lengths were flashed around saccade onset, we directly measured the perceived object size (a "what" attribute) and location (a "where" attribute). We found that the perceived object location shifted toward the saccade target to show strongly compressed localization, whereas the perceived object size was not compressed accordingly. This dissociation indicates that the perceived size is not determined by spatial localization of the object boundary, providing direct psychophysical evidence to support that "what" and "where" attributes of objects are indeed processed separately. |
Victor Kuperman; Raymond Bertram; R. Harald Baayen Processing trade-offs in the reading of Dutch derived words Journal Article In: Journal of Memory and Language, vol. 62, no. 2, pp. 83–97, 2010. @article{Kuperman2010, This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., plaats+ing "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter suffixes, we observe a stronger effect of full-forms (derived word frequency) on reading times than in words with longer suffixes. Also, processing times increase if the base word (plaats) and the suffix (-ing) differ in the amount of information carried by their morphological families (sets of words that share the base or the suffix). We model this imbalance of informativeness in the morphological families with the information-theoretical measure of relative entropy and demonstrate its predictivity for the processing times. The observed processing trade-offs are discussed in the context of current models of morphological processing. |
Victor Kuperman; Michael Dambacher; Antje Nuthmann; Reinhold Kliegl The effect of word position on eye-movements in sentence and paragraph reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 9, pp. 1838–1857, 2010. @article{Kuperman2010a, The present study explores the role of the word position-in-text in sentence and paragraph reading. Three eye-movement data sets based on the reading of Dutch and German unrelated sentences reveal a sizeable, replicable increase in reading times over several words at the beginning and the end of sentences. The data from the paragraph-based English-language Dundee corpus replicate the pattern and also indicate that the increase in inspection times is driven by the visual boundaries of the text organized in lines, rather than by syntactic sentence boundaries. We argue that this effect is independent of several established lexical, contextual, and oculomotor predictors of eye-movement behaviour. We also provide evidence that the effect of word position-in-text has two independent components: a start-up effect, arguably caused by a strategic oculomotor programme of saccade planning over the line of text, and a wrap-up effect, originating in cognitive processes of comprehension and semantic integration. |
Kaitlin E. W. Laidlaw; Alan Kingstone The time course of vertical, horizontal and oblique saccade trajectories: Evidence for greater distractor interference during vertical saccades Journal Article In: Vision Research, vol. 50, no. 9, pp. 829–837, 2010. @article{Laidlaw2010, The present study aimed to characterize the effect of a nearby distractor on vertical, horizontal, and oblique saccade curvature under normal saccade preparation times. Consistent with previous findings, longer-latency vertical saccades showed greater curvature away from a distractor than did oblique or horizontal saccades. At short latencies, vertical saccades also showed greater curvature towards the distractor. A neural explanation for why vertical saccades show greater interference from a distractor is theorized. |
Maren Lappe-Osthege; Silke Talamo; Christoph Helmchen; Andreas Sprenger Overestimation of saccadic peak velocity recorded by electro-oculography compared to video-oculography and scleral search coil Journal Article In: Clinical Neurophysiology, vol. 121, no. 10, pp. 1786–1787, 2010. @article{LappeOsthege2010, Peak velocity of saccadic eye movements is a crucial motor parameter in clinical neurology and oculomotor research. It may help to assign patients' lesions to even very small brain regions which are not (yet) recognizable with magnetic resonance imaging (MRI). Horizontal slowing of saccades is associated with pontine lesions of the paramedian pontine reticular formation while saccades in cerebellar or cortical lesions are usually not slowed. Accordingly, saccade velocity helps to classify and distinguish neurodegenerative and genetic movement disorders, e.g. Parkinson's disease or spinocerebellar ataxias. However, related studies are often not easily comparable as different recording techniques (e.g. electro-oculography, video-oculography, and scleral search coil) and different paradigms (e.g., reflexive, self-paced saccades) are used irrespective of their influence on saccade velocity. Therefore, we intraindividually compared saccadic peak velocities using electro-oculography (EOG), video-oculography (VOG) and scleral search coil (SSC) in a variety of saccade types and conditions to assess the comparability of these methods. |
Jochen Laubrock; Reinhold Kliegl; Martin Rolfs; Ralf Engbert When do microsaccades follow spatial attention? Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 3, pp. 683–694, 2010. @article{Laubrock2010, Following up on an exchange about the relation between microsaccades and spatial attention (Horowitz, Fencsik, Fine, Yurgenson, & Wolfe, 2007; Horowitz, Fine, Fencsik, Yurgenson, & Wolfe, 2007; Laubrock, Engbert, Rolfs, & Kliegl, 2007), we examine the effects of selection criteria and response modality. We show that for Posner cuing with saccadic responses, microsaccades go with attention in at least 75% of cases (almost 90% if probability matching is assumed) when they are first (or only) microsaccades in the cue–target interval and when they occur between 200 and 400 msec after the cue. The relation between spatial attention and the direction of microsaccades drops to chance level for unselected microsaccades collected during manual-response conditions. Analyses of data from four cross-modal cuing experiments demonstrate an above-chance, intermediate link for visual cues, but no systematic relation for auditory cues. Thus, the link between spatial attention and direction of microsaccades depends on the experimental condition and time of occurrence, but it can be very strong. |
Hyung Lee; Mathias Abegg; Amadeo Rodriguez; John D. Koehn; Jason J. S. Barton Why do humans make antisaccade errors? Journal Article In: Experimental Brain Research, vol. 201, no. 1, pp. 65–73, 2010. @article{Lee2010, Antisaccade errors are attributed to failure to inhibit the habitual prosaccade. We investigated whether the amount of information about the required response the patient has before the trial begins also contributes to error rate. Participants performed antisaccades in five conditions. The traditional design had two goals on the left and right horizontal meridians. In the second condition, stimulus-goal confusability between trials was eliminated by displacing one goal upward. In the third, hemifield uncertainty was eliminated by placing both goals in the same hemifield. In the fourth, goal uncertainty was eliminated by having only one goal, but interspersed with no-go trials. The fifth condition eliminated all uncertainty by having the same goal on every trial. Antisaccade error rate increased by 2% with each additional source of uncertainty, with the main effect being hemifield information, and a trend for stimulus-goal confusability. A control experiment for the effects of increasing angular separation between targets without changing these types of prior response information showed no effects on latency or error rate. We conclude that other factors besides prosaccade inhibition contribute to antisaccade error rates in traditional designs, possibly by modulating the strength of goal activation. |
Tomas Knapen; Martin Rolfs; Mark Wexler; Patrick Cavanagh The reference frame of the tilt aftereffect Journal Article In: Journal of Vision, vol. 10, no. 1, pp. 1–13, 2010. @article{Knapen2010, Perceptual aftereffects provide a sensitive tool to investigate the influence of eye and head position on visual processing. There have been recent indications that the TAE is remapped around the time of a saccade to remain aligned to the adapting location in the world. Here, we investigate the spatial frame of reference of the TAE by independently manipulating retinal position, gaze orientation, and head orientation between adaptation and test. The results show that the critical factor in the TAE is the correspondence between the adaptation and test locations in a retinotopic frame of reference, whereas world- and head-centric frames of reference do not play a significant role. Our results confirm that adaptation to orientation takes place at retinotopic levels of visual processing. We suggest that the remapping process that plays a role in visual stability does not transfer feature gain information around the time of eye (or head) movements. |
Peter Ko; Sepp Kollmorgen; Nora Nortmann; Sylvia Schröder; Peter König Influence of low-level stimulus features, task dependent factors, and spatial biases on overt visual attention Journal Article In: PLoS Computational Biology, vol. 6, no. 5, pp. e1000791, 2010. @article{Ko2010, Visual attention is thought to be driven by the interplay between low-level visual features and task dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task dependent information content derived from our subjects' classification responses and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. 
Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention. |
Peter J. Kohler; G. P. Caplovitz; P.-J. Hsieh; J. Sun; P. U. Tse Motion fading is driven by perceived, not actual angular velocity Journal Article In: Vision Research, vol. 50, no. 11, pp. 1086–1094, 2010. @article{Kohler2010, After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here we examine the relationship between such 'motion fading' and perceived angular velocity. Using several different dot patterns that generate emergent virtual contours, we demonstrate that whenever there is a difference in the perceived angular velocity of two patterns of dots that are in fact rotating at the same angular velocity, there is also a difference in the time to undergo motion fading for those two patterns. Conversely, whenever two patterns show no difference in perceived angular velocity, even if in fact rotating at different angular velocities, we find no difference in the time to undergo motion fading. Thus, motion fading is driven by the perceived rather than actual angular velocity of a rotating stimulus. |
Andrew J. Kolarik; Tom H. Margrain; Tom C. A. Freeman Precision and accuracy of ocular following: Influence of age and type of eye movement Journal Article In: Experimental Brain Research, vol. 201, no. 2, pp. 271–282, 2010. @article{Kolarik2010, Previous work on ocular-following emphasises the accuracy of tracking eye movements. However, a more complete understanding of oculomotor control should account for variable error as well. We identify two forms of precision: 'shake', occurring over shorter timescales; 'drift', occurring over longer timescales. We show how these can be computed across a series of eye movements (e.g. a sequence of slow-phases or collection of pursuit trials) and then measure accuracy and precision for younger and older observers executing different types of eye movement. Overall, we found older observers were less accurate over a range of stimulus speeds and less precise at faster eye speeds. Accuracy declined more steeply for reflexive eye movements and shake was independent of speed. In all other instances, the two measures of precision expanded non-linearly with mean eye speed. We also found that shake during fixation was similar to shake for reflexive eye movement. The results suggest that deliberate and reflexive eye movement do not share a common non-linearity or a common noise source. The relationship of our data to previous studies is discussed, as are the consequences of imprecise eye movement for models of oculomotor control and perception during eye movement. |
Kerstin Königs; Frank Bremmer Localization of visual and auditory stimuli during smooth pursuit eye movements Journal Article In: Journal of Vision, vol. 10, no. 8, pp. 1–14, 2010. @article{Koenigs2010, Humans move their eyes more often than their heart beats. Although these eye movements induce large retinal image shifts, we perceive our world as stable. Yet, this perceptual stability is not complete. A number of studies have shown that visual targets which are briefly presented during such eye movements are mislocalized in a characteristic manner. It is largely unknown, however, if auditory stimuli are also mislocalized, i.e. whether or not perception generalizes across senses and space is represented supramodally. In our current study subjects were asked to localize brief visual and auditory stimuli that were presented during smooth pursuit in the dark. In addition, we measured auditory and visual detection thresholds. Confirming previous studies, perceived visual positions were shifted in direction of the pursuit. This shift was stronger for the hemifield the eye was heading towards (foveopetal). Perceptual auditory space was compressed towards the pursuit target (ventriloquism effect). This perceptual error was slightly reduced during pursuit as compared to fixation and differed clearly from the mislocalization of visual targets. While we found an influence of pursuit on localization, we found no such effect on the detection of visual and auditory stimuli. Taken together, our results do not provide evidence for the hypothesis of a supramodal representation of space during active oculomotor behavior. |
Xingshan Li; Gordon D. Logan; N. Jane Zbrodoff Where do we look when we count? The role of eye movements in enumeration Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 2, pp. 409–426, 2010. @article{Li2010, Two experiments addressed the coupling between eye movements and the cognitive processes underlying enumeration. Experiment 1 compared eye movements in a counting task with those in a “look” task, in which subjects were told to look at each dot in a pattern once and only once. Experiment 2 presented the same dot patterns to every subject twice, to measure the consistency with which dots were fixated between and within subjects. In both experiments, the number of fixations increased linearly with the number of objects to be enumerated, consistent with tight coupling between eye movements and enumeration. However, analyses of fixation locations showed that subjects tended to look at dots in dense, central regions of the display and tended not to look at dots in sparse, peripheral regions of the display, suggesting a looser coupling between eye movements and enumeration. Thus, the eyes do not mirror the enumeration process very directly. |
Hanneke Liesker; Eli Brenner; Jeroen B. J. Smeets Eye-hand coupling is not the cause of manual return movements when searching Journal Article In: Experimental Brain Research, vol. 201, no. 2, pp. 221–227, 2010. @article{Liesker2010, When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so that subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between the return movements and movement speed when comparing the two conditions was the same as the relationship between these two when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control. |
Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg (Un)-coupling gaze and attention outside central vision Journal Article In: Journal of Vision, vol. 10, no. 11, pp. 1–13, 2010. @article{Lingnau2010, In normal vision, shifts of attention and gaze are tightly coupled. Here we ask if this coupling affects performance also when central vision is not available. To this aim, we trained normal-sighted participants to perform a visual search task while vision was restricted to a gaze-contingent viewing window ("forced field location") either in the left, right, upper, or lower visual field. Gaze direction was manipulated within a continuous visual search task that required leftward, rightward, upward, or downward eye movements. We found no general performance advantage for a particular part of the visual field or for a specific gaze direction. Rather, performance depended on the coordination of visual attention and eye movements, with impaired performance when sustained attention and gaze have to be moved in opposite directions. Our results suggest that during early stages of central visual field loss, the optimal location for the substitution of foveal vision does not depend on the particular retinal location alone, as has previously been thought, but also on the gaze direction required by the task the patient wishes to perform. |
Chia-Lun Liu; Hui-Yan Chiau; Philip Tseng; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan Antisaccade cost is modulated by contextual experience of location probability Journal Article In: Journal of Neurophysiology, vol. 103, no. 3, pp. 1438–1447, 2010. @article{Liu2010, It is well known that pro- and antisaccades may deploy different cognitive processes. However, the specific reason why antisaccades have longer latencies than prosaccades is still under debate. In three experiments, we studied the factors contributing to the antisaccade cost by taking attentional orienting and target location probabilities into account. In experiment 1, using a new antisaccade paradigm, we directly tested Olk and Kingstone's hypothesis, which attributes longer antisaccade latency to the time it takes to reorient from the visual target to the opposite saccadic target. By eliminating the reorienting component in our paradigm, we found no significant difference between the latencies of the two saccade types. In experiment 2, we varied the proportion of prosaccades made to certain locations and found that latencies in the high location-probability (75%) condition were faster than those in the low location-probability condition. Moreover, antisaccade latencies were significantly longer when location probability was high. This pattern can be explained by the notion of competing pathways for pro- and antisaccades in findings of others. In experiment 3, we further explored the degrees of modulation of location probability by decreasing the magnitude of high probability from 75 to 65%. We again observed a pattern similar to that seen in experiment 2 but with smaller modulation effects. Together, these experiments indicate that the reorienting process is a critical factor in producing the antisaccade cost. Furthermore, the antisaccade cost can be modulated by probabilistic contextual information such as location probabilities. |
Kentaro Kotani; Yuji Yamaguchi; Takafumi Asao; Ken Horii Design of eye-typing interface using saccadic latency of eye movement Journal Article In: International Journal of Human-Computer Interaction, vol. 26, no. 4, pp. 361–376, 2010. @article{Kotani2010, The objective of this study was to construct and empirically evaluate an improved, online eye-typing interface with respect to its practical usability. The system used the concept of saccadic latency, a silent period of 200 to 250 msec that precedes the initiation of a saccade, for identifying the user's intentional text entry. Ten individuals participated in the experiment that was conducted on 2 consecutive days, with three blocks of trials conducted on each day. A block included five trials, each of which involved completing the text entry of a short sentence using this eye-typing interface. The proposed interface was evaluated by the user's performance based on indices including typing speed and an error index. For defining the error index, the overproduction rates (ORs) were used. The results showed an average OR of 0.032 and average typing speed of 27.1 characters typed per minute. The result revealed that the typing speed changed as an effect of participant, day, and block. The characteristics of the proposed interface with the related characteristics of an eye-typing interface were summarized to discuss a further study for the eye-typing interface. |
A. Kotowicz; Ueli Rutishauser; Christof Koch Time course of target recognition in visual search Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 31, 2010. @article{Kotowicz2010, Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation ( approximately 170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event will trigger a saccade towards the target location. However this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. 
The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation for the subjective judgment of confidence rather than accuracy. |
André Krügel; Ralf Engbert On the launch-site effect for skipped words during reading Journal Article In: Vision Research, vol. 50, no. 16, pp. 1532–1539, 2010. @article{Kruegel2010, The launch-site effect, a systematic variation of within-word landing position as a function of launch-site distance, is among the most important oculomotor phenomena in reading. Here we show that the launch-site effect is strongly modulated in word skipping, a finding which is inconsistent with the view that the launch-site effect is caused by a saccadic-range error. We observe that distributions of landing positions in skipping saccades show an increased leftward shift compared to non-skipping saccades at equal launch-site distances. Using an improved algorithm for the estimation of mislocated fixations, we demonstrate the reliability of our results. |
Gustav Kuhn; Valerie Benson; Sue Fletcher-Watson; Hanna Kovshoff; Cristin A. McCormick; Julie A. Kirkby; Susan R. Leekam Eye movements affirm: automatic overt gaze and arrow cueing for typical adults and adults with autism spectrum disorder Journal Article In: Experimental Brain Research, vol. 201, no. 2, pp. 155–165, 2010. @article{Kuhn2010a, People with autism spectrum disorder (ASD) show reduced interest towards social aspects of the environment and a lesser tendency to follow other people's gaze in the real world. However, most studies have shown that people with ASD do respond to eye-gaze cues in experimental paradigms, though it is possible that this behaviour is based on an atypical strategy. We tested this possibility in adults with ASD using a cueing task combined with eye-movement recording. Both eye gaze and arrow pointing distractors resulted in overt cueing effects, both in terms of increased saccadic reaction times, and in proportions of saccades executed to the cued direction instead of to the target, for both participant groups. Our results confirm previous reports that eye gaze cues as well as arrow cues result in automatic orienting of overt attention. Moreover, since there were no group differences between arrow and eye gaze cues, we conclude that overt attentional orienting in ASD, at least in response to centrally presented schematic directional distractors, is typical. |
Gustav Kuhn; John M. Findlay Misdirection, attention and awareness: Inattentional blindness reveals temporal relationship between eye movements and visual awareness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 1, pp. 136–146, 2010. @article{Kuhn2010, We designed a magic trick that could be used to investigate how misdirection can prevent people from perceiving a visually salient event, thus offering a novel paradigm to examine inattentional blindness. We demonstrate that participants' verbal reports reflect what they saw rather than inferences about how they thought the trick was done and thus provide a reliable index of conscious perception. Eye movements revealed that for a subset of participants their conscious perception was not related to where they were looking at the time of the event and thus demonstrate how overt and covert attention can be spatially dissociated. However, detection of the event resulted in rapid shifts of eye movements towards the detected event, thus indicating a strong temporal link between overt and covert attention, and that covert attention can be allocated at least 2 or 3 saccade targets ahead of where people are fixating. |
Gustav Kuhn; Anastasia Kourkoulou; Susan R. Leekam How magic changes our expectations about autism Journal Article In: Psychological Science, vol. 21, no. 10, pp. 1487–1493, 2010. @article{Kuhn2010b, In the vanishing-ball illusion, the magician's social cues misdirect the audience's expectations and attention so that the audience “sees” a ball vanish in the air. Because individuals with autism spectrum disorder (ASD) are less sensitive to social cues and have superior perception for nonsocial details compared with typically developing individuals, we predicted that they would be less susceptible to the illusion. Surprisingly, the opposite result was found, as individuals with ASD were more susceptible to the illusion than a comparison group. Eye-tracking data indicated that subtle temporal delays in allocating attention might explain their heightened susceptibility. Additionally, although individuals with ASD showed typical patterns of looking to the magician's face and eyes, they were slower to launch their first saccade to the face and had difficulty in fixating the fast-moving observable ball. Considered together, the results indicate that individuals with ASD have difficulties in rapidly allocating... |
NaYoung So; Veit Stuphorn Supplementary eye field encodes option and action value for saccades with variable reward Journal Article In: Journal of Neurophysiology, vol. 104, pp. 2634–2653, 2010. @article{So2010, We recorded neuronal activity in the supplementary eye field (SEF) while monkeys made saccades to targets that yielded rewards of variable amount and uncertainty of delivery. Some SEF cells (29%) represented the anticipated value of the saccade target. These neurons encoded the value of the reward option but did not reflect the action necessary to obtain the reward. A plurality of cells (45%) represented both saccade direction and value. These neurons reflect action value, i.e., the value that is expected to follow from a specific saccade. Other cells (13%) represented only saccade direction. The SEF neurons matched the monkey's risk-seeking behavior by responding more strongly to the uncertain reward options than would be expected based on their response to the sure options and the cued outcome probability. Thus SEF neurons represented subjective, not expected, value. Across the SEF population, option-value signals developed early, ∼120 ms prior to saccade execution. Action-value and saccade direction signals developed ∼60 ms later. These results suggest that the SEF is involved in transforming option-value signals into action-value signals. However, in contrast to other oculomotor neurons, SEF neurons did not reach a constant level of activity before saccade onset. Instead the activity level of many (52%) SEF neurons still reflected value at the time just before saccade initiation. This suggests that SEF neurons guide the selection of a saccade based on value information but do not participate in the initiation of that saccade. |
John F. Soechting; Hrishikesh M. Rao; John Z. Juveli Incorporating prediction in models for two-dimensional smooth pursuit Journal Article In: PLoS ONE, vol. 5, no. 9, pp. e12574, 2010. @article{Soechting2010, A predictive component can contribute to the command signal for smooth pursuit. This is readily demonstrated by the fact that low frequency sinusoidal target motion can be tracked with zero time delay or even with a small lead. The objective of this study was to characterize the predictive contributions to pursuit tracking more precisely by developing analytical models for predictive smooth pursuit. Subjects tracked a small target moving in two dimensions. In the simplest case, the periodic target motion was composed of the sums of two sinusoidal motions (SS), along both the horizontal and the vertical axes. Motions following the same or similar paths, but having a richer spectral composition, were produced by having the target follow the same path but at a constant speed (CS), and by combining the horizontal SS velocity with the vertical CS velocity and vice versa. Several different quantitative models were evaluated. The predictive contribution to the eye tracking command signal could be modeled as a low-pass filtered target acceleration signal with a time delay. This predictive signal, when combined with retinal image velocity at the same time delay, as in classical models for the initiation of pursuit, gave a good fit to the data. The weighting of the predictive acceleration component was different in different experimental conditions, being largest when target motion was simplest, following the SS velocity profiles. |
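The model class described in this abstract — retinal image velocity at a fixed delay combined with a delayed, low-pass-filtered target-acceleration term — can be sketched in a few lines. This is a toy illustration, not the paper's fitted model: the delay, filter time constant, and acceleration gain are invented values, and retinal slip is approximated by target velocity for simplicity.

```python
import numpy as np

def predictive_pursuit_command(target_pos, dt=0.001, delay=0.1,
                               tau=0.05, k_acc=0.02):
    """Sketch of a predictive pursuit command: delayed retinal image
    velocity plus a low-pass-filtered target-acceleration signal at
    the same delay. All parameter values are illustrative assumptions."""
    vel = np.gradient(target_pos, dt)   # target velocity
    acc = np.gradient(vel, dt)          # target acceleration
    # first-order low-pass filter of the acceleration signal
    alpha = dt / (tau + dt)
    lp_acc = np.zeros_like(acc)
    for i in range(1, len(acc)):
        lp_acc[i] = lp_acc[i - 1] + alpha * (acc[i] - lp_acc[i - 1])
    d = int(round(delay / dt))          # delay in samples
    cmd = np.zeros_like(vel)
    # approximate retinal slip by delayed target velocity (stationary eye)
    cmd[d:] = vel[:-d] + k_acc * lp_acc[:-d]
    return cmd
```

Feeding in a low-frequency sinusoidal trajectory shows the structure of the command signal: zero before the first delayed sample arrives, then a weighted sum of the two delayed terms.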
Grayden J. F. Solman; Daniel Smilek Item-specific location memory in visual search Journal Article In: Vision Research, vol. 50, no. 23, pp. 2430–2438, 2010. @article{Solman2010, In two samples, we demonstrate that visual search performance is influenced by memory for the locations of specific search items across trials. We monitored eye movements as observers searched for a target letter in displays containing 16 or 24 letters. From trial to trial the configuration of the search items was either Random, fully Repeated or similar but not identical (i.e., Intermediate). We found a graded pattern of response times across conditions with slowest times in the Random condition and fastest responses in the Repeated condition. We also found that search was comparably efficient in the Intermediate and Random conditions but more efficient in the Repeated condition. Importantly, the target on a given trial was fixated more accurately in the Repeated and Intermediate conditions relative to the Random condition. We suggest a tradeoff between memory and perception in search as a function of the physical scale of the search space. |
Joo-Hyun Song; Robert M. McPeek Roles of narrow- and broad-spiking dorsal premotor area neurons in reach target selection and movement production Journal Article In: Journal of Neurophysiology, vol. 103, no. 4, pp. 2124–2138, 2010. @article{Song2010, Most visual scenes are complex and crowded, with several different objects competing for attention and action. Thus a complete understanding of the production of goal-directed actions must incorporate the higher-level process of target selection. To examine the neural substrates of target selection for visually guided reaching, we recorded the activity of isolated neurons in the dorsal premotor area (PMd) of monkeys performing a reaction-time visual search task. In this task, monkeys reached to an odd-colored target presented with three distractors. We found that PMd neurons typically discriminate the target before movement onset, ∼150-200 ms after the appearance of the search array. In one subset of neurons, discrimination occurred at a consistent time after search array onset regardless of when the reaching movement occurred, suggesting that these neurons are involved in target selection. In a second group of neurons, discrimination time depended on reach reaction time, consistent with involvement in movement production but not in target selection. To look for physiological corroboration of these two functionally defined groups, we analyzed the extracellular spike waveforms of recorded neurons. This analysis showed a population of neurons with narrow action potentials that carried signals related to target selection. A second population with broader action potentials was more heterogeneous, with some neurons showing activity related to target selection and others showing only movement production activity. These results suggest that PMd contains signals related to target selection and movement execution and that different signals are carried by distinct neural subpopulations. |
Andreas Sprenger; Maren Lappe-Osthege; Silke Talamo; Steffen Gais; Hubert Kimmig; Christoph Helmchen Eye movements during REM sleep and imagination of visual scenes Journal Article In: NeuroReport, vol. 21, no. 1, pp. 45–49, 2010. @article{Sprenger2010, It has been hypothesized that rapid eye movements (REMs) during sleep reflect the process of looking around in dreams. We questioned whether REMs differ from eye movements in wakefulness while imagining previously seen visual stimuli (dots, static images, videos). After looking at these stimuli individuals were asked to remember and imagine them. Subsequently, their REMs were recorded at the sleep laboratory. Kinematic parameters of REMs were similar to saccadic eye movements to remembered stimuli with closed eyes, irrespective of the stimulus type. In contrast, peak velocity of eye movements with open eyes was similar to REMs when semantic, but not nonsemantic, contents were imagined. Thus, REMs may be related to the exploratory saccadic behaviour used in the awake state to remember visual stimuli. |
Aidan A. Thompson; Denise Y. P. Henriques Locations of serial reach targets are coded in multiple reference frames Journal Article In: Vision Research, vol. 50, no. 24, pp. 2651–2660, 2010. @article{Thompson2010, Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. 
We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye. |
Matthew J. Thurtell; Louis F. Dell'Osso; R. John Leigh; Marcelo Mattar; Jonathan B. Jacobs; Robert L. Tomsak Effects of acetazolamide on infantile nystagmus syndrome waveforms: comparisons to contact lenses and convergence in a well-studied subject. Journal Article In: The Open Ophthalmology Journal, vol. 4, pp. 42–51, 2010. @article{Thurtell2010, AIM: To determine if acetazolamide, an effective treatment for certain inherited channelopathies, has therapeutic effects on infantile nystagmus syndrome (INS) in a well-studied subject, compare them to other therapies in the same subject and to tenotomy and reattachment (T&R) in other subjects. METHODS: Eye-movement data were taken using a high-speed digital video recording system. Nystagmus waveforms were analyzed by applying an eXpanded Nystagmus Acuity Function (NAFX) at different gaze angles and determining the Longest Foveation Domain (LFD). RESULTS: Acetazolamide improved foveation by both a 59.7% increase in the peak value of the NAFX function (from 0.395 to 0.580) and a 70% broadening of the NAFX vs Gaze Angle curve (the LFD increased from 20° to 34°). The resulting U-shaped improvement in the percent NAFX vs Gaze Angle curve varied from ~60% near the NAFX peak to over 1000% laterally. The therapeutic improvements in NAFX from acetazolamide (similar to T&R) were intermediate between those of soft contact lenses and convergence (the latter was best); for LFD improvements, acetazolamide and contact lenses were equivalent and less effective than convergence. Computer simulations suggested that damping the central oscillation driving INS was insufficient to produce the foveation improvements and increased NAFX values. CONCLUSION: Acetazolamide resulted in improved-foveation INS waveforms over a broadened range of gaze angles, probably acting at more than one site. 
This raises the question of whether hereditary INS involves an inherited channelopathy, and whether other agents with known effects on ion channels should be investigated as therapy for this condition. |
Mark Torrance Grammatical planning, execution, and control in written sentence production Journal Article In: Reading and Writing, vol. 23, no. 7, pp. 777–801, 2010. @article{Torrance2010, In this study participants were asked to describe pictured events in one type-written sentence, containing one of two different syntactic structures (subordinated vs. coordinated subject noun phrases). According to the hypothesis, the larger subordinated structure (one noun phrase including a second, subordinated, one) should be cognitively more costly and will be planned before the start of the production, whereas the coordinated structure, consisting of two syntactically equal noun phrases, can be planned locally in an incremental fashion. The hypothesis was confirmed by the analysis of the word-initial keystroke latencies as well as the eye movements towards the stimulus, indicating a stronger tendency to incremental planning in case of the coordinated structure. |
Reza Shadmehr; Jean-Jacques Orban de Xivry; Minnan Xu-Wilson; Ting-Yu Shih Temporal discounting of reward and the cost of time in motor control Journal Article In: Journal of Neuroscience, vol. 30, no. 31, pp. 10507–10516, 2010. @article{Shadmehr2010, Why do movements take a characteristic amount of time, and why do diseases that affect the reward system alter control of movements? Suppose that the purpose of any movement is to position our body in a more rewarding state. People and other animals discount future reward as a hyperbolic function of time. Here, we show that across populations of people and monkeys there is a correlation between discounting of reward and control of movements. We consider saccadic eye movements and hypothesize that duration of a movement is equivalent to a delay of reward. The hyperbolic cost of this delay not only accounts for kinematics of saccades in adults, it also accounts for the faster saccades of children, who temporally discount reward more steeply. Our theory explains why saccade velocities increase when reward is elevated, and why disorders in the encoding of reward, for example in Parkinson's disease and schizophrenia, produce changes in saccades. We show that delay of reward elevates the cost of saccades, reducing velocities. Finally, we consider coordinated movements that include motion of eyes and head and find that their kinematics is also consistent with a hyperbolic, reward-dependent cost of time. Therefore, each voluntary movement carries a cost because its duration delays acquisition of reward. The cost depends on the value that the brain assigns to stimuli, and the rate at which it discounts this value in time. The motor commands that move our eyes reflect this cost of time. |
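The core idea in this abstract — that a movement of duration T delays its reward, the reward is discounted hyperbolically, and the optimal duration trades that cost against accuracy — can be illustrated numerically. The accuracy function and all constants below are invented for illustration, not the paper's fitted model.

```python
import numpy as np

def optimal_duration(reward=1.0, k=1.0,
                     durations=np.linspace(0.01, 0.2, 200)):
    """Hyperbolic temporal discounting: a movement of duration T delays
    reward, so its value falls as reward / (1 + k*T). Faster movements
    are noisier (signal-dependent noise), so the chance of landing on
    target is assumed to rise with T. Forms and constants are
    illustrative assumptions only."""
    p_hit = 1.0 - np.exp(-durations / 0.03)        # accuracy grows with T
    value = p_hit * reward / (1.0 + k * durations)  # discounted reward
    return durations[np.argmax(value)]
```

With steeper discounting (larger k, as the abstract attributes to children), the optimum shifts toward shorter, faster movements.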
Patrick Sturt; Frank Keller; Amit Dubey Syntactic priming in comprehension: Parallelism effects with and without coordination Journal Article In: Journal of Memory and Language, vol. 62, no. 4, pp. 333–351, 2010. @article{Sturt2010, Although previous research has shown a processing facilitation for conjoined phrases that share the same structure, it is currently not clear whether this parallelism advantage is specific to particular syntactic environments such as coordination, or whether it is an example of more general effect in sentence comprehension. Here, we report three eye-tracking experiments that test for parallelism effects both in coordinated noun phrases and in subordinate clauses. The first experiment replicated previous findings, showing that the second conjunct of a coordinated noun phrase was read more quickly when it had the same structure as the first conjunct, compared with when it did not. Experiment 2 examined parallelism effects in noun phrases that were not linked by coordination. Again, a reading time advantage was found when the second noun phrase had the same structure as the first. Experiment 3 compared parallelism effects in coordinated and non-coordinated syntactic environments. The parallelism effect was replicated for both environments, and was statistically equivalent whether or not coordination was involved. This demonstrated that parallelism effects can be found outside the environment of coordination, suggesting a general syntactic priming mechanism as the underlying explanation. |
Riju Srimal; Clayton E. Curtis Secondary adaptation of memory-guided saccades Journal Article In: Experimental Brain Research, vol. 206, no. 1, pp. 35–46, 2010. @article{Srimal2010, Adaptation of saccade gains in response to errors keeps vision and action co-registered in the absence of awareness or effort. Timing is key, as the visual error must be available shortly after the saccade is generated or adaptation does not occur. Here, we tested the hypothesis that when feedback is delayed, learning still occurs, but does so through small secondary corrective saccades. Using a memory-guided saccade task, we gave feedback about the accuracy of saccades that was falsely displaced by a consistent amount, but only after long delays. Despite the delayed feedback, over time subjects improved in accuracy toward the false feedback. They did so not by adjusting their primary saccades, but via directed corrective saccades made before feedback was given. We propose that saccade learning may be driven by different types of feedback teaching signals. One teaching signal relies upon a tight temporal relation with the saccade and contributes to obligatory learning independent of awareness. When this signal is ineffective due to delayed error feedback, a second compensatory teaching signal enables flexible adjustments to the spatial goal of saccades and helps maintain sensorimotor accuracy. |
Terrence R. Stanford; Swetha Shankar; Dino P. Massoglia; M. Gabriela Costello; Emilio Salinas Perceptual decision making in less than 30 milliseconds Journal Article In: Nature Neuroscience, vol. 13, no. 3, pp. 379–385, 2010. @article{Stanford2010, In perceptual discrimination tasks, a subject's response time is determined by both sensory and motor processes. Measuring the time consumed by the perceptual evaluation step alone is therefore complicated by factors such as motor preparation, task difficulty and speed-accuracy tradeoffs. Here we present a task design that minimizes these confounding factors and allows us to track a subject's perceptual performance with unprecedented temporal resolution. We find that monkeys can make accurate color discriminations in less than 30 ms. Furthermore, our simple task design provides a tool for elucidating how neuronal activity relates to sensory as opposed to motor processing, as demonstrated with neural data from cortical oculomotor neurons. In these cells, perceptual information acts by accelerating and decelerating the ongoing motor plans associated with correct and incorrect choices, as predicted by a race-to-threshold model, and the time course of these neural events parallels the time course of the subject's choice accuracy. |
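The race-to-threshold account described in this abstract — ongoing motor plans that are accelerated or decelerated once perceptual information arrives — can be sketched with a toy accumulator model. All rates, noise levels, and thresholds below are invented for illustration, not the authors' model parameters.

```python
import random

def race_trial(gap_ms, base_rate=1.0, boost=1.5, noise=0.4,
               threshold=100.0, rng=random):
    """One trial of a race-to-threshold sketch. Two motor plans build up
    from the 'go' signal; after gap_ms the perceptual information arrives,
    accelerating the correct plan and decelerating the incorrect one.
    Returns True if the correct plan crosses threshold first."""
    correct = wrong = 0.0
    t = 0.0
    while correct < threshold and wrong < threshold:
        if t >= gap_ms:  # perceptual information available
            rate_c, rate_w = base_rate * boost, base_rate * (2.0 - boost)
        else:            # before the information arrives: equal build-up
            rate_c = rate_w = base_rate
        correct += rate_c + rng.gauss(0.0, noise)
        wrong += rate_w + rng.gauss(0.0, noise)
        t += 1.0
    return correct >= threshold

def accuracy(gap_ms, n=500, seed=1):
    """Fraction of correct choices as a function of the sensory delay."""
    rng = random.Random(seed)
    return sum(race_trial(gap_ms, rng=rng) for _ in range(n)) / n
```

When the information arrives early the correct plan wins almost always; when it arrives only after the race is effectively over, accuracy falls to chance, mirroring the time course of choice accuracy described in the abstract.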
Adrian Staub Eye movements and processing difficulty in object relative clauses Journal Article In: Cognition, vol. 116, no. 1, pp. 71–86, 2010. @article{Staub2010, It is well known that sentences containing object-extracted relative clauses (e.g., The reporter that the senator attacked admitted the error) are more difficult to comprehend than sentences containing subject-extracted relative clauses (e.g., The reporter that attacked the senator admitted the error). Two major accounts of this phenomenon make different predictions about where, in the course of incremental processing of an object relative, difficulty should first appear. An account emphasizing memory processes (Gibson, 1998; Grodner & Gibson, 2005) predicts difficulty at the relative clause verb, while an account emphasizing experience-based expectations (Hale, 2001; Levy, 2008) predicts earlier difficulty, at the relative clause subject. Two eye movement experiments tested these predictions. Regressive saccades were much more likely from the subject noun phrase of an object relative than from the same noun phrase occurring within a subject relative (Experiment 1) or within a verbal complement clause (Experiment 2). This effect was further amplified when the relative pronoun that was omitted. However, reading time was also inflated on the object relative clause verb in both experiments. These results suggest that the violation of expectations and the difficulty of memory retrieval both contribute to the difficulty of object relative clauses, but that these two sources of difficulty have qualitatively distinct behavioral consequences in normal reading. |
Damian G. Stephen; Daniel Mirman Interactions dominate the dynamics of visual cognition Journal Article In: Cognition, vol. 115, no. 1, pp. 154–165, 2010. @article{Stephen2010, Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. |
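One generic way to distinguish additive from multiplicative (interaction-dominant) processes, in the spirit of the statistical approach this abstract mentions: additive combinations of independent components tend toward a normal distribution, whereas multiplicative interactions tend toward a lognormal, so maximum-likelihood fits of the two families can be compared. This sketch illustrates that idea only; it is not the paper's actual analysis.

```python
import math
import random

def _loglik_normal(xs, mu, sd):
    """Gaussian log-likelihood of a sample at given mean and sd."""
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (x - mu) ** 2 / (2 * sd * sd) for x in xs)

def dominant_dynamics(xs):
    """Compare a normal fit (additive dynamics) against a lognormal fit
    (multiplicative dynamics) for positive-valued data and report which
    fits better. Illustrative method, not the paper's."""
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    ll_normal = _loglik_normal(xs, mu, sd)
    logs = [math.log(x) for x in xs]
    lmu = sum(logs) / n
    lsd = math.sqrt(sum((v - lmu) ** 2 for v in logs) / n)
    # lognormal log-likelihood: normal fit to log(x) minus the Jacobian term
    ll_lognormal = _loglik_normal(logs, lmu, lsd) - sum(logs)
    return "multiplicative" if ll_lognormal > ll_normal else "additive"
```

Applied to synthetic data, exponentiated Gaussian noise (a multiplicative cascade) is classified as multiplicative, while plain Gaussian data is classified as additive.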
Catherine Stevens; Heather Winskel; Clare Howell; Lyne-Marine Vidal; Cyril Latimer; Josephine Milne-Home Perceiving dance: Schematic expectations guide experts' scanning of a contemporary dance film Journal Article In: Journal of Dance Medicine & Science, vol. 14, no. 1, pp. 19–25, 2010. @article{Stevens2010, Eye fixations and saccades (eye movements) of expert and novice dance observers were compared to determine the effect of acquired expectations on observations of human movement, body morphology, and dance configurations. As hypothesized, measured fixation times of dance experts were significantly shorter than those of novices. In a second viewing of the same sequences, novices recorded significantly shorter fixations than those recorded during viewing session 1. Saccades recorded from experts were significantly faster than those of novices. Although both experts and novices fixated background regions, most likely making use of extrafoveal or peripheral vision to anticipate movement and configurations, novices fixated background regions significantly more than experts in viewing session 1. Their enhanced speed of visual processing suggests that dance experts are adept at anticipating movement and rapidly processing material, probably aided by acquired schemata or expectations in long-term memory and recognition of body and movement configurations. |
Elizabeth A. L. Stine-Morrow; Matthew C. Shake; Joseph R. Miles; Kenton Lee; Xuefei Gao; George W. McConkie Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing Journal Article In: Psychology and Aging, vol. 25, no. 1, pp. 168–176, 2010. @article{StineMorrow2010, Previous research has suggested that older readers may self-regulate input during reading differently from the way younger readers do, so as to accommodate age-graded change in processing capacity. For example, older adults may pause more frequently for conceptual integration. Presumably, such an allocation policy would enable older readers to manage the cognitive demands of constructing a semantic representation of the text by off-loading the products of intermediate computations to long-term memory, thus decreasing memory demands as conceptual load increases. This was explicitly tested in 2 experiments measuring word-by-word reading time for sentences in which boundary salience was manipulated but in which semantic content was controlled. With both a computer-based moving-window paradigm that permits only forward eye movements, and an eye-tracking paradigm that allows measurement of regressive eye movements, we found evidence for the proposed tradeoff between early and late wrap-up. Across the 2 experiments, age groups were more similar than different in regulating processing time. However, older adults showed evidence of exaggerated early wrap-up in both experiments. These data are consistent with the notion that readers opportunistically regulate effort and that older readers can use this to good advantage to maintain comprehension. |
Sonja Stork; Anna Schubö Human cognition in manual assembly: Theories and applications Journal Article In: Advanced Engineering Informatics, vol. 24, no. 3, pp. 320–328, 2010. @article{Stork2010, Human cognition in production environments is analyzed with respect to various findings and theories in cognitive psychology. This theoretical overview describes effects of task complexity and attentional demands on both mental workload and task performance as well as presents experimental data on these topics. A review of two studies investigating the benefit of augmented reality and spatial cueing in an assembly task is given. Results demonstrate an improvement in task performance with attentional guidance while using contact analog highlighting. Improvements were obvious in reduced performance times and eye fixations as well as in increased velocity and acceleration of reaching and grasping movements. These results have various implications for the development of an assistive system. Future directions in this line of applied research are suggested. The introduced methodology illustrates how the analysis of human information processes and psychological experiments can contribute to the evaluation of engineering applications. |
Benjamin W. Tatler; Nicholas J. Wade; Hoi Kwan; John M. Findlay; Boris M. Velichkovsky Yarbus, eye movements, and vision Journal Article In: i-Perception, vol. 1, no. 1, pp. 7–27, 2010. @article{Tatler2010, The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus. |
Jessica Taubert; Pamela J. Marsh; Tracey A. Shaw When you turn the other cheek: A preference for novel viewpoints of familiar faces Journal Article In: Perception, vol. 39, no. 3, pp. 429–432, 2010. @article{Taubert2010, Inferences about the psychobiological processes that underlie face perception have been drawn from the spontaneous behaviour of eyes. Using a visual paired-comparison task, we recorded the eye movements of twenty adults as they viewed pairs of faces that differed in their relative familiarity. The results indicate an advantage for novel viewpoints of familiar faces over familiar viewpoints of familiar faces and novel faces. We conclude that this preference serves the face recognition system by collecting the variation necessary to build robust representations of identity. |
Abtine Tavassoli; Dario L. Ringach When your eyes see more than you do Journal Article In: Current Biology, vol. 20, no. 3, pp. 93–94, 2010. @article{Tavassoli2010, Visual information is used by the brain to construct a conscious experience of the visual world and to guide motor actions [1]. Here we report a study of how eye movements and perception relate to each other. We compared the ability of human observers to perceive image motion with the reliability of their eyes to track the motion of a target [2], [3] and [4], the goal being to test whether both motor and sensory processes are based on the same set of signals and limited by a shared source of noise [2] and [4]. We found that the oculomotor system can detect fluctuations in the velocity of a moving target better than the observer. Surprisingly, in some conditions, eye movements reliably respond to the velocity fluctuations of a moving target that are otherwise perceptually invisible to the subjects. The implication is that visual motion signals exist in the brain that can be used to guide motor actions without evoking a perceptual outcome or being accessible to conscious scrutiny. |
Illia Tchernikov; Mazyar Fallah A color hierarchy for automatic target selection Journal Article In: PLoS ONE, vol. 5, no. 2, pp. e9338, 2010. @article{Tchernikov2010, Visual processing of color starts at the cones in the retina and continues through ventral stream visual areas, called the parvocellular pathway. Motion processing also starts in the retina but continues through dorsal stream visual areas, called the magnocellular system. Color and motion processing are functionally and anatomically discrete. Previously, motion processing areas MT and MST have been shown to have no color selectivity to a moving stimulus; the neurons were colorblind whenever color was presented along with motion. This occurs when the stimuli are luminance-defined versus the background and is considered achromatic motion processing. Is motion processing independent of color processing? We find that motion processing is intrinsically modulated by color. Color modulated smooth pursuit eye movements produced upon saccading to an aperture containing a surface of coherently moving dots upon a black background. Furthermore, when two surfaces that differed in color were present, one surface was automatically selected based upon a color hierarchy. The strength of that selection depended upon the distance between the two colors in color space. A quantifiable color hierarchy for automatic target selection has wide-ranging implications from sports to advertising to human-computer interfaces. |
Anna L. Telling; Antje S. Meyer; Glyn W. Humphreys Distracted by relatives: Effects of frontal lobe damage on semantic distraction Journal Article In: Brain and Cognition, vol. 73, no. 3, pp. 203–214, 2010. @article{Telling2010, When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke, Humphreys, Watson, Meyer, & Telling, 2008; Moores, Laiti, & Chelazzi, 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection. |
Masahiko Terao; Junji Watanabe; Akihiro Yagi; Shin'ya Nishida Smooth pursuit eye movements improve temporal resolution for color perception Journal Article In: PLoS ONE, vol. 5, no. 6, pp. e11214, 2010. @article{Terao2010, Human observers see a single mixed color (yellow) when different colors (red and green) rapidly alternate. Accumulating evidence suggests that the critical temporal frequency beyond which chromatic fusion occurs does not simply reflect the temporal limit of peripheral encoding. However, it remains poorly understood how the central processing controls the fusion frequency. Here we show that the fusion frequency can be elevated by extra-retinal signals during smooth pursuit. This eye movement can keep the image of a moving target in the fovea, but it also introduces a backward retinal sweep of the stationary background pattern. We found that the fusion frequency was higher when retinal color changes were generated by pursuit-induced background motions than when the same retinal color changes were generated by object motions during eye fixation. This temporal improvement cannot be ascribed to a general increase in contrast gain of specific neural mechanisms during pursuit, since the improvement was not observed with a pattern flickering without changing position on the retina or with a pattern moving in the direction opposite to the background motion during pursuit. Our findings indicate that chromatic fusion is controlled by a cortical mechanism that suppresses motion blur. A plausible mechanism is that eye-movement signals change spatiotemporal trajectories along which color signals are integrated so as to reduce chromatic integration at the same locations (i.e., along stationary trajectories) on the retina that normally causes retinal blur during fixation. |
Jan Theeuwes; Sebastiaan Mathôt; Alan Kingstone Object-based eye movements: The eyes prefer to stay within the same object Journal Article In: Attention, Perception, and Psychophysics, vol. 72, no. 3, pp. 597–601, 2010. @article{Theeuwes2010, The present study addressed the question of whether we prefer to make eye movements within or between objects. More specifically, when fixating one end of an object, are we more likely to make the next saccade within that same object or to another object? Observers had to discriminate small letters placed on rectangles similar to those used by Egly, Driver, and Rafal (1994). Following an exogenous cue, observers made a saccade to one end of one of the rectangles. The small target letter, which could be discriminated only after it had been fixated, could appear either within the same object or on a different object. Consistent with object-based attention, we show that observers prefer to make an eye movement to the other end of the same, fixated object rather than to the equidistant end of a different object. It is concluded that there is a preference to make eye shifts within the same object, rather than between objects. |
Dhushan Thevarajah; Ryan Webb; Christopher Ferrall; Michael C. Dorris Modeling the value of strategic actions in the superior colliculus Journal Article In: Frontiers in Behavioral Neuroscience, vol. 3, pp. 57, 2010. @article{Thevarajah2010, In learning models of strategic game play, an agent constructs a valuation (action value) over possible future choices as a function of past actions and rewards. Choices are then stochastic functions of these action values. Our goal is to uncover a neural signal that correlates with the action value posited by behavioral learning models. We measured activity from neurons in the superior colliculus (SC), a midbrain region involved in planning saccadic eye movements, while monkeys performed two saccade tasks. In the strategic task, monkeys competed against a computer in a saccade version of the mixed-strategy game "matching-pennies". In the instructed task, saccades were elicited through explicit instruction rather than free choices. In both tasks neuronal activity and behavior were shaped by past actions and rewards with more recent events exerting a larger influence. Further, SC activity predicted upcoming choices during the strategic task and upcoming reaction times during the instructed task. Finally, we found that neuronal activity in both tasks correlated with an established learning model, the Experience Weighted Attraction model of action valuation (Camerer and Ho, 1999). Collectively, our results provide evidence that action values hypothesized by learning models are represented in the motor planning regions of the brain in a manner that could be used to select strategic actions. |
Jedediah M. Singer; David L. Sheinberg Temporal cortex neurons encode articulated actions as slow sequences of integrated poses Journal Article In: Journal of Neuroscience, vol. 30, no. 8, pp. 3133–3145, 2010. @article{Singer2010, Form and motion processing pathways of the primate visual system are known to be interconnected, but there has been surprisingly little investigation of how they interact at the cellular level. Here we explore this issue with a series of three electrophysiology experiments designed to reveal the sources of action selectivity in monkey temporal cortex neurons. Monkeys discriminated between actions performed by complex, richly textured, rendered bipedal figures and hands. The firing patterns of neurons contained enough information to discriminate the identity of the character, the action performed, and the particular conjunction of action and character. This suggests convergence of motion and form information within single cells. Form and motion information in isolation were both sufficient to drive action discrimination within these neurons, but removing form information caused a greater disruption to the original response. Finally, we investigated the temporal window across which visual information is integrated into a single pose (or, equivalently, the timing with which poses are differentiated). Temporal cortex neurons under normal conditions represent actions as sequences of poses integrated over approximately 120 ms. They receive both motion and form information, however, and can use either if the other is absent. |
Chris M. R. Smerecnik; Ilse Mesters; Loes T. E. Kessels; Robert A. C. Ruiter; Nanne K. De Vries; Hein De Vries In: Risk Analysis, vol. 30, no. 9, pp. 1387–1398, 2010. @article{Smerecnik2010, Risk communications are an integral aspect of health education and promotion. However, the commonly used textual risk information is relatively difficult to understand for the average recipient. Consequently, researchers and health promoters have started to focus on so-called decision aids, such as tables and graphs. Although tabular and graphical risk information more effectively communicate risks than textual risk information, the cognitive mechanisms responsible for this enhancement are unclear. This study aimed to examine two possible mechanisms (i.e., cognitive workload and attention). Cognitive workload (mean pupil size and peak pupil dilation) and attention directed to the risk information (viewing time, number of eye fixations, and eye fixation durations) were both measured in a between-subjects experimental design. The results suggest that graphical risk information facilitates comprehension of that information because it attracts and holds attention for a longer period of time than textual risk information. Graphs are thus a valuable asset to risk communication practice for two reasons: first, they tend to attract attention and, second, when attended to, they elicit information extraction with relatively little cognitive effort, ultimately resulting in better comprehension. |
Daniel Smilek; Jonathan S. A. Carriere; J. Allan Cheyne Out of mind, out of sight: Eye blinking as indicator and embodiment of mind wandering Journal Article In: Psychological Science, vol. 21, no. 6, pp. 786–789, 2010. @article{Smilek2010, Mind wandering, in which cognitive processing of the external environment decreases in favor of internal processing, has been consistently associated with errors on tasks requiring sustained attention and continuous stimulus monitoring. The present investigation is based on the idea that blink rate might serve to modulate trade-offs between attention to mind-wandering thoughts and to external task-related stimuli. To assess the relation between eye blinks and mind wandering, we compared blink rates during probe-caught episodes of mind wandering and on-task periods of reading. We also analyzed fixation frequency and fixation duration as a function of mind wandering. Analysis of the rate of eye fixations revealed that the eyes fixated less often during mind wandering than when subjects were on task. Analyses of average fixation durations failed to detect any significant differences between episodes of mind wandering and on-task periods. |
Tom Foulsham; Joey T. Cheng; Jessica L. Tracy; Joseph Henrich; Alan Kingstone Gaze allocation in a dynamic situation: Effects of social status and speaking Journal Article In: Cognition, vol. 117, no. 3, pp. 319–331, 2010. @article{Foulsham2010a, Human visual attention operates in a context that is complex, social and dynamic. To explore this, we recorded people taking part in a group decision-making task and then showed video clips of these situations to new participants while tracking their eye movements. Observers spent the majority of time looking at the people in the videos, and in particular at their eyes and faces. The social status of the people in the clips had been rated by their peers in the group task, and this status hierarchy strongly predicted where eye-tracker participants looked: high-status individuals were gazed at much more often, and for longer, than low-status individuals, even over short, 20-s videos. Fixation was temporally coupled to the person who was talking at any one time, but this did not account for the effect of social status on attention. These results are consistent with a gaze system that is attuned to the presence of other individuals, to their social status within a group, and to the information most useful for social interaction. |
Tom Foulsham; Alan Kingstone Asymmetries in the direction of saccades during perception of scenes and fractals: Effects of image type and image features Journal Article In: Vision Research, vol. 50, no. 8, pp. 779–795, 2010. @article{Foulsham2010, The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance. |
Alessio Fracasso; Alfonso Caramazza; David Melcher Continuous perception of motion and shape across saccadic eye movements Journal Article In: Journal of Vision, vol. 10, no. 13, pp. 1–17, 2010. @article{Fracasso2010, Although our naïve experience of visual perception is that it is smooth and coherent, the actual input from the retina involves brief and discrete fixations separated by saccadic eye movements. This raises the question of whether our impression of stable and continuous vision is merely an illusion. To test this, we examined whether motion perception can "bridge" a saccade in a two-frame apparent motion display in which the two frames were separated by a saccade. We found that transformational apparent motion, in which an object is seen to change shape and even move in three dimensions during the motion trajectory, continues across saccades. Moreover, participants preferred an interpretation of motion in spatial, rather than retinal, coordinates. The strength of the motion percept depended on the temporal delay between the two motion frames and was sufficient to give rise to a motion-from-shape aftereffect, even when the motion was defined by a second-order shape cue ("phantom transformational apparent motion"). These findings suggest that motion and shape information are integrated across saccades into a single, coherent percept of a moving object. |
Tom C. A. Freeman; Rebecca A. Champion; Paul A. Warren A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement Journal Article In: Current Biology, vol. 20, no. 8, pp. 757–762, 2010. @article{Freeman2010, During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion. |
Cheryl Frenck-Mestre; Nathalie Zardan; Annie Colas; Alain Ghio Eye-movement patterns of readers with down syndrome during sentence-processing: An exploratory study Journal Article In: American Journal on Intellectual and Developmental Disabilities, vol. 115, no. 3, pp. 193–206, 2010. @article{FrenckMestre2010, Eye movements were examined to determine how readers with Down syndrome process sentences online. Participants were 9 individuals with Down syndrome ranging in reading level from Grades 1 to 3 and a reading-level-matched control group. For syntactically simple sentences, the pattern of reading times was similar for the two groups, with longer reading times found at sentence end. This "wrap-up" effect was also found in the first reading of more complex sentences for the control group, whereas it only emerged later for the readers with Down syndrome. Our results provide evidence that eye movements can be used to investigate reading in individuals with Down syndrome and underline the need for future studies. |
Hans Peter Frey; Shane P. Kelly; Edmund C. Lalor; John J. Foxe Early spatial attentional modulation of inputs to the fovea Journal Article In: Journal of Neuroscience, vol. 30, no. 13, pp. 4547–4551, 2010. @article{Frey2010, Attending to a specific spatial location modulates responsivity of neurons with receptive fields processing that part of the environment. A major outstanding question is whether attentional modulation operates differently for the foveal (central) representation of the visual field than it does for the periphery. Indeed, recent animal electrophysiological recordings suggest that attention differentially affects spatial integration for central and peripheral receptive fields in primary visual cortex. In human electroencephalographic recordings, spatial attention to peripheral locations robustly modulates activity in early visual regions, but it has been claimed that this mechanism does not operate in foveal vision. Here, however, we show clear early attentional modulation of foveal stimulation with the same timing and cortical sources as seen for peripheral stimuli, demonstrating that attentional gain control operates similarly across the entire field of view. These results imply that covertly attending away from the center of gaze, which is a common paradigm in behavioral and electrophysiological studies of attention, results in a precisely timed push–pull mechanism. While the amplitude of the initial response to stimulation at attended peripheral locations is significantly increased beginning at 80 ms, the amplitude of the response to foveal stimulation begins to be attenuated. |
Shai Gabay; Avishai Henik; Libe Gradstein Ocular motor ability and covert attention in patients with Duane Retraction Syndrome Journal Article In: Neuropsychologia, vol. 48, no. 10, pp. 3102–3109, 2010. @article{Gabay2010, Is orienting of spatial attention dependent on normal functioning of the ocular motor system? We investigated the role of motor pathways in covert orienting (attentional orienting without performing eye movements) by studying three patients suffering from Duane Retraction Syndrome, a congenital impairment in executing horizontal eye movements restricted to specific gaze directions. Patients showed a typical exogenous (reflexive) attention effect when the target was presented in visual fields to which they could perform an eye movement. This effect was not present when the target was presented in the visual field to which they could not perform eye movements. These findings stress the link between eye movements and attention. Specifically, they bring out the importance of the ability to execute appropriate eye movements for attentional orienting. We suggest that the relevant information about eye movement ability is provided by feedback from lower motor structures to higher attentional areas. |
Wolfgang Einhäuser; Christof Koch; Olivia Carter Pupil dilation betrays the timing of decisions Journal Article In: Frontiers in Human Neuroscience, vol. 4, pp. 18, 2010. @article{Einhaeuser2010, The notion of "mind-reading" by carefully observing another individual's physiological responses has recently become commonplace in popular culture, particularly in the context of brain imaging. The question remains, however, whether outwardly accessible physiological signals indeed betray a decision before a person voluntarily reports it. In one experiment we asked observers to push a button at any time during a 10-s period ("immediate overt response"). In a series of three additional experiments observers were asked to select one number from five sequentially presented digits but concealed their decision until the trial's end ("covert choice"). In these experiments observers either had to choose the digit themselves under conditions of reward and no reward, or were instructed which digit to select via an external cue provided at the time of the digit presentation. In all cases pupil dilation alone predicted the choice (timing of button response or chosen digit, respectively). Consideration of the average pupil-dilation responses, across all experiments, showed that this prediction of timing was distinct from a general arousal or reward-anticipation response. Furthermore, the pupil dilation appeared to reflect the post-decisional consolidation of the selected outcome rather than the pre-decisional cognitive appraisal component of the decision. Given the tight link between pupil dilation and norepinephrine levels during constant illumination, our results have implications beyond the tantalizing mind-reading speculations. These findings suggest that similar noradrenergic mechanisms may underlie the consolidation of both overt and covert decisions. |
Brianna M. Eiter; Albrecht W. Inhoff Visual word recognition during reading is followed by subvocal articulation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 36, no. 2, pp. 457–470, 2010. @article{Eiter2010, Three experiments examined whether the identification of a visual word is followed by its subvocal articulation during reading. An irrelevant spoken word (ISW) that was identical, phonologically similar, or dissimilar to a visual target word was presented when the eyes moved to the target in the course of sentence reading. Sentence reading was further accompanied by either a sequential finger tapping task (Experiment 1) or an articulatory suppression task (Experiment 2). Experiment 1 revealed sound-specific interference from a phonologically similar ISW during posttarget viewing. This interference was absent in Experiment 2, where similar and dissimilar ISWs impeded target and posttarget reading equally. Experiment 3 showed that articulatory suppression left the lexical processing of visual words intact and that it did not diminish the influence of visual word recognition on eye guidance. The presence of sound-specific interference during posttarget reading in Experiment 1 is attributed to deleterious effects of a phonologically similar ISW on the subvocal articulation of a target. Its absence in Experiment 2 is attributed to the suppression of a target's subvocal articulation. |
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold Visual search performance among persons with schizophrenia as a function of target eccentricity Journal Article In: Neuropsychology, vol. 24, no. 2, pp. 192–198, 2010. @article{Elahipanah2010, The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric, and their performance was more similar to healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. |
Nick C. Ellis; Nuria Sagarra Learned attention effects in L2 temporal reference: The first hour and the next eight semesters Journal Article In: Language Learning, vol. 60, pp. 85–108, 2010. @article{Ellis2010, This article relates adults' difficulty acquiring foreign languages to the associative learning phenomena of cue salience, cue complexity, and the blocking of later experienced cues by earlier learned ones. It examines short- and long-term learned attention effects in adult acquisition of lexical (adverbs) and morphological cues (verbal inflections) for temporal reference in Latin (1 hr of controlled laboratory learning) and Spanish (three to eight semesters of classroom learning). Our experiments indicate that early adult learning is characterized by a general tendency to focus on lexical cues because of their physical salience in the input and their psychological salience resulting from their simplicity of form-function mapping and from learners' prior first language knowledge. Later, attention to verbal morphology is modulated by cue complexity and language experience: Acquisition is better in cases of cues of lesser complexity, speakers of morphologically rich native languages, and longer periods of study. Finally, instructional practices that emphasize morphological cues by means either of preexposure or typographical enhancement increase attention to inflections, thus blocking reliance on adverbial cues. |
Ralf Engbert; André Krügel Readers use Bayesian estimation for eye movement control Journal Article In: Psychological Science, vol. 21, no. 3, pp. 366–371, 2010. @article{Engbert2010, During reading, saccadic landing positions within words show a pronounced peak close to the word center, with an additional systematic error that is modulated by the distance from the launch site and the length of the target word. Here we show that the systematic variation of fixation positions within words, the saccadic range error, can be derived from Bayesian decision theory. We present the first mathematical model for the saccadic range error; this model makes explicit assumptions regarding underlying visual and oculomotor processes. Analyzing a corpus of eye movement recordings, we obtained results that are consistent with the view that readers use Bayesian estimation for saccade planning. Furthermore, we show that alternative models fail to reproduce the experimental data. |
Paul E. Engelhardt; Fernanda Ferreira; Elena G. Patsenko Pupillometry reveals processing load during spoken language comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 4, pp. 639–645, 2010. @article{Engelhardt2010, This study investigated processing effort by measuring peoples' pupil diameter as they listened to sentences containing a temporary syntactic ambiguity. In the first experiment, we manipulated prosody. The results showed that when prosodic structure conflicted with syntactic structure, pupil diameter reliably increased. In the second experiment, we manipulated both prosody and visual context. The results showed that when visual context was consistent with the correct interpretation, prosody had very little effect on processing effort. However, when visual context was inconsistent with the correct interpretation, prosody had a large effect on processing effort. The interaction between visual context and prosody shows that visual context has an effect on online processing and that it can modulate the influence of linguistic sources of information, such as prosody. Pupillometry is a sensitive measure of processing effort during spoken language comprehension. |
Heather J. Ferguson; Christoph Scheepers; Anthony J. Sanford Expectations in counterfactual and theory of mind reasoning Journal Article In: Language and Cognitive Processes, vol. 25, no. 3, pp. 297–346, 2010. @article{Ferguson2010, During language comprehension, information about the world is exchanged and processed. Two essential ingredients of everyday cognition that are employed during language comprehension are the ability to reason counterfactually, and the ability to understand and predict other peoples' behaviour by attributing independent mental states to them (theory of mind). We report two visual-world studies investigating the extent to which the constraints of world knowledge and prior context, as established by a counterfactual (Exp. 1) or a false belief situation (Exp. 2), influence eye movements directed towards objects in a visual field. Proportions of anticipatory eye movements indicated an initial visual bias towards contextually supported referents in both studies. Thus, we propose that when visual information is available to reinforce linguistic input, participants expect a context-relevant continuation. Shortly after the critical word onset, the linguistically supported referent was visually favoured, with counterfactual (but not false belief) contexts revealing a temporal delay in integrating factually inconsistent language input. Results are discussed in relation to accounts of discourse processing and the processing relationship between counterfactual and theory of mind reasoning. Finally, we compare findings across different experimental paradigms and propose a novel cluster-analytic procedure to identify time-windows of interest in visual-world data. |
Ruth Filik; Linda M. Moxey The on-line processing of written irony Journal Article In: Cognition, vol. 116, no. 3, pp. 421–436, 2010. @article{Filik2010, We report an eye-tracking study in which we investigate the on-line processing of written irony. Specifically, participants' eye movements were recorded while they read sentences which were either intended ironically, or non-ironically, and subsequent text which contained pronominal reference to the ironic (or non-ironic) phrase. Results showed longer reading times for ironic comments compared to a non-ironic baseline, suggesting that additional processing was required in ironic compared to non-ironic conditions. Reading times for subsequent pronominal reference indicated that for ironic materials, both the ironic and literal interpretations of the text were equally accessible during on-line language comprehension. This finding is most in-line with predictions of the graded salience hypothesis, which, in conjunction with the retention hypothesis, states that readers represent both the literal and ironic interpretation of an ironic utterance. |
Kevin Fleming; Carole L. Bandy; Matthew O. Kimble Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors Journal Article In: Social Neuroscience, vol. 5, no. 2, pp. 201–220, 2010. @article{Fleming2010, The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance. |
Denise D. J. Grave; Nicola Bruno The effect of the Müller-Lyer illusion on saccades is modulated by spatial predictability and saccadic latency Journal Article In: Experimental Brain Research, vol. 203, no. 4, pp. 671–679, 2010. @article{Grave2010, Studies investigating the effect of visual illusions on saccadic eye movements have provided a wide variety of results. In this study, we test three factors that might explain this variability: the spatial predictability of the stimulus, the duration of the stimulus and the latency of the saccades. Participants made a saccade from one end of a Müller-Lyer figure to the other end. By changing the spatial predictability of the stimulus, we find that the illusion has a clear effect on saccades (16%) when the stimulus is at a highly predictable location. Even stronger effects of the illusion are found when the stimulus location becomes more unpredictable (19-23%). Conversely, manipulating the duration of the stimulus fails to reveal a clear difference in illusion effect. Finally, by computing the illusion effect for different saccadic latencies, we find a maximum illusion effect (about 30%) for very short latencies, which decreases by 7% with every 100 ms latency increase. We conclude that the spatial predictability of the stimulus and saccadic latency influence the effect of the Müller-Lyer illusion on saccades. |
C. Hemptinne; G. R. Barnes; Marcus Missal Influence of previous target motion on anticipatory pursuit deceleration Journal Article In: Experimental Brain Research, vol. 207, no. 3-4, pp. 173–184, 2010. @article{Hemptinne2010, During visual pursuit of a moving target, expected changes in its trajectory often evoke anticipatory smooth pursuit responses. In the present study, we investigated characteristics of anticipatory smooth pursuit decelerations before a change or the end of a target trajectory. Healthy humans had to pursue with the eyes a target moving along a circular path that predictably or unpredictably reversed direction and then retraced its movement back to the starting position. We found that anticipatory eye decelerations were often evoked in temporal expectation of target reversal and of the end of the trajectory. The latency of anticipatory decelerations initiated before target reversal was variable, had poor temporal accuracy and depended on the history of previous trials. Anticipations of the end of the trajectory were more accurate, more precise and were not influenced by previous trials. In this case, subjects probably based their estimate of the end of the trajectory on the duration just experienced before target motion reversal. These results suggest that anticipatory eye decelerations are based on the characteristics of the current or preceding trials depending on the most reliable information available. |
Kurt Debono; Alexander C. Schütz; Miriam Spering; Karl R. Gegenfurtner Receptive fields for smooth pursuit eye movements and motion perception Journal Article In: Vision Research, vol. 50, no. 24, pp. 2729–2739, 2010. @article{Debono2010, Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). |
Adriana M. Degani; Alessander Danna-Dos-Santos; Thomas Robert; Mark L. Latash Kinematic synergies during saccades involving whole-body rotation: A study based on the uncontrolled manifold hypothesis Journal Article In: Human Movement Science, vol. 29, no. 2, pp. 243–258, 2010. @article{Degani2010, We used the framework of the uncontrolled manifold hypothesis to study the coordination of body segments and eye movements in standing persons during the task of shifting the gaze to a target positioned behind the body. The task was performed at a comfortable speed and fast. Multi-segment and head-eye synergies were quantified as co-varied changes in elemental variables (body segment rotations and eye rotation) that stabilized (reduced the across-trials variability of) head rotation in space and gaze trajectory. Head position in space was stabilized by co-varied rotations of body segments prior to the action, during its later stages, and after its completion. The synergy index showed a drop that started prior to the action initiation (anticipatory synergy adjustment) and continued during the phase of quick head rotation. Gaze direction was stabilized only at movement completion and immediately after the saccade at movement initiation under the "fast" instruction. The study documents for the first time anticipatory synergy adjustments during whole-body actions. It shows multi-joint synergies stabilizing head trajectory in space. In contrast, there was no synergy between head and eye rotations during saccades that would achieve a relatively invariant gaze trajectory. |
Francesca Delogu; Francesco Vespignani; Anthony J. Sanford Effects of intensionality on sentence and discourse processing: Evidence from eye-movements Journal Article In: Journal of Memory and Language, vol. 62, no. 4, pp. 352–379, 2010. @article{Delogu2010, Intensional verbs like want select for clausal complements expressing propositions, though they can be perfectly natural when combined with a direct object. There are two interesting phenomena associated with intensional transitive expressions. First, it has been suggested that their interpretation requires enriched compositional operations, similarly to expressions like began the book (e.g., Pustejovsky, 1995). Secondly, when the object position is filled by an indefinite NP, it preferentially receives an unspecific reading, under which definite anaphora is not supported (e.g., Moltmann, 1997). We report three eye-tracking experiments investigating the time-course of processing of sentence pairs like John wanted a beer. The beer was warm. Consistent with the enriched composition hypothesis, results showed that intensional transitive constructions (e.g., wanted a beer) take longer to process than control expressions (e.g., drank/wanted to drink a beer). However, contrary to previous findings, the processing of the continuation sentence appears to be unaffected by whether the definite NP (the beer) can be interpreted as coreferential with the indefinite or not. We interpret the results with respect to accounts of semantic processing relying on the notions of enriched composition and non-actuality implicature. |
T. M. Desrochers; D. Z. Jin; N. D. Goodman; Ann M. Graybiel Optimal habits can develop spontaneously through sensitivity to local cost Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 47, pp. 20512–20517, 2010. @article{Desrochers2010, Habits and rituals are expressed universally across animal species. These behaviors are advantageous in allowing sequential behaviors to be performed without cognitive overload, and appear to rely on neural circuits that are relatively benign but vulnerable to takeover by extreme contexts, neuropsychiatric sequelae, and processes leading to addiction. Reinforcement learning (RL) is thought to underlie the formation of optimal habits. However, this theoretic formulation has principally been tested experimentally in simple stimulus-response tasks with relatively few available responses. We asked whether RL could also account for the emergence of habitual action sequences in realistically complex situations in which no repetitive stimulus-response links were present and in which many response options were present. We exposed naïve macaque monkeys to such experimental conditions by introducing a unique free saccade scan task. Despite the highly uncertain conditions and no instruction, the monkeys developed a succession of stereotypical, self-chosen saccade sequence patterns. Remarkably, these continued to morph for months, long after session-averaged reward and cost (eye movement distance) reached asymptote. Prima facie, these continued behavioral changes appeared to challenge RL. However, trial-by-trial analysis showed that pattern changes on adjacent trials were predicted by lowered cost, and RL simulations that reduced the cost reproduced the monkeys' behavior. Ultimately, the patterns settled into stereotypical saccade sequences that minimized the cost of obtaining the reward on average. These findings suggest that brain mechanisms underlying the emergence of habits, and perhaps unwanted repetitive behaviors in clinical disorders, could follow RL algorithms capturing extremely local explore/exploit tradeoffs. |
Leandro Luigi Di Stasi; Mauro Marchitto; Adoración Antolí; Thierry Baccino; José J. Cañas Approximation of on-line mental workload index in ATC simulated multitasks Journal Article In: Journal of Air Transport Management, vol. 16, no. 6, pp. 330–333, 2010. @article{DiStasi2010, To assess the effects of workload pressures, participants interacted with a modified version of air traffic control simulated tasks requiring different levels of cognitive resources. Changes in mental workload between the levels were evaluated multidimensionally using a subjective rating, performance in a secondary task, and other behavioural indices. Saccadic movements were measured using a video-based eye tracking system. The Wickens multiple resource model is used as a theoretical reference framework. Saccadic peak velocity decreases with increasing cognitive load, in agreement with subjective test scores and performance data. This demonstrates that saccadic peak velocity is sensitive to variations in mental workload during ecologically valid tasks. |
Leandro Luigi Di Stasi; Rebekka Renner; Peggy Staehr; Jens R. Helmert; Boris M. Velichkovsky; José J. Cañas; Andrés Catena; Sebastian Pannasch Saccadic Peak Velocity Sensitivity to Variations in Mental Workload Journal Article In: Aviation Space and Environmental Medicine, vol. 81, no. 4, pp. 413–417, 2010. @article{DiStasi2010a, Introduction: For research and applications in the field of (neuro)ergonomics, it is of increasing importance to have reliable methods for measuring mental workload. In the present study we examined the hypothesis that saccadic eye movements can be used for an online assessment of mental workload. Methods: The saccadic main sequence (amplitude, duration, and peak velocity) was used as a diagnostic measure of mental workload in a virtual driving task with three complexity levels. We tested 18 drivers in the SIRCA driving simulator while their eye movements were recorded. The Wickens' multiple resources model was used as theoretical framework. Changes in mental workload between the complexity levels were evaluated multidimensionally, using subjective rating, performance in a secondary task, and other behavioral indices. Results: Saccadic peak velocity decreased (7.2°/s) as the mental workload increased, as measured by mental workload test scores (15.2 points) and the increase of the reaction time on the secondary task (46 ms). Discussion: Saccadic peak velocity is affected by variations in mental workload during ecologically valid tasks. We conclude that saccadic peak velocity could be a useful diagnostic index for the assessment of operators' mental workload and attentional state in hazardous environments. |
M. Dyer Diehl; Peter E. Pidcoe The influence of gaze stabilization and fixation on stepping reactions in younger and older adults Journal Article In: Journal of Geriatric Physical Therapy, vol. 33, no. 1, pp. 19–25, 2010. @article{Diehl2010, PURPOSE: To date, there has been little evidence to suggest the importance of foveal viewing versus peripheral retina viewing when trying to recover from a perturbation. The purposes of this investigation were to (1) determine whether a visual target can be stabilized on the fovea during a perturbation, (2) determine whether stepping responses following a perturbation are influenced by foveal fixation, and (3) compare gaze stability and stepping responses between young and aging adults. MATERIALS/METHODS: Ten young adults and 10 aging adults were asked to wear an eye-tracking device linked to a kinematic tracking system during perturbations. Perturbations were delivered under 2 conditions: control (no instructions regarding gaze location were given) and earth-fixed (EF) (subjects were asked to fixate gaze on an EF target). Stepping responses were recorded via force plates. Gaze stability, reported as percent foveal fixation (% FF), was calculated from eye-tracking data. Step latencies (SLs) were computed from force plate data. A 2 x 2 analysis of variance was used to assess statistical significance between groups. For the young and aging adults, linear correlations were made to identify relationships between % FF and SL. RESULTS: For each condition, aging adults took longer to initiate a step (control |
Steve DiPaola; Caitlin Riebe; James T. Enns Rembrandt's textural agency: A shared perspective in visual art and science Journal Article In: Leonardo, vol. 43, no. 2, pp. 145–151, 2010. @article{Dipaola2010, This interdisciplinary paper hypothesizes that Rembrandt developed new painterly techniques — novel to the early modern period — in order to engage and direct the gaze of the observer. Though these methods were not based on scientific evidence at the time, we show that they nonetheless are consistent with a contemporary understanding of human vision. Here we propose that artists in the late ‘early modern' period developed the technique of textural agency — involving selective variation in image detail — to guide the observer's eye and thereby influence the viewing experience. The paper begins by establishing the well-known use of textural agency among modern portrait artists, before considering the possibility that Rembrandt developed these techniques in his late portraits in reaction to his Italian contemporaries. A final section brings the argument full circle, with the presentation of laboratory evidence that Rembrandt's techniques indeed guide the modern viewer's eye in the way we propose. |
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson Watching the hourglass: Eye tracking reveals men's appreciation of the female form Journal Article In: Human Nature, vol. 21, no. 4, pp. 355–370, 2010. @article{Dixson2010, Eye-tracking techniques were used to measure men's attention to back-posed and front-posed images of women varying in waist-to-hip ratio (WHR). Irrespective of body pose, men rated images with a 0.7 WHR as most attractive. For back-posed images, initial visual fixations (occurring within 200 milliseconds of commencement of the eye-tracking session) most frequently involved the midriff. Numbers of fixations and dwell times throughout each of the five-second viewing sessions were greatest for the midriff and buttocks. By contrast, visual attention to front-posed images (first fixations, numbers of fixations, and dwell times) mainly involved the breasts, with attention shifting more to the midriff of images with a higher WHR. This report is the first to compare men's eye-tracking responses to back-posed and front-posed images of the female body. Results show the importance of the female midriff and of WHR upon men's attractiveness judgments, especially when viewing back-posed images. |
Mieke Donk; Leroy Soesman Salience is only briefly represented: Evidence from probe-detection performance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 2, pp. 286–302, 2010. @article{Donk2010, Salient objects in the visual field tend to capture attention. The present study aimed to examine the time-course of salience effects using a probe-detection task. Eight experiments investigated how the salience of different orientation singletons affected probe reaction time as a function of stimulus onset asynchrony (SOA) between the presentation of a singleton display and a probe display. The results demonstrate that salience consistently affected probe reaction time at the shortest SOA. The effect of salience disappeared as SOA increased. These results suggest that contrary to the assumption of major theories on visual selection, salience is transiently represented in our visual system allowing the effects of salience on attentional selection to be only short-lived. |
Michael Dorr; T. Martinetz; Karl R. Gegenfurtner; E. Barth Variability of eye movements when viewing dynamic natural scenes Journal Article In: Journal of Vision, vol. 10, no. 10, pp. 1–17, 2010. @article{Dorr2010, How similar are the eye movement patterns of different subjects when free viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability usually was much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content as the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimuli types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze. |
Denis Drieghe; Alexander Pollatsek; Barbara J. Juhasz; Keith Rayner Parafoveal processing during reading is reduced across a morphological boundary Journal Article In: Cognition, vol. 116, no. 1, pp. 136–142, 2010. @article{Drieghe2010, A boundary change manipulation was implemented within a monomorphemic word (e.g., fountaom as a preview for fountain), where parallel processing should occur given adequate visual acuity, and within an unspaced compound (bathroan as a preview for bathroom), where some serial processing of the constituents is likely. Consistent with that hypothesis, there was no effect of the preview manipulation on fixation time on the 1st constituent of the compound, whereas there was on the corresponding letters of the monomorphemic word. There was also a larger preview disruption on gaze duration on the whole monomorphemic word than on the compound, suggesting more parallel processing within monomorphemic words. |
Jacob Duijnhouwer; Bart Krekelberg; Albert V. van den Berg; Richard J. A. van Wezel Temporal integration of focus position signal during compensation for pursuit in optic flow Journal Article In: Journal of Vision, vol. 10, no. 14, pp. 1–15, 2010. @article{Duijnhouwer2010, Observer translation results in optic flow that specifies heading. Concurrent smooth pursuit causes distortion of the retinal flow pattern for which the visual system compensates. The distortion and its perceptual compensation are usually modeled in terms of instantaneous velocities. However, apart from adding a velocity to the flow field, pursuit also incrementally changes the direction of gaze. The effect of gaze displacement on optic flow perception has received little attention. Here we separated the effects of velocity and gaze displacement by measuring the perceived two-dimensional focus position of rotating flow patterns during pursuit. Such stimuli are useful in the current context because the two effects work in orthogonal directions. As expected, the instantaneous pursuit velocity shifted the perceived focus orthogonally to the pursuit direction. Additionally, the focus was mislocalized in the direction of the pursuit. Experiments that manipulated the presentation duration, flow speed, and uncertainty of the focus location supported the idea that the latter component of mislocalization resulted from temporal integration of the retinal trajectory of the focus. Finally, a comparison of the shift magnitudes obtained in conditions with and without pursuit (but with similar retinal stimulation) suggested that the compensation for both effects uses extraretinal information. |
Peter J. Etchells; Christopher P. Benton; Casimir J. H. Ludwig; Iain D. Gilchrist The target velocity integration function for saccades Journal Article In: Journal of Vision, vol. 10, no. 6, pp. 1–14, 2010. @article{Etchells2010, Interacting with a dynamic environment calls for close coordination between the timing and direction of motor behaviors. Accurate motor behavior requires the system to predict where the target for action will be, both when action planning is complete and when the action is executed. In the current study, we investigate the time course of velocity information accrual in the period leading up to a saccade toward a moving object. In two experiments, observers were asked to generate saccades to one of two moving targets. Experiment 1 looks at the accuracy of saccades to targets that have trial-by-trial variations in velocity. We show that the pattern of errors in saccade landing position is best explained by proposing that trial-by-trial target velocity is taken into account in saccade planning. In Experiment 2, target velocity stepped up or down after a variable interval after the movement cue. The extent to which the movement endpoint reflects pre- or post-step velocity can be used to identify the temporal velocity integration window; we show that the system takes a temporally blurred snapshot of target velocity centered ∼200 ms before saccade onset. This estimate is used to generate a dynamically updated prediction of the target's likely future location. |
David R. Evens; Casimir J. H. Ludwig Dual-task costs and benefits in anti-saccade performance Journal Article In: Experimental Brain Research, vol. 205, pp. 545–557, 2010. @article{Evens2010, It has been reported that anti-saccade performance is facilitated by diverting attention through a secondary task (Kristjánsson et al. in Nat Neurosci 4:1037–1042, 2001). This finding supports the idea that the withdrawal of resources that would be taken up by the erroneous movement plan makes it easier to overcome the tendency to look towards the imperative stimulus. We first report an attempt to replicate this finding. Four observers were extensively tested in an anti-saccade paradigm. The luminance of the fixation point or peripheral target was briefly increased or decreased. In the dual-task condition observers signalled the direction of the luminance change. In the single-task condition the discrimination stimulus was presented, but could be ignored as it required no response. We found an overall dual-task cost in anti-saccade latency, although some facilitation was observed in the accuracy. The discrepancy between the two studies was attributed to performance in the single-task condition. For latency facilitation to occur, performance should not be affected by the discrimination stimulus when it is task-irrelevant. We show that naive, untrained observers could not ignore this irrelevant visual event. If it occurred before the imperative movement signal, the event acted as a warning signal, speeding up anti-saccade generation. If it occurred after the imperative movement stimulus, it acted as a remote distractor and interfered with the generation of the correct movement. Under normal circumstances, these basic oculomotor effects operate in both single- and dual-task conditions. An overall dual-task cost rides on top of this latency modulation. This overall cost is best accounted for by an increase in the response criterion for saccade generation in the more demanding dual-task condition. |
Simon Farrell; Casimir J. H. Ludwig; Lucy A. Ellis; Iain D. Gilchrist Influence of environmental statistics on inhibition of saccadic return Journal Article In: Proceedings of the National Academy of Sciences, vol. 107, no. 2, pp. 929–934, 2010. @article{Farrell2010, Initiating an eye movement is slowed if the saccade is directed to a location that has been fixated in the recent past. We show that this inhibitory effect is modulated by the temporal statistics of the environment: If a return location is likely to become behaviorally relevant, inhibition of return is absent. By fitting an accumulator model of saccadic decision-making, we show that the inhibitory effect and the sensitivity to local statistics can be dissociated in their effects on the rate of accumulation of evidence, and the threshold controlling the amount of evidence needed to generate a saccade. |
Cara R. Featherstone; Patrick Sturt Because there was a cause for concern: An investigation into a word-specific prediction account of the implicit-causality effect Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 1, pp. 3–15, 2010. @article{Featherstone2010, In Koornneef and Van Berkum's (2006) eye-tracking study of implicit causality (Caramazza, Grober, Garvey, & Yates, 1977), midsentence delays were observed in the processing of sentences such as "David blamed Linda because she(bias-congruent)/he(bias-incongruent) . . . " when the pronoun following because was incongruent with the bias of the implicit-causality verb. The authors suggested that these immediate delays could be attributed to participants predicting a bias-congruent pronoun after because. According to this explanation, any other word placed after because should cause processing delays. The present investigation aimed to test this explanation by using sentences of the form "David blamed Linda because she(bias-congruent)/he(bias-incongruent)/there(bias-neutral) . . . ". Since significant immediate delays were observed in sentences containing a bias-incongruent pronoun (relative to a bias-congruent pronoun) but not in sentences containing there, the results of this study support an immediate integration effect but pose a problem to the word-specific prediction account of the implicit causality effect. |
Naotoshi Abekawa; Hiroaki Gomi Spatial coincidence of intentional actions modulates an implicit visuomotor control Journal Article In: Journal of Neurophysiology, vol. 103, no. 5, pp. 2717–2727, 2010. @article{Abekawa2010, We investigated a visuomotor mechanism contributing to reach correction: the manual following response (MFR), which is a quick response to background visual motion that frequently occurs as a reafference when the body moves. Although several visual specificities of the MFR have been elucidated, the functional and computational mechanisms of its motor coordination remain unclear mainly because it involves complex relationships among gaze, reaching target, and visual stimuli. To directly explore how these factors interact in the MFR, we assessed the impact of spatial coincidences among gaze, arm reaching, and visual motion on the MFR. When gaze location was displaced from the reaching target with an identical visual motion kept on the retina, the amplitude of the MFR significantly decreased as displacement increased. A factorial manipulation of gaze, reaching-target, and visual motion locations showed that the response decrease is due to the spatial separation between gaze and reaching target but is not due to the spatial separation between visual motion and reaching target. Additionally, elimination of visual motion around the fovea attenuated the MFR. The effects of these spatial coincidences on the MFR are completely different from their effects on the perceptual mislocalization of targets caused by visual motion. Furthermore, we found clear differences between the modulation sensitivities of the MFR and the ocular following response to spatial mismatch between gaze and reaching locations. These results suggest that the MFR modulation observed in our experiment is not due to changes in visual interaction between target and visual motion or to modulation of motion sensitivity in early visual processing. Instead, the motor command of the MFR appears to be modulated by the spatial relationship between gaze and reaching. |
Alper Açık; Adjmal Sarwary; Rafael Schultze-Kraft; Selim Onat; Peter König Developmental changes in natural viewing behavior: Bottom-up and top-down differences between children, young adults and older adults Journal Article In: Frontiers in Psychology, vol. 1, pp. 207, 2010. @article{Acik2010, Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as viewing guided by local image features (color, luminance contrast, etc.), might be prominent but later overshadowed by more top-down processing. Moreover, with decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. Fixation discrimination performance of local feature values dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adult viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and older adults regarding the effects of active viewing on feature-related viewing: Explorativeness correlated with feature-related viewing negatively in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing. |
Naseem Al-Aidroos; Jay Pratt Top-down control in time and space: Evidence from saccadic latencies and trajectories Journal Article In: Visual Cognition, vol. 18, no. 1, pp. 26–49, 2010. @article{AlAidroos2010, Visual distractors disrupt the production of saccadic eye movements temporally, by increasing saccade latency, and spatially, by biasing the trajectory of the movement. The present research investigated the extent to which top-down control can be exerted over these two forms of oculomotor capture. In two experiments, people were instructed to make target directed saccades in the presence of distractors, and temporal and spatial capture were assessed simultaneously by measuring saccade latency and saccade trajectory curvature, respectively. In Experiment 1, an attentional control set manipulation was employed, resulting in the elimination of temporal capture, but only an attenuation of spatial capture. In Experiment 2, foreknowledge of the target location caused an attenuation of temporal capture but an enhancement of spatial capture. These results suggest that, whereas temporal capture is contingent on top-down control, the spatial component of capture is stimulus-driven. |
Denis Alamargot; Sylvie Plane; Eric Lambert; David Chesnet In: Reading and Writing, vol. 23, no. 7, pp. 853–888, 2010. @article{Alamargot2010, This study was designed to enhance our understanding of the changing relationship between low- and high-level writing processes in the course of development. A dual description of writing processes was undertaken, based on (a) the respective time courses of these processes, as assessed by an analysis of eye and pen movements, and (b) the semantic characteristics of the writers' scripts. To conduct a more fine-grained description of processing strategies, a "case study" approach was adopted, whereby a comprehensive range of measures was used to assess processes within five writers with different levels of expertise. The task was to continue writing a story based on an excerpt from a source document (incipit). The main results showed two developmental patterns linked to expertise: (a) a gradual acceleration in low- and high-level processing (pauses, flow), associated with (b) changes in the way the previous text was (re)read. |
Albert Moukheiber; Gilles Rautureau; Fernando Perez-Diaz; Robert Soussignan; Stéphanie Dubal; Roland Jouvent; Antoine Pelissolo Gaze avoidance in social phobia: Objective measure and correlates Journal Article In: Behaviour Research and Therapy, vol. 48, pp. 147–151, 2010. @article{Moukheiber2010, Gaze aversion could be a central component of the physiopathology of social phobia. The emotions of the people interacting with a person with social phobia seem to modulate this gaze aversion. Our research consists of testing gaze aversion in subjects with social phobia compared to control subjects viewing different emotional faces of men and women, using an eye tracker. Twenty-six subjects with DSM-IV social phobia were recruited. Twenty-four age- and sex-matched healthy subjects constituted the control group. We looked at the number of fixations and the dwell time in the eye area of the pictures. The main findings of this research are: a significantly lower number of fixations and shorter dwell time in patients with social phobia, both overall and for the 6 basic emotions, independently of gender; and a significant correlation between the severity of the phobia and the degree of gaze avoidance. However, no difference in gaze avoidance according to subject/picture gender matching was observed. These findings confirm and extend some previous results, and suggest that eye avoidance is a robust marker of social phobia, which could be used as a behavioral phenotype for brain imagery studies on this disorder. |
Linda M. Moxey; Ruth Filik The effects of character desire on focus patterns and pronominal reference following quantified statements Journal Article In: Discourse Processes, vol. 47, no. 7, pp. 588–616, 2010. @article{Moxey2010, Following a positively quantified statement such as, "A few of the children sang the chorus," a plural pronoun is likely to refer to the set of children who sang (the reference set). Negative natural language quantifiers (NLQs) such as few or not many, on the other hand, are more likely to be followed by reference to the complement set of children who did not sing. According to the presupposition-denial account of negative NLQs, the complement set is available for pronominal reference following these expressions because they imply a shortfall between the amount denoted and a presupposed larger amount. Focus on the shortfall set is effectively focus on the complement set. Previous support for this account is largely based on a series of experiments which show that complement set focus is also possible following positive NLQs if a previously mentioned character expects a larger amount, thereby creating a shortfall between the character's expectations and the amount denoted by the NLQ. It is not clear, however, whether the shortfall implied by a negative NLQ must be based on expectation per se, or whether the NLQ-based implication is more general. This article reports 3 experiments which show that a shortfall can also be created between an NLQ and a character's desire for a particular quantity. Results suggest that the implication of negative NLQs that a larger amount is denied need not be based on expectation, but may be less specific. |
Sven Mucke; Velitchko Manahilov; Niall C. Strang; Dirk Seidel; Lyle S. Gray; Uma Shahani Investigating the mechanisms that may underlie the reduction in contrast sensitivity during dynamic accommodation Journal Article In: Journal of Vision, vol. 10, no. 5, pp. 1–14, 2010. @article{Mucke2010, Head and eye movements, together with ocular accommodation, enable us to explore our visual environment. The stability of this environment is maintained during saccadic and vergence eye movements due to reduced contrast sensitivity to low spatial frequency information. Our recent work has revealed a new type of selective reduction of contrast sensitivity to high spatial frequency patterns during the fast phase of dynamic accommodation responses compared with steady-state accommodation. Here we report data that show a strong correlation between the reduction in contrast sensitivity during dynamic accommodation and the velocity of accommodation responses elicited by ramp changes in accommodative demand. The results were accounted for by a contrast gain control model of a cortical mechanism for contrast detection during dynamic ocular accommodation. Sensitivity, however, was not altered during attempted accommodation responses in the absence of crystalline-lens changes due to cycloplegia. These findings suggest that contrast sensitivity reduction during dynamic accommodation may be a consequence of cortical inhibition driven by proprioceptive-like signals originating within the ciliary muscle, rather than by corollary discharge signals elicited simultaneously with the motor command to the ciliary muscle. |
Manon Mulckhuyse; Jan Theeuwes Unconscious cueing effects in saccadic eye movements - Facilitation and inhibition in temporal and nasal hemifield Journal Article In: Vision Research, vol. 50, no. 6, pp. 606–613, 2010. @article{Mulckhuyse2010, The current study investigated whether subliminal spatial cues can affect the oculomotor system. The experiment was performed under monocular viewing conditions, allowing us to examine behavioral temporal-nasal hemifield asymmetries. These behavioral asymmetries may arise from an anatomical asymmetry in the retinotectal pathway. The results show that even though our spatial cues were not consciously perceived, they did affect the oculomotor system: relative to the neutral condition, saccade latencies to the validly cued location were shorter and saccade latencies to the invalidly cued location were longer. Although we did not observe an overall inhibition of return (IOR) effect, there was a reliable effect of hemifield on IOR for those observers who showed an overall IOR effect. More specifically, consistent with the notion that processing via the retinotectal pathway is stronger in the temporal hemifield than in the nasal hemifield, we found an IOR effect for cues presented in the temporal hemifield but not for cues presented in the nasal hemifield. We conclude that unconsciously processed spatial cues can affect the oculomotor system. In addition, the observed behavioral temporal-nasal hemifield asymmetry is consistent with retinotectal mediation. |
Jong-yoon Myung; Sheila E. Blumstein; Eiling Yee; Julie C. Sedivy; Sharon L. Thompson-Schill; Laurel J. Buxbaum Impaired access to manipulation features in Apraxia: Evidence from eyetracking and semantic judgment tasks Journal Article In: Brain and Language, vol. 112, no. 2, pp. 101–112, 2010. @article{Myung2010, Apraxic patients are known for deficits in producing and comprehending skilled movements. Two experiments tested their implicit and explicit knowledge about manipulable objects in order to examine whether such deficits accompany impairment in the conceptual representation of manipulation features. An eyetracking method was used to test implicit knowledge (Experiment 1): participants viewed a visual display on a computer screen and touched the corresponding object in response to an auditory input. Manipulation relationship among objects was not task-relevant, and thus the assessment of manipulation knowledge was implicit. Like the non-apraxic control patients, apraxic patients fixated on an object picture (e.g., "typewriter") that was manipulation-related to a target word (e.g., "piano") significantly more often than an unrelated object picture (e.g., "bucket") as well as a visual control (e.g., "couch"). However, this effect emerged later than in the non-apraxic control group, suggesting impaired access to manipulation features in the apraxic group. In the semantic judgment task (Experiment 2), participants were asked to make an explicit judgment about the relationship of picture triplets of manipulable objects by choosing the pair with similar manipulation features. Apraxic patients performed significantly worse on this task than the non-apraxic control group. Both implicit and explicit measures of manipulation knowledge show that apraxia is not merely a perceptuomotor deficit of skilled movements, but results in a concomitant impairment in representing manipulation features and accessing them for cognitive processing. |
Anna Oleksiak; Miroslawa Mańko; Albert Postma; Ineke J. M. Ham; Albert V. Berg; Richard J. A. Wezel Distance estimation is influenced by encoding conditions Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9918, 2010. @article{Oleksiak2010, Background: It is well established that foveating a behaviorally relevant part of the visual field improves localization performance as compared to the situation where the gaze is directed elsewhere. Reduced localization performance in peripheral encoding conditions has been attributed to an eccentricity-dependent increase in positional uncertainty. It is not known, however, whether and how foveal and peripheral encoding conditions influence spatial interval estimation. In this study we compare observers' estimates of the distance between two co-planar dots in the condition where they foveate the two sample dots and where they fixate a central dot while viewing the sample dots peripherally. Methodology/Principal Findings: Observers were required to reproduce, after a short delay, the distance between two sample dots based on a stationary reference dot and a movable mouse pointer. When both sample dots are foveated, we find that the distance estimation error is small but consistently increases with the dot-separation size. In comparison, distance judgment in the peripheral encoding condition is significantly overestimated for smaller separations and becomes similar to the performance in foveal trials for distances from 10 to 16 degrees. Conclusions/Significance: Although we find improved accuracy of distance estimation in the foveal condition, the fact that the difference is related to the reduction of the estimation bias present in the peripheral condition challenges the simple account of reducing the eccentricity-dependent positional uncertainty. Instead, we present evidence for an explanation in terms of neuronal populations activated by the two sample dots and their inhibitory interactions under different visual encoding conditions. We support our claims with simulations that take into account receptive field size differences between the two encoding conditions. |
Jean-Jacques Orban de Xivry; Sébastien Coppe; Philippe Lefèvre; Marcus Missal Biological motion drives perception and action Journal Article In: Journal of Vision, vol. 10, no. 2, pp. 1–11, 2010. @article{OrbandeXivry2010, Presenting a few dots moving coherently on a screen can yield the perception of human motion. This perception is based on a specific network that is segregated from the traditional motion perception network and that includes the superior temporal sulcus (STS). In this study, we investigated whether this biological motion perception network could influence the smooth pursuit response evoked by a point-light walker. We found that smooth eye velocity during pursuit initiation was larger in response to the point-light walker than in response to one of its scrambled versions, to an inverted walker, or to a single dot stimulus. In addition, we assessed the proximity to the point-light walker (i.e., the amount of information about direction contained in the scrambled stimulus and extracted from local motion cues of biological motion) of each of our scrambled stimuli in a motion direction discrimination task with manual responses, and found that the smooth pursuit response evoked by those stimuli moving across the screen was modulated by their proximity to the walker. Therefore, we conclude that biological motion facilitates smooth pursuit eye movements, and hence influences both perception and action. |
José P. Ossandón; Andrea Helo; Rodrigo Montefusco-Siegmund; Pedro E. Maldonado Superposition model predicts EEG occipital activity during free viewing of natural scenes Journal Article In: Journal of Neuroscience, vol. 30, no. 13, pp. 4787–4795, 2010. @article{Ossandon2010, Visual event-related potentials (ERPs) produced by a stimulus are thought to reflect either an increase of synchronized activity or a phase realignment of ongoing oscillatory activity, with both mechanisms sharing the assumption that ERPs are independent of the current state of the brain at the time of stimulation. In natural viewing, however, visual inputs occur one after another at specific subject-paced intervals through unconstrained eye movements. We conjecture that during natural viewing, ERPs generated after each fixation are better explained by a superposition of ongoing oscillatory activity related to the processing of previous fixations, with new activity elicited by the visual input at the current fixation. We examined the electroencephalography (EEG) signals that occur in humans at the onset of each visual fixation, both while subjects freely viewed natural scenes and while they viewed a black or gray background. We found that the fixation ERPs show visual components that are absent when subjects move their eyes on a homogeneous gray or black screen. Single-trial EEG signals that comprise the ERP are predicted more accurately by a model of superposition than by either phase resetting or the addition of evoked responses and stimulus-independent noise. The superposition of ongoing oscillatory activity and the visually evoked response results in a modification of the ongoing oscillation phase. The results presented suggest that the observed EEG signals reflect changes occurring in a common neuronal substrate rather than a simple summation at the scalp of signals from independent sources. |
Mathias Abegg; Hyung Lee; Jason J. S. Barton Systematic diagonal and vertical errors in antisaccades and memory-guided saccades Journal Article In: Journal of Eye Movement Research, vol. 3, no. 3, pp. 1–10, 2010. @article{Abegg2010, Studies of memory-guided saccades in monkeys show an upward bias, while studies of antisaccades in humans show a diagonal effect, a deviation of endpoints toward the 45° diagonal. To determine if these two different spatial biases are specific to different types of saccades, we studied prosaccades, antisaccades and memory-guided saccades in humans. The diagonal effect occurred not with prosaccades but with antisaccades and memory-guided saccades with long intervals, consistent with hypotheses that it originates in computations of goal location under conditions of uncertainty. There was a small upward bias for memory-guided saccades but not prosaccades or antisaccades. Thus this bias is not a general effect of target uncertainty but a property specific to memory-guided saccades. |
Mathias Abegg; Amadeo R. Rodriguez; Hyung Lee; Jason J. S. Barton ‘Alternate-goal bias' in antisaccades and the influence of expectation Journal Article In: Experimental Brain Research, vol. 203, no. 3, pp. 553–562, 2010. @article{Abegg2010a, Saccadic performance depends on the requirements of the current trial, but also may be influenced by other trials in the same experiment. This effect of trial context has been investigated most for saccadic error rate and reaction time but seldom for the positional accuracy of saccadic landing points. We investigated whether the direction of saccades towards one goal is affected by the location of a second goal used in other trials in the same experimental block. In our first experiment, landing points ('endpoints') of antisaccades but not prosaccades were shifted towards the location of the alternate goal. This spatial bias decreased with increasing angular separation between the current and alternative goals. In a second experiment, we explored whether expectancy about the goal location was responsible for the biasing of the saccadic endpoint. For this, we used a condition where the saccadic goal randomly changed from one trial to the next between locations on, above or below the horizontal meridian. We modulated the prior probability of the alternate-goal location by showing cues prior to stimulus onset. The results showed that expectation about the possible positions of the saccadic goal is sufficient to bias saccadic endpoints and can account for at least part of this phenomenon of 'alternate-goal bias'. |
Anish R. Mitra; Mathias Abegg; Jayalakshmi Viswanathan; Jason J. S. Barton Line bisection in simulated homonymous hemianopia Journal Article In: Neuropsychologia, vol. 48, no. 6, pp. 1742–1749, 2010. @article{Mitra2010, Hemianopic patients make a systematic error in line bisection, showing a contra-lesional bias towards their blind side, which is the opposite of that in hemineglect patients. This error has been attributed variously to the visual field defect, to long-term strategic adaptation, or to independent effects of damage to extrastriate cortex. To determine if hemianopic bisection error can occur without the latter two factors, we studied line bisection in healthy subjects with simulated homonymous hemianopia using a gaze-contingent display, with different line-lengths, and with or without markers at both ends of the lines. Simulated homonymous hemianopia did induce a contra-lesional bisection error and this was associated with increased fixations towards the blind field. This error was found with end-marked lines and was greater with very long lines. In a second experiment we showed that eccentric fixation alone produces a similar bisection error and eliminates the effect of line-end markers. We conclude that a homonymous hemianopic field defect alone is sufficient to induce both a contra-lesional line bisection error and previously described alterations in fixation distribution, and does not require long-term adaptation or extrastriate damage. |