All EyeLink Publications
Listed below are all 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024). You can search the publication library using keywords such as "visual search", "smooth pursuit", "Parkinson's", etc. You can also search by individual author name. Eye-tracking research grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2009 |
Jinmian Yang; Suiping Wang; Yimin Xu; Keith Rayner Do Chinese readers obtain preview benefit from word n + 2? Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 4, pp. 1192–1204, 2009. @article{Yang2009d, The boundary paradigm (K. Rayner, 1975) was used to determine the extent to which Chinese readers obtain information from the right of fixation during reading. As characters are the basic visual unit in written Chinese, they were used as targets in Experiment 1 to examine whether readers obtain preview information from character n + 1 and character n + 2. The results from Experiment 1 suggest they do. In Experiment 2, 2-character target words were used to determine whether readers obtain preview information from word n + 2 as well as word n + 1. Robust preview effects were obtained for word n + 1. There was also evidence from gaze duration (but not first fixation duration), suggesting preview effects for word n + 2. Moreover, there was evidence for parafoveal-on-foveal effects in Chinese reading in both experiments. Implications of these results for models of eye movement control are discussed. |
Shun-Nan Yang Effects of gaze-contingent text changes on fixation duration in reading Journal Article In: Vision Research, vol. 49, no. 23, pp. 2843–2855, 2009. @article{Yang2009a, In reading, a text change during an eye fixation can increase the duration of that fixation. This increased fixation duration could be the result of disrupted text processing, or from the effect of perceiving the brief visual change (a visual transient). The present study was designed to test those two hypotheses. Subjects read multiple-line text while their eye movements were monitored. During randomly selected saccades, the text was masked with an alternate page, which was then replaced with a second alternate page, 75 or 150 ms after the onset of the subsequent (critical) fixation. The effect of the initial masking page, the text change during fixation, and the content of the second page on the likelihood of saccade initiation during the critical fixation, was measured. Results showed that a text change during fixation resulted in similar bilateral (forward and regressive) saccade suppression regardless of the nature of the first and second pages, or the timing of text change. This result likely reflects the effect of a low-level visual transient caused by text change. In addition, there was delay effect reflecting the content of the initial masking. How the suppression dissipated after text change depended on the nature of the first and second pages. These effects are attributed to high-level text processing. The present results suggest that in reading, visual and cognitive processes both can disrupt saccade initiation. The combination of processing difficulty and visually-induced saccade suppression is responsible for the change in fixation duration when gaze-contingent display change is utilized. Therefore, it is prudent to consider both factors when interpreting the effect of text change on eye movement patterns. |
Shun-nan Yang; Yu-chi Tai; Hannu Laukkanen; James Sheedy Effects of ocular transverse chromatic aberration on near foveal letter recognition Journal Article In: Vision Research, vol. 49, no. 23, pp. 2881–2890, 2009. @article{Yang2009e, Transverse chromatic aberration (TCA) smears retinal images of peripheral stimuli. In reading, text information is extracted from both foveal and near fovea, where TCA magnitude is relatively small and variable. The present study investigated whether TCA significantly affects near foveal letter identification. Subjects were briefly presented a string of five letters centered one degree of visual angle to the left or right of fixation. They indicated whether the middle letter was the same as a comparison letter subsequently presented. Letter strings were rendered with a reddish fringe on the left edge of each letter and a bluish fringe on the right edge, consistent with expected left periphery TCA, or with the opposite fringe consistent with expected right periphery TCA. Effect of the color fringing on letter recognition was measured by comparing the response accuracy for fringed and non-fringed stimuli. Effects of lateral interference were examined by manipulating inter-letter spacing and similarity of neighboring letters. Results demonstrated significantly improved response accuracy with the color fringe opposite to the expected TCA, but decreased accuracy when consistent with it. Narrower letter spacing exacerbated the effect of the color fringe, whereas letter similarity did not. Our results suggest that TCA significantly reduces the ability to recognize letters in the near fovea by impeding recognition of individual letters and by enhancing lateral interference between letters. |
Kiyomi Yatabe; Martin J. Pickering; Scott A. McDonald Lexical processing during saccades in text comprehension Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 62–66, 2009. @article{Yatabe2009, We asked whether people process words during saccades when reading sentences. Irwin (1998) demonstrated that such processing occurs when words are presented in isolation. In our experiment, participants read part of a sentence ending in a high- or low-frequency target word and then made a long (40 degrees) or short (10 degrees) saccade to the rest of the sentence. We found a frequency effect on the target word and the first word after the saccade, but the effect was greater for short than for long saccades. Readers therefore performed more lexical processing during long saccades than during short ones. Hence, lexical processing takes place during saccades in text comprehension. |
Eiling Yee; Eve Overton; Sharon L. Thompson-Schill Looking for meaning: Eye movements are sensitive to overlapping semantic features, not association Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 869–874, 2009. @article{Yee2009, Theories of semantic memory differ in the extent to which relationships among concepts are captured via associative or via semantic relatedness. We examined the contributions of these two factors, using a visual world paradigm in which participants selected the named object from a four-picture display. We controlled for semantic relatedness while manipulating associative strength by using the visual world paradigm's analogue to presenting asymmetrically associated pairs in either their forward or backward associative direction (e.g., ham-eggs vs. eggs-ham). Semantically related objects were preferentially fixated regardless of the direction of presentation (and the effect size was unchanged by presentation direction). However, when pairs were associated but not semantically related (e.g., iceberg-lettuce), associated objects were not preferentially fixated in either direction. These findings lend support to theories in which semantic memory is organized according to semantic relatedness (e.g., distributed models) and suggest that association by itself has little effect on this organization. |
Miao-Hsuan Yen; Ralph Radach; Ovid J. L. Tzeng; Daisy L. Hung; Jie-Li Tsai Early parafoveal processing in reading Chinese sentences Journal Article In: Acta Psychologica, vol. 131, no. 1, pp. 24–33, 2009. @article{Yen2009, The possibility that during Chinese reading information is extracted at the beginning of the current fixation was examined in this study. Twenty-four participants read for comprehension while their eye movements were being recorded. A pretarget-target two-character word pair was embedded in each sentence and target word visibility was manipulated in two time intervals (initial 140 ms or after 140 ms) during pretarget viewing. Substantial beginning- and end-of-fixation preview effects were observed together with beginning-of-fixation effects on the pretarget. Apparently parafoveal information at least at the character level can be extracted relatively early during ongoing fixations. Results are highly relevant for ongoing debates on spatially distributed linguistic processing and address fundamental questions about how the human mind solves the task of reading within the constraints of different writing systems. |
Weilei Yi; Dana Ballard Recognizing behavior in hand-eye coordination patterns Journal Article In: International Journal of Humanoid Robotics, vol. 6, no. 3, pp. 337–359, 2009. @article{Yi2009, Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior such as the moment-to-moment deployment and coordination of hand, head and eye gaze used in complex tasks. We show how this data from human subjects performing a task can be used to program a dynamic Bayes network (DBN) which in turn can be used to recognize new performance instances. As a specific demonstration we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time. |
Gregory J. Zelinsky; Joseph Schmidt An effect of referential scene constraint on search implies scene segmentation Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1004–1028, 2009. @article{Zelinsky2009, Subjects searched aerial images for a UFO target, which appeared hovering over one of five scene regions: Water, fields, foliage, roads, or buildings. Prior to search scene onset, subjects were either told the scene region where the target could be found (specified condition) or not (unspecified condition). Search times were faster and fewer eye movements were needed to acquire targets when the target region was specified. Subjects also distributed their fixations disproportionately in this region and tended to fixate the cued region sooner. We interpret these patterns as evidence for the use of referential scene constraints to partially confine search to a specified scene region. Importantly, this constraint cannot be due to learned associations between the scene and its regions, as these spatial relationships were unpredictable. These findings require the modification of existing theories of scene constraint to include segmentation processes that can rapidly bias search to cued regions. |
Kim Joris Boström; Anne Kathrin Warzecha Ocular following response to sampled motion Journal Article In: Vision Research, vol. 49, no. 13, pp. 1693–1701, 2009. @article{Bostroem2009, We investigate the impact of monitor frame rate on the human ocular following response (OFR) and find that the response latency considerably depends on the frame rate in the range of 80-160 Hz, which is far above the flicker fusion limit. From the lowest to the highest frame rate the latency declines by roughly 10 ms. Moreover, the relationship between response latency and stimulus speed is affected by the frame rate, compensating and even inverting the effect at lower frame rates. In contrast to that, the initial response acceleration is not affected by the frame rate and its expected dependence on stimulus speed remains stable. The nature of these phenomena reveals insights into the neural mechanism of low-level motion detection underlying the ocular following response. |
Christian Boucheny; Georges Pierre Bonneau; Jacques Droulez; Guillaume Thibault; Stephane Ploix A perceptive evaluation of volume rendering techniques Journal Article In: ACM Transactions on Applied Perception, vol. 5, no. 4, pp. 1–24, 2009. @article{Boucheny2009, The display of space filling data is still a challenge for the community of visualization. Direct volume rendering (DVR) is one of the most important techniques developed to achieve direct perception of such volumetric data. It is based on semitransparent representations, where the data are accumulated in a depth-dependent order. However, it produces images that may be difficult to understand, and thus several techniques have been proposed so as to improve its effectiveness, using for instance lighting models or simpler representations (e.g., maximum intensity projection). In this article, we present three perceptual studies that examine how DVR meets its goals, in either static or dynamic context. We show that a static representation is highly ambiguous, even in simple cases, but this can be counterbalanced by use of dynamic cues (i.e., motion parallax) provided that the rendering parameters are correctly tuned. In addition, perspective projections are demonstrated to provide relevant information to disambiguate depth perception in dynamic displays. |
Julie A. Brefczynski-Lewis; Ritobrato Datta; James W. Lewis; Edgar A. DeYoe The topography of visuospatial attention as revealed by a novel visual field mapping technique Journal Article In: Journal of Cognitive Neuroscience, vol. 21, no. 7, pp. 1447–1460, 2009. @article{BrefczynskiLewis2009, Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style." |
Eli Brenner; Jeroen B. J. Smeets Sources of variability in interceptive movements Journal Article In: Experimental Brain Research, vol. 195, no. 1, pp. 117–133, 2009. @article{Brenner2009, In order to successfully intercept a moving target one must be at the right place at the right time. But simply being there is seldom enough. One usually needs to make contact in a certain manner, for instance to hit the target in a certain direction. How this is best achieved depends on the exact task, but to get an idea of what factors may limit performance we asked people to hit a moving virtual disk through a virtual goal, and analysed the spatial and temporal variability in the way in which they did so. We estimated that for our task the standard deviations in timing and spatial accuracy are about 20 ms and 5 mm. Additional variability arises from individual movements being planned slightly differently and being adjusted during execution. We argue that the way that our subjects moved was precisely tailored to the task demands, and that the movement accuracy is not only limited by the muscles and their activation, but also-and probably even mainly-by the resolution of visual perception. |
Leonard A. Breslow; J. Gregory Trafton; Raj M. Ratwani A perceptual process approach to selecting color scales for complex visualizations Journal Article In: Journal of Experimental Psychology: Applied, vol. 15, no. 1, pp. 25–34, 2009. @article{Breslow2009, Previous research has shown that multicolored scales are superior to ordered brightness scales for supporting identification tasks on complex visualizations (categorization, absolute numeric value judgments, etc.), whereas ordered brightness scales are superior for relative comparison tasks (greater/less). We examined the processes by which such tasks are performed. By studying eye movements and by comparing performance on scales of different sizes, we argued that (a) people perform identification tasks by conducting a serial visual search of the legend, whose speed is sensitive to the number of scale colors and the discriminability of the colors; and (b) people perform relative comparison tasks using different processes for multicolored versus brightness scales. With multicolored scales, they perform a parallel search of the legend, whose speed is relatively insensitive to the size of the scale, whereas with brightness scales, people usually directly compare the target colors in the visualization, while making little reference to the legend. Performance of comparisons was relatively robust against increases in scale size, whereas performance of identifications deteriorated markedly, especially with brightness scales, once scale sizes reached 10 colors or more. |
James R. Brockmole; Walter R. Boot Should I stay or should I go? Attentional disengagement from visually unique and unexpected items at fixation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 808–815, 2009. @article{Brockmole2009, Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience. |
Jeroen S. Benjamins; Ignace T. C. Hooge; Jacco C. Elst; Alexander H. Wertheim; Frans A. J. Verstraten Search time critically depends on irrelevant subset size in visual search Journal Article In: Vision Research, vol. 49, pp. 398–406, 2009. @article{Benjamins2009, In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items can be selected that differs in only one feature from target (a 1F set), while another set of items can be ignored that differs in two features from target (a 2F set). We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage 2F non-targets, that have to be ignored, was expected to result in increasingly faster search, since it decreases the size of 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, the search time was longer compared to the search time in other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. Occurrence of longer search times in displays containing 5% 2F non-targets might be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets. |
R. Bibi; Jay A. Edelman The influence of motor training on human express saccade production Journal Article In: Journal of Neurophysiology, vol. 102, no. 6, pp. 3101–3110, 2009. @article{Bibi2009, Express saccadic eye movements are saccades of extremely short latency. In monkey, express saccades have been shown to occur much more frequently when the monkey has been trained to make saccades in a particular direction to targets that appear in predictable locations. Such results suggest that express saccades occur in large number only under highly specific conditions, leading to the view that vector-specific training and motor preparatory processes are required to make an express saccade of a particular magnitude and direction. To evaluate this hypothesis in humans, we trained subjects to make saccades quickly to particular locations and then examined whether the frequency of express saccades depended on training and the number of possible target locations. Training significantly decreased saccade latency and increased express saccade production to both trained and untrained locations. Increasing the number of possible target locations (two vs. eight possible targets) led to only a modest increase of saccade latency. For most subjects, the probability of express saccade occurrence was much higher than that expected if vector-specific movement preparation were necessary for their production. These results suggest that vector-specific motor preparation and vector-specific saccade training are not necessary for express saccade production in humans and that increases in express saccade production are due in part to a facilitation in fixation disengagement or else a general enhancement in the ability of the saccadic system to respond to suddenly appearing visual stimuli. |
Markus Bindemann; Christoph Scheepers; A. Mike Burton Viewpoint and center of gravity affect eye movements to human faces Journal Article In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009. @article{Bindemann2009, In everyday life, human faces are encountered in many different views. Despite this fact, most psychological research has focused on the perception of frontal faces. To address this shortcoming, the current study investigated how different face views are processed, by measuring eye movements to frontal, mid-profile and profile faces during a gender categorization (Experiment 1) and a free-viewing task (Experiment 2). In both experiments observers initially fixated the geometric center of a face, independent of face view. This center-of-gravity effect induced a qualitative shift in the features that were sampled across different face views in the time period immediately after stimulus onset. Subsequent eye fixations focused increasingly on specific facial features. At this stage, the eye regions were targeted predominantly in all face views, and to a lesser extent also the nose and the mouth. These findings show that initial saccades to faces are driven by general stimulus properties, before eye movements are redirected to the specific facial features in which observers take an interest. These findings are illustrated in detail by plotting the distribution of fixations, first fixations, and percentage fixations across time. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Saliency does not account for fixations to eyes within social scenes Journal Article In: Vision Research, vol. 49, pp. 2992–3000, 2009. @article{Birmingham2009, We assessed the role of saliency in driving observers to fixate the eyes in social scenes. Saliency maps (Itti & Koch, 2000) were computed for the scenes from three previous studies. Saliency provided a poor account of the data. The saliency values for the first-fixated locations were extremely low and no greater than what would be expected by chance. In addition, the saliency values for the eye regions were low. Furthermore, whereas saliency was no better at predicting early saccades than late saccades, the average latency to fixate social areas of the scene (e.g., the eyes) was very fast (within 200 ms). Thus, visual saliency does not account for observers' bias to select the eyes within complex social scenes, nor does it account for fixation behavior in general. Instead, it appears that observers' fixations are driven largely by their default interest in social information. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Get real! Resolving the debate about equivalent social stimuli Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 904–924, 2009. @article{Birmingham2009a, Gaze and arrow studies of spatial orienting have shown that eyes and arrows produce nearly identical effects on shifts of spatial attention. This has led some researchers to suggest that the human attention system considers eyes and arrows as equivalent social stimuli. However, this view does not fit with the general intuition that eyes are unique social stimuli nor does it agree with a large body of work indicating that humans possess a neural system that is preferentially biased to process information regarding human gaze. To shed light on this discrepancy we entertained the idea that the model cueing task may fail to measure some of the ways that eyes are special. Thus rather than measuring the orienting of attention to a location cued by eyes and arrows, we measured the selection of eyes and arrows embedded in complex real-world scenes. The results were unequivocal: People prefer to look at other people and their eyes; they rarely attend to arrows. This outcome was not predicted by visual saliency but it was predicted by the idea that eyes are social stimuli that are prioritized by the attention system. These data, and the paradigm from which they were derived, shed new light on past cueing studies of social attention, and they suggest a new direction for future investigations of social attention. |
Sarah Bate; Catherine Haslam; Timothy L. Hodgson Angry faces are special too: Evidence From the visual scanpath Journal Article In: Neuropsychology, vol. 23, no. 5, pp. 658–667, 2009. @article{Bate2009a, Traditional models of face processing posit independent pathways for the processing of facial identity and facial expression (e.g., Bruce & Young, 1986). However, such models have been questioned by recent reports that suggest positive expressions may facilitate recognition (e.g., Baudouin et al., 2000), although little attention has been paid to the role of negative expressions. The current study used eye movement indicators to examine the influence of emotional expression (angry, happy, neutral) on the recognition of famous and novel faces. In line with previous research, the authors found some evidence that only happy expressions facilitate the processing of famous faces. However, the processing of novel faces was enhanced by the presence of an angry expression. Contrary to previous findings, this paper suggests that angry expressions also have an important role in the recognition process, and that the influence of emotional expression is modulated by face familiarity. The implications of this finding are discussed in relation to (1) current models of face processing, and (2) theories of oculomotor control in the viewing of facial stimuli. |
Sarah Bate; Catherine Haslam; Ashok Jansari; Timothy L. Hodgson Covert face recognition relies on affective valence in congenital prosopagnosia Journal Article In: Cognitive Neuropsychology, vol. 26, no. 4, pp. 391–411, 2009. @article{Bate2009, Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied–nice compared to studied–aggressive faces, and performance for studied–neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity. |
Paul M. Bays; R. F. G. Catalao; Masud Husain The precision of visual working memory is set by allocation of a shared resource Journal Article In: Journal of Vision, vol. 9, no. 10, pp. 7–7, 2009. @article{Bays2009, The mechanisms underlying visual working memory have recently become controversial. One account proposes a small number of memory "slots," each capable of storing a single visual object with fixed precision. A contrary view holds that working memory is a shared resource, with no upper limit on the number of items stored; instead, the more items that are held in memory, the less precisely each can be recalled. Recent findings from a color report task have been taken as crucial new evidence in favor of the slot model. However, while this task has previously been thought of as a simple test of memory for color, here we show that performance also critically depends on memory for location. When errors in memory are considered for both color and location, performance on this task is in fact well explained by the resource model. These results demonstrate that visual working memory consists of a common resource distributed dynamically across the visual scene, with no need to invoke an upper limit on the number of objects represented. |
Mark W. Becker; Brian Detweiler-Bedell Early detection and avoidance of threatening faces during passive viewing Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 7, pp. 1257–1264, 2009. @article{Becker2009, To evaluate whether there is an early attentional bias towards negative stimuli, we tracked participants' eyes while they passively viewed displays composed of four Ekman faces. In Experiment 1 each display consisted of three neutral faces and one face depicting fear or happiness. In half of the trials, all faces were inverted. Although the passive viewing task should have been very sensitive to attentional biases, we found no evidence that overt attention was biased towards fearful faces. Instead, people tended to actively avoid looking at the fearful face. This avoidance was evident very early in scene viewing, suggesting that the threat associated with the faces was evaluated rapidly. Experiment 2 replicated this effect and extended it to angry faces. In sum, our data suggest that negative facial expressions are rapidly analysed and influence visual scanning, but, rather than attract attention, such faces are actively avoided. |
Stefanie I. Becker; Ulrich Ansorge; Massimo Turatto Saccades reveal that allocentric coding of the moving object causes mislocalization in the flash-lag effect Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 6, pp. 1313–1324, 2009. @article{Becker2009a, The flash-lag effect is a visual misperception of a position of a flash relative to that of a moving object: Even when both are at the same position, the flash is reported to lag behind the moving object. In the present study, the flash-lag effect was investigated with eye-movement measurements: Subjects were required to saccade to either the flash or the moving object. The results showed that saccades to the flash were precise, whereas saccades to the moving object showed an offset in the direction of motion. A further experiment revealed that this offset in the saccades to the moving object was eliminated when the whole background flashed. This result indicates that saccadic offsets to the moving stimulus critically depend on the spatially distinctive flash in the vicinity of the moving object. The results are incompatible with current theoretical explanations of the flash-lag effect, such as the motion extrapolation account. We propose that allocentric coding of the position of the moving object could account for the flash-lag effect. |
Artem V. Belopolsky; Jan Theeuwes When are attention and saccade preparation dissociated? Journal Article In: Psychological Science, vol. 20, no. 11, pp. 1340–1347, 2009. @article{Belopolsky2009, To understand the mechanisms of visual attention, it is crucial to know the relationship between attention and saccades. Some theories propose a close relationship, whereas others view the attention and saccade systems as completely independent. One possible way to resolve this controversy is to distinguish between the maintenance and shifting of attention. The present study used a novel paradigm that allowed simultaneous measurement of attentional allocation and saccade preparation. Saccades toward the location where attention was maintained were either facilitated or suppressed depending on the probability of making a saccade to that location and the match between the attended location and the saccade location on the previous trial. Shifting attention to another location was always associated with saccade facilitation. The findings provide a new view, demonstrating that the maintenance of attention and shifting of attention differ in their relationship to the oculomotor system. |
Christoph Bledowski; Benjamin Rahm; James B. Rowe What "works" in working memory? Separate systems for selection and updating of critical information Journal Article In: Journal of Neuroscience, vol. 29, no. 43, pp. 13735–13741, 2009. @article{Bledowski2009, Cognition depends critically on working memory, the active representation of a limited number of items over short periods of time. In addition to the maintenance of information during the course of cognitive processing, many tasks require that some of the items in working memory become transiently more important than others. Based on cognitive models of working memory, we hypothesized two complementary essential cognitive operations to achieve this: a selection operation that retrieves the most relevant item, and an updating operation that changes the focus of attention onto it. Using functional magnetic resonance imaging, high-resolution oculometry, and behavioral analysis, we demonstrate that these two operations are functionally and neuroanatomically dissociated. Updating the attentional focus elicited transient activation in the caudal superior frontal sulcus and posterior parietal cortex. In contrast, increasing demands on selection selectively modulated activation in rostral superior frontal sulcus and posterior cingulate/precuneus. We conclude that prioritizing one memory item over others invokes independent mechanisms of mnemonic retrieval and attentional focusing, each with its distinct neuroanatomical basis within frontal and parietal regions. These support the developing understanding of working memory as emerging from the interaction between memory and attentional systems. |
Tanya Blekher; Marjorie R. Weaver; Xueya Cai; Siu L. Hui; Jeanine Marshall; Jacqueline Gray Jackson; Joanne Wojcieszek; Robert D. Yee; Tatiana M. Foroud Test-retest reliability of saccadic measures in subjects at risk for Huntington disease Journal Article In: Investigative Ophthalmology & Visual Science, vol. 50, no. 12, pp. 5707–5711, 2009. @article{Blekher2009, PURPOSE Abnormalities in saccades appear to be sensitive and specific biomarkers in the prediagnostic stages of Huntington disease (HD). The goal of this study was to evaluate test-retest reliability of saccadic measures in prediagnostic carriers of the HD gene expansion (PDHD) and normal controls (NC). METHODS The study sample included 9 PDHD and 12 NC who completed two study visits within an approximate 1-month interval. At the first visit, all participants completed a uniform clinical evaluation. A high-resolution, video-based system was used to record eye movements during completion of a battery of visually guided, antisaccade, and memory-guided tasks. Latency, velocity, gain, and percentage of errors were quantified. Test-retest reliability was estimated by calculating the intraclass correlation (ICC) of the saccade measures collected at the first and second visits. In addition, an equality test based on Fisher's z-transformation was used to evaluate the effects of group (PDHD and NC) and the subject's sex on ICC. RESULTS The percentage of errors showed moderate to high reliability in the antisaccade and memory-guided tasks (ICC = 0.64-0.93). The latency of the saccades also demonstrated moderate to high reliability (ICC = 0.55-0.87) across all tasks. The velocity and gain of the saccades showed moderate reliability. The ICC was similar in the PDHD and NC groups. There was no significant effect of sex on the ICC. CONCLUSIONS Good reliability of saccadic latency and percentage of errors in both antisaccade and memory-guided tasks suggests that these measures could serve as biomarkers to evaluate progression in HD. |
Tanya Blekher; Marjorie R. Weaver; Jeanine Marshall; Siu L. Hui; Jacqueline Gray Jackson; Julie C. Stout; Xabier Beristain; Joanne Wojcieszek; Robert D. Yee; Tatiana M. Foroud Visual scanning and cognitive performance in prediagnostic and early-stage Huntington's disease Journal Article In: Movement Disorders, vol. 24, no. 4, pp. 533–540, 2009. @article{Blekher2009a, The objective of this study was to evaluate visual scanning strategies in carriers of the Huntington disease (HD) gene expansion and to test whether there is an association between measures of visual scanning and cognitive performance. The study sample included control (NC |
Tanya Blekher; Marjorie R. Weaver; Jason Rupp; William C. Nichols; Siu L. Hui; Jacqueline Gray; Robert D. Yee; Joanne Wojcieszek; Tatiana M. Foroud Multiple step pattern as a biomarker in Parkinson disease Journal Article In: Parkinsonism and Related Disorders, vol. 15, no. 7, pp. 506–510, 2009. @article{Blekher2009b, Objective: To evaluate quantitative measures of saccades as possible biomarkers in early stages of Parkinson disease (PD) and in a population at-risk for PD. Methods: The study sample (n = 68) included mildly to moderately affected PD patients, their unaffected siblings, and control individuals. All participants completed a clinical evaluation by a movement disorder neurologist. Genotyping of the G2019S mutation in the LRRK2 gene was performed in the PD patients and their unaffected siblings. A high resolution, video-based eye tracking system was employed to record eye positions during a battery of visually guided, anti-saccadic (AS), and two memory-guided (MG) tasks. Saccade measures (latency, velocity, gain, error rate, and multiple step pattern) were quantified. Results: PD patients and a subgroup of their unaffected siblings had an abnormally high incidence of multiple step patterns (MSP) and reduced gain of saccades as compared with controls. The abnormalities were most pronounced in the more challenging version of the MG task. For this task, the MSP measure demonstrated good sensitivity (87%) and excellent specificity (96%) in the ability to discriminate PD patients from controls. PD patients and their siblings also made more errors in the AS task. Conclusions: Abnormalities in eye movement measures appear to be sensitive and specific measures in PD patients as well as a subset of those at-risk for PD. The inclusion of quantitative laboratory testing of saccadic movements may increase the sensitivity of the neurological examination to identify individuals who are at greater risk for PD. |
Jens Bölte; Andrea Böhl; Christian Dobel; Pienie Zwitserlood Effects of referential ambiguity, time constraints and addressee orientation on the production of morphologically complex words Journal Article In: European Journal of Cognitive Psychology, vol. 21, no. 8, pp. 1166–1199, 2009. @article{Boelte2009, In five experiments, participants were asked to describe unambiguously a target picture in a picture-picture paradigm. In the same-category condition, target (e.g., water bucket) and distractor picture (e.g., ice bucket) had identical names when their preferred, morphologically simple, name was used (e.g., bucket). The ensuing lexical ambiguity could be resolved by compound use (e.g., water bucket). Simple names sufficed as means of specification in other conditions, with distractors identical to the target, completely unrelated, or geometric figures. With standard timing parameters, participants produced mainly ambiguous answers in Experiment 1. An increase in available processing time hardly improved unambiguous responding (Experiment 2). A referential communication instruction (Experiment 3) increased the number of compound responses considerably, but morphologically simple answers still prevailed. Unambiguous responses outweighed ambiguous ones in Experiment 4, when timing parameters were further relaxed. Finally, the requirement to name both objects resulted in a nearly perfect ambiguity resolution (Experiment 5). Together, the results showed that speakers overcome lexical ambiguity only when time permits, when an addressee perspective is given and, most importantly, when their own speech overtly signals the ambiguity. |
Walter R. Boot; Ensar Becic; Arthur F. Kramer Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search Journal Article In: Journal of Vision, vol. 9, no. 3, pp. 1–16, 2009. @article{Boot2009, Previous studies have demonstrated large individual differences in scanning strategy during a dynamic visual search task (E. Becic, A. F. Kramer, & W. R. Boot, 2007; W. R. Boot, A. F. Kramer, E. Becic, D. A. Wiegmann, & T. Kubose, 2006). These differences accounted for substantial variance in performance. Participants who chose to search covertly (without eye movements) excelled, whereas participants who searched overtly (with eye movements) performed poorly. The aim of the current study was to investigate the stability of scanning strategies across different visual search tasks in an attempt to explain why a large percentage of observers might engage in maladaptive strategies. Scanning strategy was assessed for a group of observers across a variety of search tasks without feedback (efficient search, inefficient search, change detection, dynamic search). While scanning strategy was partly determined by task demands, stable individual differences emerged. Participants who searched either overtly or covertly tended to adopt the same strategy regardless of the demands of the search task, even in tasks in which such a strategy was maladaptive. However, when participants were given explicit feedback about their performance during search and performance incentives, strategies across tasks diverged. Thus it appears that observers by default will favor a particular search strategy but can modify this strategy when it is clearly maladaptive to the task. |
Erhardt Barth; Eleonora Vig; Michael Dorr Efficient visual coding and the predictability of eye movements on natural movies Journal Article In: Spatial Vision, vol. 22, no. 5, pp. 397–408, 2009. @article{Barth2009, We deal with the analysis of eye movements made on natural movies in free-viewing conditions. Saccades are detected and used to label two classes of movie patches as attended and non-attended. Machine learning techniques are then used to determine how well the two classes can be separated, i.e., how predictable saccade targets are. Although very simple saliency measures are used and then averaged to obtain just one average value per scale, the two classes can be separated with an ROC score of around 0.7, which is higher than previously reported results. Moreover, predictability is analysed for different representations to obtain indirect evidence for the likelihood of a particular representation. It is shown that the predictability correlates with the local intrinsic dimension in a movie. |
Gerry T. M. Altmann; Yuki Kamide Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation Journal Article In: Cognition, vol. 111, no. 1, pp. 55–71, 2009. @article{Altmann2009, Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either 'The woman will put the glass on the table' or 'The woman is too lazy to put the glass on the table'. Subsequently, with the scene unchanged, participants heard that the woman 'will pick up the bottle, and pour the wine carefully into the glass.' Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after 'pour' (anticipating the glass) and at 'glass' reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations). |
Amanda L. Gamble; Ronald M. Rapee The time-course of attentional bias in anxious children and adolescents Journal Article In: Journal of Anxiety Disorders, vol. 23, no. 7, pp. 841–847, 2009. @article{Gamble2009, This study examined the time-course of attentional bias in anxious and non-anxious children and adolescents aged 7-17 years using eye movement as an index of selective attention. Participants completed two eye-tracking tasks in which they viewed happy-neutral and negative-neutral face pairs for 3000 and 500 ms, respectively. When face pairs were presented for 3000 ms eye movement data showed no evidence of an attentional bias at any stage of attentional processing. When face pairs were presented for 500 ms a bias in initial orienting occurred; anxious adolescents directed their first fixation away from negative faces and anxious children directed their first fixation away from happy faces. Results suggest that childhood anxiety is characterized by a bias in initial orienting, with no bias in sustained attention, although only for briefly presented faces. |
Karsten Georg; Markus Lappe Effects of saccadic adaptation on visual localization before and during saccades Journal Article In: Experimental Brain Research, vol. 192, no. 1, pp. 9–23, 2009. @article{Georg2009, Short-term saccadic adaptation is a mechanism that adjusts saccade amplitude to accurately reach an intended saccade target. Short-term saccadic adaptation induces a shift of perceived localization of objects flashed before the saccade. This shift, being detectable only before an adapted saccade, disappears at some time around saccade onset. Up to now, the exact time course of this effect has remained unknown. In previous experiments, the mislocalization caused by this adaptation-induced shift was overlapping with the mislocalization caused by a different, saccade-related localization error, the peri-saccadic compression. Due to peri-saccadic compression, objects flashed immediately at saccade onset appear compressed towards the saccade target. First, we tested whether the adaptation-induced shift and the peri-saccadic compression were either independent or related processes. We performed experiments with two different luminance-contrast conditions to separate the adaptation-induced shift and the peri-saccadic compression. Human participants had to indicate the perceived location of briefly presented stimuli before, during or after an adapted saccade. Adaptation-induced shift occurred similarly in either contrast condition, with or without peri-saccadic compression. Second, after validating the premise of both processes being independent and superimposing, we aimed at characterizing the time course of the adaptation-induced shift in more detail. Being present up to 1 s before an adapted saccade, the adaptation-induced shift begins to gradually decline from about 150 ms before saccade onset, and ceases during the saccade. A final experiment revealed that visual references make a major contribution to adaptation-induced mislocalization. |
Todd M. Herrington; Nicolas Y. Masse; Karim J. Hachmeh; Jackson E. T. Smith; John A. Assad; Erik P. Cook The effect of microsaccades on the correlation between neural activity and behavior in middle temporal, ventral intraparietal, and lateral intraparietal areas Journal Article In: Journal of Neuroscience, vol. 29, no. 18, pp. 5793–5805, 2009. @article{Herrington2009, It is widely reported that the activity of single neurons in visual cortex is correlated with the perceptual decision of the subject. The strength of this correlation has implications for the neuronal populations generating the percepts. Here we asked whether microsaccades, which are small, involuntary eye movements, contribute to the correlation between neural activity and behavior. We analyzed data from three different visual detection experiments, with neural recordings from the middle temporal (MT), lateral intraparietal (LIP), and ventral intraparietal (VIP) areas. All three experiments used random dot motion stimuli, with the animals required to detect a transient or sustained change in the speed or strength of motion. We found that microsaccades suppressed neural activity and inhibited detection of the motion stimulus, contributing to the correlation between neural activity and detection behavior. Microsaccades accounted for as much as 19% of the correlation for area MT, 21% for area LIP, and 17% for VIP. While microsaccades only explain part of the correlation between neural activity and behavior, their effect has implications when considering the neuronal populations underlying perceptual decisions. |
Susanne Hertel; Andreas Sprenger; Christine Klein; Detlef Kömpf; Christoph Helmchen; Hubert Kimmig Different saccadic abnormalities in PINK1 mutation carriers and in patients with non-genetic Parkinson's disease Journal Article In: Journal of Neurology, vol. 256, no. 7, pp. 1192–1194, 2009. @article{Hertel2009, |
Richard W. Hertle; Joost Felius; Dongsheng Yang; Matthew Kaufman Eye muscle surgery for infantile nystagmus syndrome in the first two years of life Journal Article In: Clinical Ophthalmology, vol. 3, no. 1, pp. 615–624, 2009. @article{Hertle2009, PURPOSE: To report visual and electrophysiological effects of eye muscle surgery in young patients with infantile nystagmus syndrome (INS). METHODS: Prospective, interventional case cohort of 19 patients aged under 24 months who were operated on for combinations of strabismus, an anomalous head posture, and nystagmus. All patients were followed at least nine months. Outcome measures, part of an institutionally approved study, included Teller acuity, head position, strabismic deviation, and eye movement recordings, from which waveform types and a nystagmus optimal foveation fraction (NOFF) were derived. Computerized parametric and nonparametric statistical analyses of data were performed using standard software on both individual and group data. RESULTS: Age averaged 17.7 months (13.1-month follow-up). Thirteen (68%) patients had associated optic nerve or retinal disease. 42% had amblyopia, and 68% had refractive errors. Group means in binocular Teller acuity (P < 0.05), strabismic deviation (P < 0.05), head posture (P < 0.001), and the NOFF measures (P < 0.01) from eye movement recordings improved in all patients. There was a change in null zone waveforms to more favorable jerk types. There were no reoperations or surgical complications. CONCLUSIONS: Surgery on the extraocular muscles in patients aged less than two years with INS results in improvements in multiple aspects of ocular motor and visual function. |
Valerie Higenell; Brian J. White; Joshua R. Hwang; Douglas P. Munoz Localizing the neural substrate of reflexive covert orienting Journal Article In: Journal of Eye Movement Research, vol. 6, no. 1, pp. 1–14, 2009. @article{Higenell2009, The capture of covert spatial attention by salient visual events influences subsequent gaze behavior. A task-irrelevant stimulus (cue) can reduce (attention capture, AC) or prolong (inhibition of return, IOR) saccade reaction time to a subsequent target stimulus, depending on the cue-target delay. Here we investigated the mechanisms that underlie the sensory-based account of AC/IOR by manipulating the visual processing stage where the cue and target interact. In Experiment 1, liquid crystal shutter goggles were used to test whether AC/IOR occur at a monocular versus binocular processing stage (before versus after signals from both eyes converge). In Experiment 2, we tested whether visual orientation selective mechanisms are critical for AC/IOR by using oriented Gabor stimuli. We found that the magnitude of AC and IOR was not different between monocular and interocular viewing conditions, or between iso- and ortho-oriented cue-target interactions. The results suggest that the visual mechanisms that contribute to AC/IOR arise at an orientation-independent binocular processing stage. |
Jesse Hochstadt Set-shifting and the on-line processing of relative clauses in Parkinson's disease: Results from a novel eye-tracking method Journal Article In: Cortex, vol. 45, no. 8, pp. 991–1011, 2009. @article{Hochstadt2009, Past research indicates that in Parkinson's disease (PD), set-shifting deficits cause impaired comprehension of sentences containing restrictive relative clauses (RCs). Some research also suggests that verbal working memory deficits impair comprehension of long-distance (LD) dependencies in sentences with center-embedded RCs. To test whether these deficits impair comprehension by affecting on-line processing, we tracked patients' eye movements as they matched pictures with sentences with final- or center-embedded RCs (e.g., The queen was kicking the cook who was fat, The queen who was kicking the cook was thin) and active or passive verbs. Decreases in looks to distracters ruled out at the transitive verb (e.g., a cook kicking a queen) and the adjective (a fat queen kicking a thin cook) reflected how effective processing was at those points. Though patients showed greater difficulty comprehending center-embedded and passive sentences, set-shifting errors correlated with comprehension of all sentences. Consistent with this, patients with poorer comprehension exhibited impaired on-line processing of both center-embedded and final RCs (for which comprehension was better due to their grammatical simplicity), and these effects correlated with set-shifting errors. We consider two possible explanations for this apparently general RC-processing deficit. First, because RCs are infrequent, set-shifting may be needed to override the processor's expectations for higher-frequency structures. Second, because restrictive RCs typically refer to information already in the discourse context, set-shifting may be needed to redirect attention from linguistic foreground to background information. Eye-tracking data indicated no difficulty processing LD dependencies; correlations of verbal working memory with comprehension of passive and center-embedded sentences may reflect off-line use of memory. In trials with passive verbs, patients looked toward the verb distracter before even processing the verb. This effect was larger than that previously seen for young participants, suggesting that PD may amplify a normal bias to assume the subject noun is the agent. |
Timothy L. Hodgson; Benjamin A. Parris; Nicola J. Gregory; Tracey Jarvis The saccadic Stroop effect: Evidence for involuntary programming of eye movements by linguistic cues Journal Article In: Vision Research, vol. 49, no. 5, pp. 569–574, 2009. @article{Hodgson2009, The effect of automatic priming of behaviour by linguistic cues is well established. However, as yet these effects have not been directly demonstrated for eye movement responses. We investigated the effect of linguistic cues on eye movements using a modified version of the Stroop task in which a saccade was made to the location of a peripheral colour patch which matched the "ink" colour of a centrally presented word cue. The words were either colour words ("red", "green", "blue", "yellow") or location words ("up", "down", "left", "right"). As in the original version of the Stroop task the identity of the word could be either congruent or incongruent with the response location. The results showed that oculomotor programming was influenced by word identity, even though the written word provided no task relevant information. Saccade latency was increased on incongruent trials and an increased frequency of error saccades was observed in the direction congruent with the word identity. The results argue against traditional distinctions between reflexive and voluntary programming of saccades and suggest that linguistic cues can also influence eye movement programming in an automatic manner. |
Lee Hogarth; Anthony Dickinson; Theodora Duka Detection versus sustained attention to drug cues have dissociable roles in mediating drug seeking behavior Journal Article In: Experimental and Clinical Psychopharmacology, vol. 17, no. 1, pp. 21–30, 2009. @article{Hogarth2009, It is commonly thought that attentional bias for drug cues plays an important role in motivating human drug-seeking behavior. To assess this claim, two groups of smokers were trained in a discrimination task in which a tobacco-seeking response was rewarded only in the presence of 1 particular stimulus (the S+). The key manipulation was that whereas 1 group could control the duration of S+ presentation, for the second group, this duration was fixed. The results showed that the fixed-duration group acquired a sustained attentional bias to the S+ over training, indexed by greater dwell time and fixation count, which emerged in parallel with the control exerted by the S+ over tobacco-seeking behavior. By contrast, the controllable-duration group acquired no sustained attentional bias for S+ and instead used efficient detection of the S+ to achieve a comparable level of control over tobacco seeking. These data suggest that detection and sustained attention to drug cues have dissociable roles in enabling drug cues to motivate drug-seeking behavior, which has implications for attentional retraining as a treatment for addiction. |
Emmanuel Guzman-Martinez; Parkson Leung; Steven L. Franconeri; Marcia Grabowecky; Satoru Suzuki Rapid eye-fixation training without eyetracking Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 3, pp. 491–496, 2009. @article{GuzmanMartinez2009, Maintenance of stable central eye fixation is crucial for a variety of behavioral, electrophysiological, and neuroimaging experiments. Naive observers in these experiments are not typically accustomed to fixating, either requiring the use of cumbersome and costly eyetracking or producing confounds in results. We devised a flicker display that produced an easily detectable visual phenomenon whenever the eyes moved. A few minutes of training using this display dramatically improved the accuracy of eye fixation while observers performed a demanding spatial attention cuing task. The same amount of training using control displays did not produce significant fixation improvements, and some observers consistently made eye movements to the peripheral attention cue, contaminating the cuing effect. Our results indicate that (1) eye fixation can be rapidly improved in naive observers by providing real-time feedback about eye movements, and (2) our simple flicker technique provides an easy and effective method for providing this feedback. |
Tuomo Häikiö; Raymond Bertram; Jukka Hyönä; Pekka Niemi Development of the letter identity span in reading: Evidence from the eye movement moving window paradigm Journal Article In: Journal of Experimental Child Psychology, vol. 102, no. 2, pp. 167–181, 2009. @article{Haeikioe2009, By means of the moving window paradigm, we examined how many letters can be identified during a single eye fixation and whether this letter identity span changes as a function of reading skill. The results revealed that 8-year-old Finnish readers identify approximately 5 characters, 10-year-old readers identify approximately 7 characters, and 12-year-old and adult readers identify approximately 9 characters to the right of fixation. Comparison with earlier studies revealed that the letter identity span is smaller than the span for identifying letter features and that it is as wide in Finnish as in English. Furthermore, the letter identity span of faster readers of each age group was larger than that of slower readers, indicating that slower readers, unlike faster readers, allocate most of their processing resources to foveally fixated words. Finally, slower second graders were largely not disrupted by smaller windows, suggesting that their word decoding skill is not yet fully automatized. |
Glenda Halliday; Maria Trinidad Herrero; Karen Murphy; Heather McCann; Francisco Ros-Bernal; Carlos Barcia; Hideo Mori; Francisco J. Blesa; Jose A. Obeso No Lewy pathology in monkeys with over 10 years of severe MPTP Parkinsonism Journal Article In: Movement Disorders, vol. 24, no. 10, pp. 1519–1545, 2009. @article{Halliday2009, The recent knowledge that 10 years after transplantation surviving human fetal neurons adopt the histopathology of Parkinson's disease suggests that Lewy body formation takes a decade to achieve. To determine whether similar histopathology occurs in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-primate models over a similar timeframe, the brains of two adult monkeys made parkinsonian in their youth with intermittent injections of MPTP were studied. Despite substantial nigral degeneration and increased α-synuclein immunoreactivity within surviving neurons, there was no evidence of Lewy body formation. This suggests that MPTP-induced oxidative stress and inflammation per se are not sufficient for Lewy body formation, or Lewy bodies are human specific. |
Derek A. Hamilton; Travis E. Johnson; Edward S. Redhead; Steven P. Verney Control of rodent and human spatial navigation by room and apparatus cues Journal Article In: Behavioural Processes, vol. 81, no. 2, pp. 154–169, 2009. @article{Hamilton2009, A growing body of literature indicates that rats prefer to navigate in the direction of a goal in the environment (directional responding) rather than to the precise location of the goal (place navigation). This paper provides a brief review of this literature with an emphasis on recent findings in the Morris water task. Four experiments designed to extend this work to humans in a computerized, virtual Morris water task are also described. Special emphasis is devoted to how directional responding and place navigation are influenced by room and apparatus cues, and how these cues control distinct components of navigation to a goal. Experiments 1 and 2 demonstrate that humans, like rats, perform directional responses when cues from the apparatus are present, while Experiment 3 demonstrates that place navigation predominates when apparatus cues are eliminated. In Experiment 4, an eyetracking system measured gaze location in the virtual environment dynamically as participants navigated from a start point to the goal. Participants primarily looked at room cues during the early segment of each trial, but primarily focused on the apparatus as the trial progressed, suggesting distinct, sequential stimulus functions. Implications for computational modeling of navigation in the Morris water task and related tasks are discussed. |
Robin Hawes Vision and reality: Relativity in art Journal Article In: Digital Creativity, vol. 20, no. 3, pp. 177–186, 2009. @article{Hawes2009, Artist and researcher, Robin Hawes, presents a recently completed art/science collaboration which examined the processes undertaken by the eye in providing sensory data to the brain and aimed to explore the internally constructive and idiosyncratic aspects of visual perception. With the physiology of the retina providing inconsistent quality of information across our field of view, the project set out to reveal the disparity between the visual information gathered by our eyes and the conscious picture of ‘reality' formed in our minds. The paper will map out the psychological, physiological and philosophical basis for the research, as well as presenting images produced by the project. In essence, each time someone contemplates a work of art, the work of art is re-constructed ‘internally'. This project set out, in part at least, to make ‘visible' this hitherto internal, idiosyncratic, unique and unshared neurological event. |
Benjamin Y. Hayden; Jack L. Gallant Combined effects of spatial and feature-based attention on responses of V4 neurons Journal Article In: Vision Research, vol. 49, no. 10, pp. 1182–1187, 2009. @article{Hayden2009, Attention is thought to be controlled by a specialized fronto-parietal network that modulates the responses of neurons in sensory and association cortex. However, the principles by which this network affects the responses of these sensory and association neurons remain unknown. In particular, it remains unclear whether different forms of attention, such as spatial and feature-based attention, independently modulate responses of single neurons. We recorded responses of single V4 neurons in a task that controls both forms of attention independently. We find that the combined effects of spatial and feature-based attention can be described as the sum of independent processes with a small super-additive interaction term. This pattern of effects demonstrates that the spatial and feature-based aspects of the attentional control system can independently affect responses of single neurons. These results are consistent with the idea that spatial and feature-based attention are controlled by distinct neural substrates whose effects combine synergistically to influence responses of visual neurons. |
Benjamin Y. Hayden; David V. Smith; Michael L. Platt Electrophysiological correlates of default-mode processing in macaque posterior cingulate cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 106, no. 14, pp. 5948–5953, 2009. @article{Hayden2009a, During the course of daily activity, our level of engagement with the world varies on a moment-to-moment basis. Although these fluctuations in vigilance have critical consequences for our thoughts and actions, almost nothing is known about the neuronal substrates governing such dynamic variations in task engagement. We investigated the hypothesis that the posterior cingulate cortex (CGp), a region linked to default-mode processing by hemodynamic and metabolic measures, controls such variations. We recorded the activity of single neurons in CGp in 2 macaque monkeys performing simple tasks in which their behavior varied from vigilant to inattentive. We found that firing rates were reliably suppressed during task performance and returned to a higher resting baseline between trials. Importantly, higher firing rates predicted errors and slow behavioral responses, and were also observed during cued rest periods when monkeys were temporarily liberated from exteroceptive vigilance. These patterns of activity were not observed in the lateral intraparietal area, an area linked to the frontoparietal attention network. Our findings provide physiological confirmation that CGp mediates exteroceptive vigilance and are consistent with the idea that CGp is part of the "default network" of brain areas associated with control of task engagement. |
H. S. Greenwald; David C. Knill Cue integration outside central fixation: A study of grasping in depth Journal Article In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009. @article{Greenwald2009, We assessed the usefulness of stereopsis across the visual field by quantifying how retinal eccentricity and distance from the horopter affect humans' relative dependence on monocular and binocular cues about 3D orientation. The reliabilities of monocular and binocular cues both decline with eccentricity, but the reliability of binocular information decreases more rapidly. Binocular cue reliability also declines with increasing distance from the horopter, whereas the reliability of monocular cues is virtually unaffected. We measured how subjects integrated these cues to orient their hands when grasping oriented discs at different eccentricities and distances from the horopter. Subjects relied increasingly less on binocular disparity as targets' retinal eccentricity and distance from the horopter increased. The measured cue influences were consistent with what would be predicted from the relative cue reliabilities at the various target locations. Our results showed that relative reliability affects how cues influence motor control and that stereopsis is of limited use in the periphery and away from the horopter because monocular cues are more reliable in these regions. |
Stefan Grondelaers; Dirk Speelman; Denis Drieghe; Marc Brysbaert; Dirk Geeraerts In: Acta Psychologica, vol. 130, no. 2, pp. 1–33, 2009. @article{Grondelaers2009, This paper reports on the ways in which new entities are introduced into discourse. First, we present the evidence in support of a model of indefinite reference processing based on three principles: the listener's ability to make predictive inferences in order to decrease the unexpectedness of upcoming words, the availability to the speaker of grammatical constructions that customize predictive inferences, and the use of "expectancy monitors" to signal and facilitate the introduction of highly unpredictable entities. We provide evidence that one of these expectancy monitors in Dutch is the post-verbal variant of existential er (the equivalent of the unstressed existential "there" in English). In an eye-tracking experiment we demonstrate that the presence of er decreases the processing difficulties caused by low subject expectancy. A corpus-based regression analysis subsequently confirms that the production of er is determined almost exclusively by seven parameters of low subject expectancy. Together, the comprehension and production data suggest that while existential er functions as an expectancy monitor in much the same way as speech disfluencies (hesitations, pauses and filled pauses), er is a higher-level expectancy monitor because it is available in spoken and written discourse and because it is produced more systematically than any disfluency. |
Mackenzie G. Glaholt; Eyal M. Reingold The time course of gaze bias in visual decision tasks Journal Article In: Visual Cognition, vol. 17, no. 8, pp. 1228–1243, 2009. @article{Glaholt2009a, In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003), and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task), or the one that they judged to be photographed most recently (recency task). Across experiments and tasks, we demonstrated robust bias towards the chosen item in either gaze duration, gaze frequency or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision response-related explanation. |
Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold Predicting preference from fixations Journal Article In: PsychNology Journal, vol. 7, no. 2, pp. 141–158, 2009. @article{Glaholt2009, We measured the strength of the association between looking behaviour and preference. Participants selected the most preferred face out of a grid of 8 faces. Fixation times were correlated with selection on a trial-by-trial basis, as well as with explicit preference ratings. Furthermore, by ranking features based on fixation times, we were able to successfully predict participants' preferences for novel feature combinations in a two-alternative forced choice task. In addition, we obtained a similar pattern of findings in a very different stimulus domain: mock company logos. Our results indicated that fixation times can be used to predict selection in large arrays and they might also be employed to estimate preferences for whole stimuli as well as their constituent features. |
Diana J. Gorbet; Lauren E. Sergio The behavioural consequences of dissociating the spatial directions of eye and arm movements Journal Article In: Brain Research, vol. 1284, pp. 77–88, 2009. @article{Gorbet2009, Many of our daily movements use visual information to guide our arms toward objects of interest. Typically, these visually guided movements involve first focusing our gaze on the intended target and then reaching toward the direction of our gaze. The literature on eye-hand coordination provides a great deal of evidence that circuitry in the brain exists which can couple eye and arm movements. Moving both of these effectors towards a common spatial direction may be a default setting used by the brain to simplify the planning of movements. We tested this idea in 20 subjects using two experimental tasks. In a "Standard" condition, the eyes and a cursor were guided to the same spatial location by moving the arm (on a touchpad) and the eyes in the same direction. In a "Dissociated" condition, the eye and cursor were again guided to the same spatial location but the arm was required to move in a direction opposite to the eyes to successfully achieve this goal. In this study, we observed that dissociating the directions of eye and arm movement significantly changed the kinematic properties of both effectors including the latency and peak velocity of eye movements and the curvature of hand-path trajectories. Thus, forcing the brain to plan simultaneous eye and arm movements in different directions alters some of the basic (and often stereotyped) characteristics of motor responses. We suggest that interference with the function of a neural network that couples gaze and reach to congruent spatial locations underlies these kinematic alterations. |
John M. Henderson; Myriam Chanceaux; Tim J. Smith The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements Journal Article In: Journal of Vision, vol. 9, no. 1, pp. 1–8, 2009. @article{Henderson2009b, We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes. |
John M. Henderson; George L. Malcolm; Charles Schandl Searching in the dark: Cognitive relevance drives attention in real-world scenes Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 850–856, 2009. @article{Henderson2009, We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes. |
John M. Henderson; Tim J. Smith How are eye fixation durations controlled during scene viewing? Further evidence from a scene onset delay paradigm Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1055–1082, 2009. @article{Henderson2009a, Recent research on eye movements during scene viewing has focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. In two scene memorization experiments and one visual search experiment, the scene was removed from view during critical fixations for a predetermined delay, and then restored following the delay. Experiment 1 compared filled (pattern mask) and unfilled (grey field) delays. Experiment 2 compared random to blocked delays. Experiment 3 extended the results to a visual search task. The results demonstrate that fixation durations in scene viewing comprise two fixation populations. One population remains relatively constant across delay, and the second population increases with scene onset delay. The results are consistent with a mixed eye movement control model that incorporates an autonomous control mechanism with process monitoring. The results suggest that a complete gaze control model will have to account for both fixation location and fixation duration. |
Po-He Tseng; Ran Carmi; Ian G. M. Cameron; Douglas P. Munoz; Laurent Itti Quantifying center bias of observers in free viewing of dynamic natural scenes Journal Article In: Journal of Vision, vol. 9, no. 7, pp. 4–4, 2009. @article{Tseng2009, Human eye-tracking studies have shown that gaze fixations are biased toward the center of natural scene stimuli ("center bias"). This bias contaminates the evaluation of computational models of attention and oculomotor behavior. Here we recorded eye movements from 17 participants watching 40 MTV-style video clips (with abrupt scene changes every 2-4 s), to quantify the relative contributions of five causes of center bias: photographer bias, motor bias, viewing strategy, orbital reserve, and screen center. Photographer bias was evaluated by five naive human raters and correlated with eye movements. The frequently changing scenes in MTV-style videos allowed us to assess how motor bias and viewing strategy affected center bias across time. In an additional experiment with 5 participants, videos were displayed at different locations within a large screen to investigate the influences of orbital reserve and screen center. Our results demonstrate quantitatively for the first time that center bias is correlated strongly with photographer bias and is influenced by viewing strategy at scene onset, while orbital reserve, screen center, and motor bias contribute minimally. We discuss methods to account for these influences to better assess computational models of visual attention and gaze using natural scene stimuli. |
Naotsugu Tsuchiya; Farshad Moradi; Csilla Felsen; Madoka Yamazaki; Ralph Adolphs Intact rapid detection of fearful faces in the absence of the amygdala Journal Article In: Nature Neuroscience, vol. 12, no. 10, pp. 1224–1225, 2009. @article{Tsuchiya2009, The amygdala is thought to process fear-related stimuli rapidly and nonconsciously. We found that an individual with complete bilateral amygdala lesions, who cannot recognize fear from faces, nonetheless showed normal rapid detection and nonconscious processing of those same fearful faces. We conclude that the amygdala is not essential for early stages of fear processing but, instead, modulates recognition and social judgment. |
Ilse Tydgat; Jonathan Grainger Serial position effects in the identification of letters, digits, and symbols Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 480–498, 2009. @article{Tydgat2009, In 6 experiments, the authors investigated the form of serial position functions for identification of letters, digits, and symbols presented in strings. The results replicated findings obtained with the target search paradigm, showing an interaction between the effects of serial position and type of stimulus, with symbols generating a distinct serial position function compared with letters and digits. When the task was 2-alternative forced choice, this interaction was driven almost exclusively by performance at the first position in the string, with letters and digits showing much higher levels of accuracy than symbols at this position. A final-position advantage was reinstated in Experiment 6 by placing the two alternative responses below the target string. The end-position (first and last positions) advantage for letters and digits compared with symbol stimuli was further confirmed with the bar-probe technique (postcued partial report) in Experiments 5 and 6. Overall, the results further support the existence of a specialized mechanism designed to optimize processing of strings of letters and digits by modifying the size and shape of retinotopic character detectors' receptive fields. |
Geoffrey Underwood; Tom Foulsham; Katherine Humphrey Saliency and scan patterns in the inspection of real-world scenes: Eye movements during encoding and recognition Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 812–834, 2009. @article{Underwood2009, How do sequences of eye fixations match each other when viewing a picture during encoding and again during a recognition test, and to what extent are fixation sequences (scan patterns) determined by the low-level visual features of the picture rather than the domain knowledge of the viewer? The saliency map model of visual attention was tested in two experiments to ask whether the rank ordering of regions by their saliency values can be used to predict the sequence of fixations made when first looking at an image. Experiment 1 established that the sequence of fixations on first inspection during encoding was similar to that made when looking at the picture the second time, in the recognition test. Experiment 2 confirmed this similarity of fixation sequences at encoding and recognition, and also found a similarity between scan patterns made during the initial recognition test and during a second recognition test 1 week later. The fixation scan patterns were not similar to those predicted by the saliency map model in either experiment, however. These conclusions are qualified by interactions involving the match between the content of the image and the domain of interest of the viewers. |
Seppo Vainio; Jukka Hyönä; Anneli Pajunen Lexical predictability exerts robust effects on fixation duration, but not on initial landing position during reading Journal Article In: Experimental Psychology, vol. 56, no. 1, pp. 66–74, 2009. @article{Vainio2009, An eye movement experiment was conducted to examine effects of local lexical predictability on fixation durations and fixation locations during sentence reading. In the high-predictability condition, a verb strongly constrained the lexical identity of the following word, while in the low-predictability condition the target word could not be predicted on the basis of the verb. The results showed that first fixation and gaze duration on the target noun were reliably shorter in the high-predictability than in the low-predictability condition. However, initial fixation location was not affected by lexical predictability. As regards eye guidance in reading, the present study indicates that local lexical predictability influences when decisions but not where the initial fixation lands in a word. |
Matteo Valsecchi; Massimo Turatto Microsaccadic responses in a bimodal oddball task Journal Article In: Psychological Research, vol. 73, no. 1, pp. 23–33, 2009. @article{Valsecchi2009, In a visual oddball task the presentation of rare targets induces a prolonged microsaccadic inhibition as compared to standards. Here, we replicated this effect also in the auditory modality. In addition, although auditory standards induced a more limited modulation of microsaccadic frequency as compared to visual standards, auditory oddballs induced a prolonged microsaccadic inhibition. With bimodal standard stimuli the microsaccadic response was determined by the attended modality, resembling that produced by attended unimodal stimuli. The present findings support the idea that the microsaccadic response to oddball and standard stimuli is partly driven by cognitive mechanisms common to both the visual and the auditory modality, and that microsaccades can be used as an implicit behavioral measure of ongoing cognitive processes. |
Eva Van Assche; Wouter Duyck; Robert J. Hartsuiker; Kevin Diependaele Does bilingualism change native-language reading? Cognate effects in a sentence context Journal Article In: Psychological Science, vol. 20, no. 8, pp. 923–927, 2009. @article{VanAssche2009, Becoming a bilingual can change a person's cognitive functioning and language processing in a number of ways. This study focused on how knowledge of a second language influences how people read sentences written in their native language. We used the cognate-facilitation effect as a marker of cross-lingual activations in both languages. Cognates (e.g., Dutch-English schip [ship]) and controls were presented in a sentence context, and eye movements were monitored. Results showed faster reading times for cognates than for controls. Thus, this study shows that one of people's most automated skills, reading in one's native language, is changed by the knowledge of a second language. |
Karli K. Watson; Jason H. Ghodasra; Michael L. Platt Serotonin transporter genotype modulates social reward and punishment in rhesus macaques Journal Article In: PLoS ONE, vol. 4, no. 1, pp. e4156, 2009. @article{Watson2009a, BACKGROUND: Serotonin signaling influences social behavior in both human and nonhuman primates. In humans, variation upstream of the promoter region of the serotonin transporter gene (5-HTTLPR) has recently been shown to influence both behavioral measures of social anxiety and amygdala response to social threats. Here we show that length polymorphisms in 5-HTTLPR predict social reward and punishment in rhesus macaques, a species in which 5-HTTLPR variation is analogous to that of humans. METHODOLOGY/PRINCIPAL FINDINGS: In contrast to monkeys with two copies of the long allele (L/L), monkeys with one copy of the short allele of this gene (S/L) spent less time gazing at face images than at non-face images, spent less time looking in the eye region of faces, and had larger pupil diameters when gazing at photos of high- versus low-status male macaques. Moreover, in a novel primed gambling task, presentation of photos of high-status male macaques promoted risk-aversion in S/L monkeys but promoted risk-seeking in L/L monkeys. Finally, as measured by a "pay-per-view" task, S/L monkeys required juice payment to view photos of high-status males, whereas L/L monkeys sacrificed fluid to see the same photos. CONCLUSIONS/SIGNIFICANCE: These data indicate that genetic variation in serotonin function contributes to social reward and punishment in rhesus macaques, and thus shapes social behavior in humans and rhesus macaques alike. |
Tamara L. Watson; Bart Krekelberg The relationship between saccadic suppression and perceptual stability Journal Article In: Current Biology, vol. 19, no. 12, pp. 1040–1043, 2009. @article{Watson2009, Introspection makes it clear that we do not see the visual motion generated by our saccadic eye movements. We refer to the lack of awareness of the motion across the retina that is generated by a saccade as saccadic omission [1]: the visual stimulus generated by the saccade is omitted from our subjective awareness. In the laboratory, saccadic omission is often studied by investigating saccadic suppression, the reduction in visual sensitivity before and during a saccade (see Ross et al. [2] and Wurtz [3] for reviews). We investigated whether perceptual stability requires that a mechanism like saccadic suppression removes perisaccadic stimuli from visual processing to prevent their presumed harmful effect on perceptual stability [4, 5]. Our results show that a stimulus that undergoes saccadic omission can nevertheless generate a shape contrast illusion. This illusion can be generated when the inducer and test stimulus are separated in space and is therefore thought to be generated at a later stage of visual processing [6]. This shows that perceptual stability is attained without removing stimuli from processing and suggests a conceptually new view of perceptual stability in which perisaccadic stimuli are processed by the early visual system, but these signals are prevented from reaching awareness at a later stage of processing. |
Andrew E. Welchman; Julie M. Harris; Eli Brenner Extra-retinal signals support the estimation of 3D motion Journal Article In: Vision Research, vol. 49, no. 7, pp. 782–789, 2009. @article{Welchman2009, In natural settings, our eyes tend to track approaching objects. To estimate motion, the brain should thus take account of eye movements, perhaps using retinal cues (retinal slip of static objects) or extra-retinal signals (motor commands). Previous work suggests that extra-retinal ocular vergence signals do not support the perceptual judgments. Here, we re-evaluate this conclusion, studying motion judgments based on retinal slip and extra-retinal signals. We find that (1) each cue can be sufficient, and, (2) retinal and extra-retinal signals are combined, when estimating motion-in-depth. This challenges the accepted view that observers are essentially blind to eye vergence changes. |
Åsa Wengelin; Mark Torrance; Kenneth Holmqvist; Sol Simpson; David Galbraith; Victoria Johansson; Roger Johansson Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production Journal Article In: Behavior Research Methods, vol. 41, no. 2, pp. 337–351, 2009. @article{Wengelin2009, Writers typically spend a certain proportion of time looking back over the text that they have written. This is likely to serve a number of different functions, which are currently poorly understood. In this article, we present two systems, ScriptLog+ TimeLine and EyeWrite, that adopt different and complementary approaches to exploring this activity by collecting and analyzing combined eye movement and keystroke data from writers composing extended texts. ScriptLog+ TimeLine is a system that is based on an existing keystroke-logging program and uses heuristic, pattern-matching methods to identify reading episodes within eye movement data. EyeWrite is an integrated editor and analysis system that permits identification of the words that the writer fixates and their location within the developing text. We demonstrate how the methods instantiated within these systems can be used to make sense of the large amount of data generated by eyetracking and keystroke logging in order to inform understanding of the cognitive processes that underlie written text production. |
Gregory L. West; Timothy N. Welsh; Jay Pratt Saccadic trajectories receive online correction: Evidence for a feedback-based system of oculomotor control Journal Article In: Journal of Motor Behavior, vol. 41, no. 2, pp. 117–126, 2009. @article{West2009, Although a considerable amount of research has investigated the planning and production of saccadic eye movements, it remains unclear whether (a) central planning processes prior to movement onset largely determine these eye movements or (b) they receive online correction during the actual trajectory. To investigate this issue, the authors measured the spatial position of the eye at specific kinematic markers during saccadic movements (i.e., peak acceleration, peak velocity, peak deceleration, saccade endpoint). In 2 experiments, the authors examined saccades ranging in amplitude from 4 to 20 degrees and computed the variability profiles (SD) of eye position at each kinematic marker and the proportion of explained variance (R2) between each kinematic marker and the saccade endpoint. In Experiment 1, the authors examined differences in the kinematic signature of saccadic online control between eye movements made in gap or overlap conditions. In Experiment 2, the authors examined the online control of saccades made from stored target information after delays of 500, 1,500, and 3,500 ms. Findings evince a robust and consistent feedback-based system of online oculomotor control during saccadic eye movements. |
Chin-An Wang; Jie-Li Tsai; Albrecht W. Inhoff; Ovid J. L. Tzeng Acquisition of linguistic information to the left of fixation during the reading of Chinese text Journal Article In: Language and Cognitive Processes, vol. 24, no. 7-8, pp. 1097–1123, 2009. @article{Wang2009b, The linguistic properties of the first (critical) character of a two-character Chinese word were manipulated when the eyes moved to the right of the critical character during reading to determine whether character processing is strictly unidirectional. In Experiment 1, the critical character was replaced with a congruent or incongruent character or left unchanged. Critical character changes did not influence the fixation duration, but incongruent changes led to more regressions than congruent changes. In Experiment 2, the critical character was replaced with either a homophonic or a non-homophonic character when it was to the left of fixation. The fixation following the change was now longer when the replaced character and the critical character were homophones than when they were phonologically dissimilar. These results indicate that readers obtain phonological and semantic information to the left of a fixated character and that the recognition of consecutive Chinese characters is not strictly unidirectional. |
Hsueh-Cheng Wang; Alex D. Hwang; Marc Pomplun Object frequency and predictability effects on eye fixation durations in real-world scene viewing Journal Article In: Journal of Eye Movement Research, vol. 3, no. 3, pp. 1–10, 2009. @article{Wang2009c, During text reading, the durations of eye fixations decrease with greater frequency and predictability of the currently fixated word (Rayner, 1998; 2009). However, it has not been tested whether those results also apply to scene viewing. We computed object frequency and predictability from both linguistic and visual scene analysis (LabelMe, Russell et al., 2008), and Latent Semantic Analysis (Landauer et al., 1998) was applied to estimate predictability. In a scene-viewing experiment, we found that, for small objects, linguistics-based frequency, but not scene-based frequency, had effects on first fixation duration, gaze duration, and total time. Both linguistic and scene-based predictability affected total time. Similar to reading, fixation duration decreased with higher frequency and predictability. For large objects, we found the direction of effects to be the inverse of those found in reading studies. These results suggest that the recognition of small objects in scene viewing shares some characteristics with the recognition of words in reading. |
Z. I. Wang; Louis F. Dell'Osso Factors influencing pursuit ability in infantile nystagmus syndrome: Target timing and foveation capability. Journal Article In: Vision Research, vol. 49, no. 2, pp. 182–189, 2009. @article{Wang2009, We wished to determine the influential factors for Infantile Nystagmus Syndrome (INS) subjects' ability to acquire and pursue moving targets using predictions from the behavioral Ocular Motor System (OMS) model and data from INS subjects. Ocular motor simulations using a behavioral OMS model were performed in MATLAB Simulink. Eye-movement recordings were performed using a high-speed digital video system. We studied five INS subjects who pursued a 10°/s ramp target to both left and right. We measured their target-acquisition times based on position criteria. The following parameters were studied: Lt (measured from the target-ramp initiation to the first on-target foveation period), target pursuit direction, and foveation-period pursuit gain. Analyses and simulations were performed in MATLAB environment using OMLAB software (OMtools, download from http://www.omlab.org). Ramp-target timing influenced target-acquisition time; the closer to the intrinsic saccades in the waveform the ramp stimuli started, the longer was Lt. However, arriving at the target position may not guarantee its foveation. Foveation-period pursuit gains vs. target or slow-phase direction had an idiosyncratic relationship for each subject. Adjustments to the model's Fixation subsystem reproduced the idiosyncratic foveation-period pursuit gains; the gain of the Smooth Pursuit subsystem was maintained at its normal value. The model output predicted a steady-state error when target initiation occurred during intrinsic saccades, consistent with human data. We conclude that INS subjects acquire ramp targets with longer latency for target initiations during or near the intrinsic saccades, consistent with the findings in our step-stimuli timing study. 
This effect might be due to the interaction between the saccadic and pursuit systems. The combined effects of target timing and Fixation-subsystem gain determined how fast and how well the INS subjects pursued ramp stimuli during their foveation periods (i.e., their foveation-period pursuit gain). The OMS model again demonstrated its behavioral characteristics and prediction capabilities (e.g., steady-state error) and revealed an important interaction between the Fixation and Smooth Pursuit subsystems. |
Tessa Warren; Sarah J. White; Erik D. Reichle Investigating the causes of wrap-up effects: Evidence from eye movements and E-Z Reader Journal Article In: Cognition, vol. 111, no. 1, pp. 132–137, 2009. @article{Warren2009, Wrap-up effects in reading have traditionally been thought to reflect increased processing associated with intra- and inter-clause integration (Just, M. A. & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87(4), 329-354; Rayner, K., Kambe, G., & Duffy, S. A. (2000). The effect of clause wrap-up on eye movements during reading. The Quarterly Journal of Experimental Psychology, 53A(4), 1061-1080; cf. Hirotani, M., Frazier, L., & Rayner, K. (2006). Punctuation and intonation effects on clause and sentence wrap-up: Evidence from eye movements. Journal of Memory and Language, 54, 425-443). We report an eye-tracking experiment with a strong manipulation of integrative complexity at a critical word that was either sentence-final, ended a comma-marked clause, or was not comma-marked. Although both complexity and punctuation had reliable effects, they did not interact in any eye-movement measure. These results as well as simulations using the E-Z Reader model of eye-movement control (Reichle, E. D., Warren, T., & McConnell, K. (2009). Using E-Z Reader to model the effects of higher-level language processing on eye movements during reading. Psychonomic Bulletin & Review, 16(1), 1-20) suggest that traditional accounts of clause wrap-up are incomplete. |
Carolin Wienrich; Uta Heße; Gisela Müller-Plath Eye movements and attention in visual feature search with graded target-distractor-similarity Journal Article In: Journal of Eye Movement Research, vol. 3, no. 1, pp. 1–19, 2009. @article{Wienrich2009, We conducted a visual feature search experiment in which we varied the target-distractor-similarity in four steps, the number of items (4, 6, and 8), and the presence of the target. In addition to classical search parameters like error rate and reaction time (RT), we analyzed saccade amplitudes, fixation durations, and the portion of reinspections (recurred fixation on an item with at least one different item fixated in between) and refixations (recurred fixation on an item without a different item fixated in between) per trial. When target-distractor-similarity was increased, more errors and longer RTs were observed, accompanied by shorter saccade amplitudes, longer fixation durations, and more reinspections/refixations. An increasing set size resulted in longer saccade amplitudes and shorter fixation durations. Finally, in target-absent trials we observed more reinspections than refixations, whereas in target-present trials refixations were more frequent than reinspections. The results on saccade amplitude and fixation duration support saliency-based search theories that assume an attentional focus variable in size according to task demands and a variable attentional dwell time. Reinspections and refixations seem to be a sign of incomplete perceptual processing of items rather than being due to memory failure. |
Carrick C. Williams; Alexander Pollatsek; Kyle R. Cave; Michael J. Stroud More than just finding color: Strategy in global visual search is shaped by learned target probabilities Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 688–699, 2009. @article{Williams2009, In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue T) was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search though clusters, the use of color was determined by the probability that the target would appear in a cluster of a certain color type: When the target was equally likely to be in any cluster containing the target color, fixations were directed to those clusters approximately equally, but when targets were more likely to appear in clusters with more target-color items, those clusters were likely to be fixated sooner. (The target probabilities guided search without explicit instruction.) Once fixated, the time spent within a cluster depended on the number of target-color elements, consistent with a search of only those elements. Thus, between-cluster search was influenced by global target probabilities signaled by amount of color or color ratios, whereas within-cluster search was directly driven by presence of the target color. |
Heather Winskel Reading in Thai: the case of misaligned vowels Journal Article In: Reading and Writing, vol. 22, no. 1, pp. 1–24, 2009. @article{Winskel2009, Thai has its own distinctive alphabetic script with syllabic characteristics, as it has implicit vowels for some consonants. Consonants are written in a linear order, but vowels can be written non-linearly above, below or to either side of the consonant. Of particular interest to the current study is that vowels can precede the consonant in writing but follow it in speech, hence a mismatch between the spoken and written sequence occurs. In order to investigate whether there is a processing cost associated with this discrepancy between spoken and written sequence for vowels, and the implications this has in relation to the grain size used when reading Thai, the eye movements of adults reading words with and without misaligned vowels in sentences were recorded using the EyeLink II tracking system. Twenty-four university students read 50 pairs of words with misaligned and aligned vowels, matched for length and frequency, embedded in the same sentence frames. In addition, rapid naming data from forty adults were collected. Data from forty children aged 6;6–8;6 years reading and spelling comparable words were also collected and analysed for errors. Results revealed a processing cost for the more severely misaligned words, where the vowel operates across the syllable, and give support for a syllabic rather than phonemic level of segmentation for reading and spelling in Thai adults and children. |
Heather Winskel; Ralph Radach; Sudaporn Luksaneeyanawin Eye movements when reading spaced and unspaced Thai and English: A comparison of Thai-English bilinguals and English monolinguals Journal Article In: Journal of Memory and Language, vol. 61, no. 3, pp. 339–351, 2009. @article{Winskel2009a, The study investigated the eye movements of Thai-English bilinguals when reading both Thai and English with and without interword spaces, in comparison with English monolinguals. Thai is an alphabetic orthography without interword spaces. Participants read sentences with high and low frequency target words embedded in same sentence frames with and without interword spaces. Interword spaces had a selective effect on reading in Thai, as they facilitated word recognition, but did not affect eye guidance and lexical segmentation. Initial saccade landing positions were similar in spaced and unspaced text. As expected, removal of spaces severely disrupted reading in English, as reflected by the eye movement measures, in both bilinguals and monolinguals. Here, initial landing positions were significantly nearer the beginning of the target words when reading unspaced rather than spaced text. Effects were more accentuated in the bilinguals. In sum, results from reading in Thai give qualified support for a facilitatory function of interword spaces. |
Dagmar A. Wismeijer; Casper J. Erkelens The effect of changing size on vergence is mediated by changing disparity Journal Article In: Journal of Vision, vol. 9, no. 13, article 12, pp. 1–10, 2009. @article{Wismeijer2009, In this study, we investigated the effect of changing size on vergence. Erkelens and Regan (1986) proposed that this cue to motion in depth affects vergence in a similar way as it affects perception. The measured effect on vergence was small and we wondered why the vergence system would use changing size as an additional cue to changing disparity. To elucidate the effect of changing size on vergence, we used an annulus carrying both changing size and changing disparity signals to motion in depth. The cues were either congruent or signaled a different depth. The results showed that vergence was affected by changing size, although in the opposite way from how perception was affected. These results were incongruent with those reported by Erkelens and Regan (1986). We therefore additionally measured the effects on vergence of the individual parameters associated with changing size, i.e., stimulus area, retinal eccentricity, and luminance. Stimulus (retinal) eccentricity was inversely related to vergence gain. Luminance, on the other hand, had a smaller but positive relation to vergence gain. Thus, changing size affected the disparity signal two-fold: it changed the retinal location of the disparity signal and it changed the strength of the disparity signal (luminance change). These effects of changing size on disparity can explain both our results (change in retinal location of the disparity signal) and those of Erkelens and Regan (1986; change in luminance). We thus conclude that changing size did not in itself contribute to vergence; rather, its effect on vergence was mediated by disparity. |
Menno Schoot; Annemieke H. Bakker Arkema; Tako M. Horsley; Ernest C. D. M. Lieshout In: Contemporary Educational Psychology, vol. 34, no. 1, pp. 58–66, 2009. @article{Schoot2009, This study examined the effects of consistency (relational term consistent vs. inconsistent with the required arithmetic operation) and markedness (relational term unmarked ['more than'] vs. marked ['less than']) on word problem solving in 10- to 12-year-old children differing in problem-solving skill. The results showed that for unmarked word problems, less successful problem solvers showed an effect of consistency on regressive eye movements (longer and more regressions to solution-relevant problem information for inconsistent than consistent word problems) but not on error rate. For marked word problems, they showed the opposite pattern (effects of consistency on error rate, not on regressive eye movements). The conclusion was drawn that, like more successful problem solvers, less successful problem solvers can appeal to a problem-model strategy, but that they do so only when the relational term is unmarked. The results were discussed mainly with respect to the linguistic-semantic aspects of word problem solving. |
Menno Schoot; Alain L. Vasbinder; Tako M. Horsley; Albert Reijntjes; Ernest C. D. M. Lieshout Lexical ambiguity resolution in good and poor comprehenders: An eye fixation and self-paced reading study in primary school children Journal Article In: Journal of Educational Psychology, vol. 101, no. 1, pp. 21–36, 2009. @article{Schoot2009a, To investigate the use of context and monitoring of comprehension in lexical ambiguity resolution in children, the authors asked 10- to 12-year-old good and poor comprehenders to read sentences consisting of 2 clauses, 1 containing the ambiguous word and the other the disambiguating information. The order of the clauses was reversed so that disambiguating information either preceded or followed the ambiguous word. Context use and comprehension monitoring were examined by measuring eye fixations (Experiment 1) and self-paced reading times (Experiment 2) on the ambiguous word and disambiguating region. The results of Experiments 1 and 2 showed that poor comprehenders made use of prior context to facilitate lexical ambiguity resolution as effectively as good comprehenders but that they monitored their comprehension less effectively than good comprehenders. Good comprehenders corrected an initial interpretation error on an ambiguous word and restored comprehension once they encountered the disambiguating region. Poor comprehenders failed to deal with this type of comprehension failure. |
Stefan Van der Stigchel; Manon Mulckhuyse; Jan Theeuwes Eye cannot see it: The interference of subliminal distractors on saccade metrics Journal Article In: Vision Research, vol. 49, no. 16, pp. 2104–2109, 2009. @article{VanderStigchel2009, The present study investigated whether subliminal (unconsciously perceived) visual information influences eye movement metrics, like saccade trajectories and endpoints. Participants made eye movements upwards and downwards while a subliminal distractor was presented in the periphery. Results showed that the subliminal distractor interfered with the execution of an eye movement, although the effects were smaller compared to a control experiment in which the distractor was presented supraliminal. Because saccade metrics are mediated by low level brain areas, this indicates that subliminal visual information evokes competition at a very low level in the oculomotor system. |
Helene M. Ettinger-Veenstra; W. Huijbers; Tjerk P. Gutteling; M. Vink; J. Leon Kenemans; Sebastiaan F. W. Neggers In: Journal of Neurophysiology, vol. 102, no. 6, pp. 3469–3480, 2009. @article{EttingerVeenstra2009, It is well known that parts of a visual scene are prioritized for visual processing, depending on the current situation. How the CNS moves this focus of attention across the visual image is largely unknown, although there is substantial evidence that preparation of an action is a key factor. Our results support the view that direct corticocortical feedback connections from frontal oculomotor areas to the visual cortex are responsible for the coupling between eye movements and shifts of visuospatial attention. Functional magnetic resonance imaging (fMRI)-guided transcranial magnetic stimulation (TMS) was applied to the frontal eye fields (FEFs) and intraparietal sulcus (IPS). A single pulse was delivered 60, 30, or 0 ms before a discrimination target was presented at, or next to, the target of a saccade in preparation. Results showed that the known enhancement of discrimination performance specific to locations to which eye movements are being prepared was enhanced by early TMS on the FEF contralateral to eye movement direction, whereas TMS on the IPS resulted in a general performance increase. The current findings indicate that the FEF affects selective visual processing within the visual cortex itself through direct feedback projections. |
Kate Janse Van Rensburg; Adrian Taylor; Timothy L. Hodgson The effects of acute exercise on attentional bias towards smoking-related stimuli during temporary abstinence from smoking Journal Article In: Addiction, vol. 104, no. 11, pp. 1910–1917, 2009. @article{VanRensburg2009, RATIONALE: Attentional bias towards smoking-related cues is increased during abstinence and can predict relapse after quitting. Exercise has been found to reduce cigarette cravings and desire to smoke during temporary abstinence and attenuate increased cravings in response to smoking cues. OBJECTIVE: To assess the acute effects of exercise on attentional bias to smoking-related cues during temporary abstinence from smoking. METHOD: In a randomized cross-over design, on separate days regular smokers (n = 20) undertook 15 minutes of exercise (moderate intensity stationary cycling) or passive seating following 15 hours of nicotine abstinence. Attentional bias was measured at baseline and post-treatment. The percentage of dwell time and direction of initial fixation was assessed during the passive viewing of a series of paired smoking and neutral images using an EyeLink II eye-tracking system. Self-reported desire to smoke was recorded at baseline, mid- and post-treatment and post-eye-tracking task. RESULTS: There was a significant condition x time interaction for desire to smoke, F(1,18) = 10.67 |
Melissa L. -H. Võ; John M. Henderson Does gravity matter? Effects of semantic and syntactic inconsistencies on the allocation of attention during scene perception Journal Article In: Journal of Vision, vol. 9, no. 3, pp. 24–24, 2009. @article{Vo2009, It has been shown that attention and eye movements during scene perception are preferentially allocated to semantically inconsistent objects compared to their consistent controls. However, there has been a dispute over how early during scene viewing such inconsistencies are detected. In the study presented here, we introduced syntactic object–scene inconsistencies (i.e., floating objects) in addition to semantic inconsistencies to investigate the degree to which they attract attention during scene viewing. In Experiment 1 participants viewed scenes in preparation for a subsequent memory task, while in Experiment 2 participants were instructed to search for target objects. In neither experiment were we able to find evidence for extrafoveal detection of either type of inconsistency. However, upon fixation both semantically and syntactically inconsistent objects led to increased object processing as seen in elevated gaze durations and number of fixations. Interestingly, the semantic inconsistency effect was diminished for floating objects, which suggests an interaction of semantic and syntactic scene processing. This study is the first to provide evidence for the influence of syntactic in addition to semantic object–scene inconsistencies on eye movement behavior during real-world scene viewing. |
Michael Wagner; Walter H. Ehrenstein; Thomas V. Papathomas Vergence in reverspective: Percept-driven versus data-driven eye movement control Journal Article In: Neuroscience Letters, vol. 449, no. 2, pp. 142–146, 2009. @article{Wagner2009, 'Reverspectives' (by artist Patrick Hughes) consist of truncated pyramids with their small faces closer to the viewer, allowing realistic scenes to be painted on them. Because their pictorial perspective reverses the physical depth arrangement, reverspectives provide a bistable paradigm of two radically different, competing depth percepts, even when viewed binocularly: points that are physically further are perceived to be closer and vice versa. The key question addressed here is whether vergence is governed by the physical and/or the perceived depth of fixated targets. Vergence eye movements were recorded using the EyeLink II system under conditions optimized to obtain both the veridical and illusory depth percepts of a reverspective. Six gaze locations were signaled by LEDs placed at strategically selected depths on the stimulus surface. We obtained strong evidence that stable vergence fixations were governed by the percept: for the same LED position, eyes converged under veridical depth percepts and diverged under illusory percepts, thus rendering pictorial cues to be as effective as physical cues in vergence control. These results, obtained with stable fixations, do not disagree with earlier studies that found rapid fixational eye movements to be governed by physical depth cues. Together, these results allow us to speculate on the existence of at least two eye movement systems: an automatic, data-driven system for rapid successions of fixations; and a deliberate schema-driven vergence system that accounts for stable fixations based on the perceptual state of the observer. |
Robin Walker; Puncharat Techawachirakul; Patrick Haggard Frontal eye field stimulation modulates the balance of salience between target and distractors Journal Article In: Brain Research, vol. 1270, pp. 54–63, 2009. @article{Walker2009, Natural scenes generally include several possible objects that can be the target for a shift of gaze and attention. The oculomotor system may select a single target by boosting neural activation representing the target, and also by inhibiting neural activity associated with competing alternatives (distractors). We examine the role of the frontal eye field (FEF) in these processes through the effects of single-pulse transcranial magnetic stimulation (TMS) on the distractor-related modulation of saccade trajectories. Participants made voluntary saccades to peripheral locations specified by a central arrow-cue. On some trials, visual distractors appeared remote from the target location. The competing distractor produced a deviation of saccade trajectory, away from the distractor location. Single-pulse TMS stimulation of the right frontal eye field increased this distractor-related deviation compared to that observed when stimulation was applied to a control site (vertex). The increase in distractor-related deviation of trajectory, following FEF stimulation, was observed for saccades made in both the left and right visual fields and could not be attributed to an effect of TMS on saccade latency. The enhanced distractor-related deviation following FEF stimulation could reflect increased inhibition of the competing distractor, or reduced salience of the endogenous saccade goal. The results are interpreted in light of neurophysiological evidence that the human FEF is involved in the dynamic interaction between competing stimuli for the selection of a candidate target. |
Chin-An Wang; Albrecht W. Inhoff; Ralph Radach Is attention confined to one word at a time? The spatial distribution of parafoveal preview benefits during reading Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 7, pp. 1487–1494, 2009. @article{Wang2009a, Eye movements were recorded while participants read declarative sentences. Each sentence contained a critical three-word sequence with a three-letter target word (n), a spatially adjacent post-target word (n+1), and a subsequent nonadjacent post-target word (n+2). The parafoveal previews of words n and n+2 were manipulated so that they were either fully visible or masked until they were fixated. The results revealed longer word n and word n+1 viewing durations when word n had been masked in the parafovea, and this occurred irrespective of whether the target was skipped or fixated. Furthermore, masking of word n diminished the usefulness of the preview of word n+2. These results indicate that the effect of a parafoveally available target preview was not strictly localized. Instead, it influenced target viewing and the viewing of the two subsequent words in the text. These results are difficult to reconcile with the assumption that attention is confined to one word at a time until that word is recognized and that attention is then shifted from the recognized word to the next. |
Joris Vangeneugden; Frank E. Pollick; Rufin Vogels Functional differentiation of macaque visual temporal cortical neurons using a parametric action space Journal Article In: Cerebral Cortex, vol. 19, no. 3, pp. 593–611, 2009. @article{Vangeneugden2009, Neurons in the rostral superior temporal sulcus (STS) are responsive to displays of body movements. We employed a parametric action space to determine how similarities among actions are represented by visual temporal neurons and how form and motion information contributes to their responses. The stimulus space consisted of a stick-plus-point-light figure performing arm actions and their blends. Multidimensional scaling showed that the responses of temporal neurons represented the ordinal similarity between these actions. Further tests distinguished neurons responding equally strongly to static presentations and to actions ("snapshot" neurons), from those responding much less strongly to static presentations, but responding well when motion was present ("motion" neurons). The "motion" neurons were predominantly found in the upper bank/fundus of the STS, and "snapshot" neurons in the lower bank of the STS and inferior temporal convexity. Most "motion" neurons showed strong response modulation during the course of an action, thus responding to action kinematics. "Motion" neurons displayed a greater average selectivity for these simple arm actions than did "snapshot" neurons. We suggest that the "motion" neurons code for visual kinematics, whereas the "snapshot" neurons code for form/posture, and that both can contribute to action recognition, in agreement with computational models of action recognition. |
Rolf Verleger; Andreas Sprenger; Sina Gebauer; Michaela Fritzmannova; Monique Friedrich; Stefanie Kraft; Piotr Jaśkowski On why left events are the right ones: Neural mechanisms underlying the left-hemifield advantage in rapid serial visual presentation Journal Article In: Journal of Cognitive Neuroscience, vol. 21, no. 3, pp. 474–488, 2009. @article{Verleger2009, When simultaneous series of stimuli are rapidly presented left and right, containing two target stimuli T1 and T2, T2 is much better identified when presented in the left than in the right hemifield. Here, this effect was replicated, even when shifts of gaze were controlled, and was only partially compensated when T1 side provided the cue where to expect T2. Electrophysiological measurement revealed earlier latencies of T1- and T2-evoked N2(pc) peaks at the right than at the left visual cortex, and larger right-hemisphere T2-evoked N2(pc) amplitudes when T2 closely followed T1. These findings suggest that the right hemisphere was better able to single out the targets in time. Further, sustained contralateral slow shifts remained active after T1 for longer time at the right than at the left visual cortex, and developed more consistently at the right visual cortex when expecting T2 on the contralateral side. These findings might reflect better capacity of right-hemisphere visual working memory. These findings about the neurophysiological underpinnings of the large right-hemisphere advantage in this complex visual task might help elucidate the mechanisms responsible for the severe disturbance of hemineglect following damage to the right hemisphere. |
Marine Vernet Binocular motor coordination during saccades and fixations while reading: A magnitude and time analysis Journal Article In: Journal of Vision, vol. 9, pp. 1–13, 2009. @article{Vernet2009, Reading involves saccades and fixations. Misalignment of the eyes should be small enough to allow sensory fusion. Recent studies reported disparity of the eyes during fixations. This study examines disconjugacy, i.e. change in disparity over time, both during saccades and fixations. Text reading saccades and saccades to single targets of similar sizes (2.5°) are compared. Young subjects were screened to avoid problems of binocular vision and oculomotor vergence. The results show high quality of motor binocular coordination in both tasks: the amplitude difference between the saccades of the two eyes was approximately 0.16°; during the fixation period, the drift difference was only 0.13°. The disconjugate drift occurred mainly during the first 48 ms of fixation, was equally distributed to the eyes and was often reducing the saccade disconjugacy. Quality of coordination regardless of the task is indicative of robust physiological mechanisms. We suggest the existence of active binocular control mechanisms in which vergence signals may have a central role. Even computation of saccades may be based on continuous interaction between saccade and vergence. |
Marine Vernet; Qing Yang; Marie Gruselle; Mareike Trams; Zoï Kapoula Switching between gap and overlap pro-saccades: Cost or benefit? Journal Article In: Experimental Brain Research, vol. 197, no. 1, pp. 49–58, 2009. @article{Vernet2009a, Triggering of saccades depends on the task: in the gap task, the fixation point switches off and the target appears after a gap period; in the overlap task, the target appears while the fixation point is still on. Saccade latencies are shorter in the gap task, due to fixation disengagement and advanced movement preparation during the gap. The two modes of initiation are also hypothesized to be subtended by different cortical-subcortical circuits. This study tested whether interleaving the two tasks modifies latencies, due to switching between different modes of triggering. Two groups of healthy participants (21-29 vs. 39-55 years) made horizontal and vertical saccades in gap, overlap, and mixed tasks; saccades were recorded with the EyeLink. Both groups showed shorter latencies in the gap task, i.e. a robust gap effect and systematic differences between directions. For young adults, interleaving tasks made the latencies shorter or longer depending on direction, while for middle-aged adults, latencies became longer for all directions. Our observations can be explained in the context of models such as that of Brown et al. (Neural Netw 17:471-510, 2004), which proposed that different combinations of frontal eye field (FEF) layers, interacting with cortico-subcortical areas, control saccade triggering in gap and overlap trials. Moreover, we suggest that in early adulthood, the FEF is functioning optimally; frequent changes of activity in the FEF can be beneficial, leading to shorter latencies, at least for some directions. However, for middle-aged adults, frequent changes of activity of a less optimally functioning FEF can be time consuming. 
Studying the alternation of gap and overlap tasks provides a fine tool to explore development, aging and disease. |
Eric D. Vidoni; Jason S. McCarley; Jodi D. Edwards; Lara A. Boyd Manual and oculomotor performance develop contemporaneously but independently during continuous tracking Journal Article In: Experimental Brain Research, vol. 195, no. 4, pp. 611–620, 2009. @article{Vidoni2009, The coordination of the oculomotor and manual effector systems is an important component of daily motor behavior. Previous work has primarily examined oculomotor/manual coordination in discrete targeting tasks. Here we extend this work to learning a tracking task that requires continuous response and movement update. Over two sessions, participants practiced controlling a computer mouse with movements of their arm to follow a target moving in a repeated sequence. Eye movements were also recorded. In a retention test, participants demonstrated sequence-specific learning with both effector systems, but differences between effectors also were apparent. Time series analysis and multiple linear regression were employed to probe spatial and temporal contributions to overall tracking accuracy within each effector system. Sequence-specific oculomotor learning occurred only in the spatial domain. By contrast, sequence-specific learning at the arm was evident only in the temporal domain. There was minimal interdependence in error rates for the two effector systems, underscoring their independence during tracking. These findings suggest that the oculomotor and manual systems learn contemporaneously, but performance improvements manifest differently and rely on different elements of motor execution. The results may in part be a function of what the motor learning system values for each effector, given that effector's inertial properties. |
Sébastien Miellet; Patrick J. O'Donnell; Sara C. Sereno Parafoveal magnification: Visual acuity does not modulate the perceptual span in reading Journal Article In: Psychological Science, vol. 20, no. 6, pp. 721–728, 2009. @article{Miellet2009, Models of eye guidance in reading rely on the concept of the perceptual span—the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm—parafoveal magnification (PM)—that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attentional-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word. |
Bettina Olk; Alan Kingstone A new look at aging and performance in the antisaccade task: The impact of response selection Journal Article In: European Journal of Cognitive Psychology, vol. 21, no. 2-3, pp. 406–427, 2009. @article{Olk2009, Aged adults respond more slowly and less accurately in the antisaccade task, in which a saccade away from a visual stimulus is required. This decreased performance has been attributed to a decline in the ability to inhibit prepotent responses with age. Considering that antisaccades also involve response selection, the present experiment investigated the contribution of inhibition and response selection. Young and aged adults were compared between conditions that required varying percentages of prosaccades, antisaccades, and no-go trials. The comparison between no-go (inhibition of a prosaccade) and antisaccade trials (inhibition of a prosaccade and selection of an antisaccade) showed significantly worse performance in the antisaccade task, especially for the older group, suggesting that they failed to select the antisaccade in a situation in which a competing, prepotent response is available. The impact of this response selection failure was underlined by an equivalent ability of both groups to impose inhibition. |
Alper Açik; Selim Onat; Frank Schumann; Wolfgang Einhäuser; Peter König Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories Journal Article In: Vision Research, vol. 49, no. 12, pp. 1541–1553, 2009. @article{Acik2009, During viewing of natural scenes, do low-level features guide attention, and if so, does this depend on higher-level features? To answer these questions, we studied the image category dependence of low-level feature modification effects. Subjects fixated contrast-modified regions often in natural scene images, while smaller but significant effects were observed for urban scenes and faces. Surprisingly, modifications in fractal images did not influence fixations. Further analysis revealed an inverse relationship between modification effects and higher-level, phase-dependent image features. We suggest that high- and mid-level features - such as edges, symmetries, and recursive patterns - guide attention if present. However, if the scene lacks such diagnostic properties, low-level features prevail. We posit a hierarchical framework, which combines aspects of bottom-up and top-down theories and is compatible with our data. |
Arash Afraz; Patrick Cavanagh The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations Journal Article In: Journal of Vision, vol. 9, no. 10, pp. 1–17, 2009. @article{Afraz2009, In four experiments, we measured the gender-specific face aftereffect following the subject's eye movement, head rotation, or head movement toward the display, and following movement of the adapting stimulus itself to a new test location. In all experiments, the face aftereffect was strongest at the retinal position, orientation, and size of the adaptor. There was no advantage for the spatiotopic location in any experiment nor was there an advantage for the location newly occupied by the adapting face after it moved in the final experiment. Nevertheless, the aftereffect showed a broad gradient of transfer across location, orientation and size that, although centered on the retinotopic values of the adapting stimulus, covered ranges far exceeding the tuning bandwidths of neurons in early visual cortices. These results are consistent with a high-level site of adaptation (e.g. FFA) where units of face analysis have modest coverage of visual field, centered in retinotopic coordinates, but relatively broad tolerance for variations in size and orientation. |
Ozgur E. Akman; Richard A. Clement; David S. Broomhead; Sabira K. Mannan; Ian Moorhead; Hugh R. Wilson Probing bottom-up processing with multistable images Journal Article In: Journal of Eye Movement Research, vol. 1, no. 3, pp. 1–7, 2009. @article{Akman2009, The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of 8 subjects were recorded during free viewing of the Marroquin pattern in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models. |
Weimin Mou; Xianyun Liu; Timothy P. McNamara Layout geometry in encoding and retrieval of spatial memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 1, pp. 83–93, 2009. @article{Mou2009, Two experiments investigated whether the spatial reference directions that are used to specify objects' locations in memory can be solely determined by layout geometry. Participants studied a layout of objects from a single viewpoint while their eye movements were recorded. Subsequently, participants used memory to make judgments of relative direction (e.g., "Imagine you are standing at X, facing Y, please point to Z"). When the layout had a symmetric axis that was different from participants' viewing direction, the sequence of eye fixations on objects during learning and the preferred directions in pointing judgments were both determined by the direction of the symmetric axis. These results provide further evidence that interobject spatial relations are represented in memory with intrinsic frames of reference. |
Manon Mulckhuyse; Stefan Van der Stigchel; Jan Theeuwes Early and late modulation of saccade deviations by target distractor similarity Journal Article In: Journal of Neurophysiology, vol. 102, no. 3, pp. 1451–1458, 2009. @article{Mulckhuyse2009, In this study, we investigated the time course of oculomotor competition between bottom-up and top-down selection processes using saccade trajectory deviations as a dependent measure. We used a paradigm in which we manipulated saccade latency by offsetting the fixation point at different time points relative to target onset. In experiment 1, observers made a saccade to a filled colored circle while another irrelevant distractor circle was presented. The distractor was either similar (i.e., identical) or dissimilar to the target. Results showed that the strength of saccade deviation was modulated by target distractor similarity for short saccade latencies. To rule out the possibility that the similar distractor affected the saccade trajectory merely because it was identical to the target, the distractor in experiment 2 was a square shape of which only the color was similar or dissimilar to the target. The results showed that deviations for both short and long latencies were modulated by target distractor similarity. When saccade latencies were short, we found less saccade deviation away from a similar than from a dissimilar distractor. When saccade latencies were long, the opposite pattern was found: more saccade deviation away from a similar than from a dissimilar distractor. In contrast to previous findings, our study shows that task-relevant information can already influence the early processes of oculomotor control. We conclude that competition between saccadic goals is subject to two different processes with different time courses: one fast activating process signaling the saliency and task relevance of a location and one slower inhibitory process suppressing that location. |
Jérôme Munuera; Pierre Morel; Jean-Rene Duhamel; Sophie Deneve Optimal sensorimotor control in eye movement sequences Journal Article In: Journal of Neuroscience, vol. 29, no. 10, pp. 3026–3035, 2009. @article{Munuera2009, Fast and accurate motor behavior requires combining noisy and delayed sensory information with knowledge of self-generated body motion; much evidence indicates that humans do this in a near-optimal manner during arm movements. However, it is unclear whether this principle applies to eye movements. We measured the relative contributions of visual sensory feedback and the motor efference copy (and/or proprioceptive feedback) when humans perform two saccades in rapid succession, the first saccade to a visual target and the second to a memorized target. Unbeknownst to the subject, we introduced an artificial motor error by randomly "jumping" the visual target during the first saccade. The correction of the memory-guided saccade allowed us to measure the relative contributions of visual feedback and efference copy (and/or proprioceptive feedback) to motor-plan updating. In a control experiment, we extinguished the target during the saccade rather than changing its location to measure the relative contribution of motor noise and target localization error to saccade variability without any visual feedback. The motor noise contribution increased with saccade amplitude, but remained <30% of the total variability. Subjects adjusted the gain of their visual feedback for different saccade amplitudes as a function of its reliability. Even during trials where subjects performed a corrective saccade to compensate for the target jump, the correction by the visual feedback, while stronger, remained far below 100%. In all conditions, an optimal controller predicted the visual feedback gain well, suggesting that humans optimally combine their efference copy and sensory feedback when performing eye movements. |