EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2009 |
Eugene McSorley; Patrick Haggard; Robin Walker The spatial and temporal shape of oculomotor inhibition Journal Article In: Vision Research, vol. 49, no. 6, pp. 608–614, 2009. @article{McSorley2009, Selecting a stimulus as the target for a goal-directed movement involves inhibiting other competing possible responses. Inhibition has generally proved hard to study behaviorally, because it results in no measurable output. The effect of distractors on the shape of oculomotor and manual trajectories provides evidence of such inhibition. Individual saccades may deviate initially either towards, or away from, a competing distractor - the direction and extent of this deviation depend upon saccade latency, target predictability and the target-to-distractor separation. The experiment reported here used these effects to show how inhibition of distractor locations develops over time. Distractors could be presented at various distances from unpredictable and predictable targets in two separate experiments. The deviation of saccade trajectories was compared between trials with and without distractors, and inhibition was measured by saccade trajectory deviation. Inhibition increased as the distractor's distance from the target decreased, and it increased with saccade latency at all distractor distances (albeit to different peaks). Surprisingly, no differences were found between unpredictable and predictable targets, perhaps because our saccade latencies were generally long (∼260–280 ms). We conclude that oculomotor inhibition of saccades to possible target objects involves the same mechanisms for all distractor distances and target types. |
Eugene McSorley; Rachel McCloy Saccadic eye movements as an index of perceptual decision-making Journal Article In: Experimental Brain Research, vol. 198, no. 4, pp. 513–520, 2009. @article{McSorley2009a, One of the most common decisions we make is the one about where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence which supported saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task was found to become easier as the evidence supporting target choice increased. This was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation was found to increase with amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade. This would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options. |
John Palmer; Cathleen M. Moore Using a filtering task to measure the spatial extent of selective attention Journal Article In: Vision Research, vol. 49, no. 10, pp. 1045–1064, 2009. @article{Palmer2009, The spatial extent of attention was investigated by measuring sensitivity to stimuli at to-be-ignored locations. Observers detected a stimulus at a cued location (target), while ignoring otherwise identical stimuli at nearby locations (foils). Only an attentional cue distinguished target from foil. Several experiments varied the contrast and separation of targets and foils. Two theories of selection were compared: contrast gain and a version of attention switching called an all-or-none mixture model. Results included large effects of separation, rejection of the contrast gain model, and the measurement of the size and profile of the spatial extent of attention. |
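The two selection models compared in the Palmer and Moore abstract have standard quantitative forms. As a minimal sketch (illustrative parameter names and response functions, not the paper's exact equations): contrast gain scales effective contrast before a contrast-response nonlinearity, while an all-or-none mixture assumes the stimulus is fully processed on some proportion of trials and contributes nothing on the rest.

```python
import numpy as np

def contrast_gain(contrast, gain, c50=0.2, n=2.0):
    # Contrast-gain selection: attention scales effective contrast
    # before a standard contrast-response (Naka-Rushton-style) function.
    c = gain * contrast
    return c**n / (c**n + c50**n)

def all_or_none_mixture(contrast, p_select, c50=0.2, n=2.0):
    # All-or-none mixture: on proportion p_select of trials the stimulus
    # is fully processed; on the remainder it contributes nothing.
    full = contrast**n / (contrast**n + c50**n)
    return p_select * full

contrasts = np.linspace(0.05, 0.8, 5)
gain_pred = contrast_gain(contrasts, gain=0.5)
mixture_pred = all_or_none_mixture(contrasts, p_select=0.5)
```

The key empirical difference: the mixture model's predicted response saturates at `p_select` no matter how high contrast goes, whereas contrast gain can approach the full response, which is what makes the two distinguishable by varying target and foil contrast.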
Femke Maij; Eli Brenner; Jeroen B. J. Smeets Temporal information can influence spatial localization Journal Article In: Journal of Neurophysiology, vol. 102, no. 1, pp. 490–495, 2009. @article{Maij2009, To localize objects relative to ourselves, we need to combine various sensory and motor signals. When these signals change abruptly, as information about eye orientation does during saccades, small differences in latency between the signals could introduce localization errors. We examine whether independent temporal information can influence such errors. We asked participants to follow a randomly jumping dot with their eyes and to point at flashes that occurred near the time they made saccades. Such flashes are mislocalized. We presented a tone at different times relative to the flash. We found that the flash was mislocalized as if it had occurred closer in time to the tone. This demonstrates that temporal information is taken into consideration when combining sensory information streams for localization. |
George L. Malcolm; John M. Henderson The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements Journal Article In: Journal of Vision, vol. 9, no. 11, pp. 1–13, 2009. @article{Malcolm2009, We can locate an object more quickly in a real-world scene when a specific target template is held in visual working memory, but it is not known exactly how a target template's specificity affects real-world search. In the present study, we compared word and picture cues in real-world scene search. Using an eye-tracker, we segmented search time into three behaviorally defined epochs: search initiation time, scanning time, and verification time. Results from three experiments indicated that target template specificity affects scanning and verification time. Within the scanning epoch, target template specificity affected the number of scene regions visited and the mean fixation duration. Changes to SOA did not affect this pattern of results. Similarly, the pattern of results did not change when participants were familiarized with target images prior to testing, suggesting that an immediately preceding picture provides a more useful search template than one stored in long-term memory. The results suggest that the specificity of the target cue affects both the activation map representing potential target locations and the process that matches a fixated object to an internal representation of the target. |
Jamal K. Mansour; R. C. L. Lindsay; Neil Brewer; Kevin G. Munhall Characterizing visual behaviour in a lineup task Journal Article In: Applied Cognitive Psychology, vol. 23, no. 7, pp. 1012–1026, 2009. @article{Mansour2009, Eye tracking was used to monitor participants' visual behaviour while viewing lineups in order to determine whether gaze behaviour predicted decision accuracy. Participants viewed taped crimes followed by simultaneous lineups. Participants (N = 34) viewed 4 target-present and 4 target-absent lineups. Decision time, number of fixations and duration of fixations differed for selections vs. non-selections. Correct and incorrect selections differed only in terms of comparison-type behaviour involving the selected face. Correct and incorrect non-selections could be distinguished by decision time, number of fixations and duration of fixations on the target or most-attended face and comparisons. Implications of visual behaviour for judgment strategy (relative vs. absolute) are discussed. |
Sophie Marat; Tien Ho Phuoc; Lionel Granjon; Nathalie Guyader; Denis Pellerin; Anne Guérin-Dugué Modelling spatio-temporal saliency to predict gaze direction for short videos Journal Article In: International Journal of Computer Vision, vol. 82, no. 3, pp. 231–243, 2009. @article{Marat2009, This paper presents a spatio-temporal saliency model that predicts eye movement during video free viewing. This model is inspired by the biology of the first steps of the human visual system. The model extracts two signals from the video stream corresponding to the two main outputs of the retina: parvocellular and magnocellular. Then, both signals are split into elementary feature maps by cortical-like filters. These feature maps are used to form two saliency maps: a static and a dynamic one. These maps are then fused into a spatio-temporal saliency map. The model is evaluated by comparing the salient areas of each frame predicted by the spatio-temporal saliency map to the eye positions of different subjects during a free video viewing experiment with a large database (17000 frames). In parallel, the static and the dynamic pathways are analyzed to understand what is more or less salient and for what type of videos our model is a good or a poor predictor of eye movement. |
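The two-pathway fusion idea in the Marat et al. abstract can be illustrated with a toy sketch. This is not the authors' retina-inspired model (which uses parvocellular/magnocellular decompositions and cortical-like filters); it only shows the general pattern of computing a static map, a dynamic map, normalizing each, and fusing them. All function names and the proxy feature definitions are illustrative.

```python
import numpy as np

def static_saliency(frame):
    # Crude static-pathway proxy: deviation from the mean luminance.
    return np.abs(frame - frame.mean())

def dynamic_saliency(prev_frame, frame):
    # Crude dynamic-pathway proxy: magnitude of inter-frame change.
    return np.abs(frame - prev_frame)

def fuse(static_map, dynamic_map):
    # Normalize each pathway to [0, 1], then combine additively so the
    # fused map highlights regions salient in either pathway.
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return norm(static_map) + norm(dynamic_map)

rng = np.random.default_rng(0)
prev, cur = rng.random((32, 32)), rng.random((32, 32))
fused = fuse(static_saliency(cur), dynamic_saliency(prev, cur))
```

Evaluation in such models typically compares the high-valued regions of `fused` per frame against recorded fixation positions.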
Michi Matsukura; James R. Brockmole; John M. Henderson Overt attentional prioritization of new objects and feature changes during real-world scene viewing Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 835–855, 2009. @article{Matsukura2009, The authors investigated the extent to which a change to an object's colour is overtly prioritized for fixation relative to the appearance of a new object during real-world scene viewing. Both types of scene change captured gaze (and attention) when introduced during a fixation, although colour changes captured attention less often than new objects. Neither of these scene changes captured attention when they occurred during a saccade, but slower and less reliable memory-based mechanisms were nevertheless able to prioritize new objects and colour changes relative to the other stable objects in the scene. These results indicate that online memory for object identity and at least some object features are functional in detecting changes to real-world scenes. Additionally, visual factors such as the salience of onsets and colour changes did not affect prioritization of these events. We discuss these results in terms of current theories of attention allocation within, and online memory representations of, real-world scenes. |
Sonja Engmann; Bernard Marius Hart; Thomas Sieren; Selim Onat; Peter König; Wolfgang Einhäuser Saliency on a natural scene background: Effects of color and luminance contrast add linearly Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 6, pp. 1337–1352, 2009. @article{Engmann2009, In natural vision, shifts in spatial attention are associated with shifts of gaze. Computational models of such overt attention typically use the concept of a saliency map: Normalized maps of center–surround differences are computed for individual stimulus features and added linearly to obtain the saliency map. Although the predictions of such models correlate with fixated locations better than chance, their mechanistic assumptions are less well investigated. Here, we tested one key assumption: Do the effects of different features add linearly or according to a max-type of interaction? We measured the eye position of observers viewing natural stimuli whose luminance contrast and/or color contrast (saturation) increased gradually toward one side. We found that these feature gradients biased fixations toward regions of high contrasts. When two contrast gradients (color and luminance) were superimposed, linear summation of their individual effects predicted their combined effect. This demonstrated that the interaction of color and luminance contrast with respect to human overt attention is—irrespective of the precise model—consistent with the assumption of linearity, but not with a max-type interaction of these features. |
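The linear-versus-max question tested by Engmann et al. is easy to state in map terms. A minimal sketch, assuming two already-normalized single-feature maps (the map names and sizes here are illustrative, not taken from the paper):

```python
import numpy as np

# Two normalized single-feature "saliency" maps with values in [0, 1].
rng = np.random.default_rng(1)
luminance_map = rng.random((16, 16))
color_map = rng.random((16, 16))

# Linear summation: each feature contributes additively at every location.
linear_combined = luminance_map + color_map

# Max-type interaction: only the strongest feature at a location counts.
max_combined = np.maximum(luminance_map, color_map)

# The two rules diverge wherever both features are non-zero: pointwise,
# (a + b) - max(a, b) = min(a, b), so linear summation predicts extra
# salience exactly where both features are moderately strong.
divergence = linear_combined - max_combined
```

Superimposing two contrast gradients, as in the experiment, probes exactly those locations where the two combination rules make different predictions.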
Kris Evans; Caren M. Rotello; Xingshan Li; Keith Rayner Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: Does a cultural difference truly exist? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 2, pp. 276–285, 2009. @article{Evans2009, Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than American participants do. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves show that both sensitivity and response bias were changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory. |
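The ROC analysis used by Evans et al. is a standard recognition-memory technique: confidence ratings for studied ("old") and unstudied ("new") items are converted into cumulative hit and false-alarm rates, one point per confidence criterion, and sensitivity is summarized by the area under that curve. The sketch below shows the generic computation with made-up ratings; the rating scale and data are illustrative, not the paper's.

```python
import numpy as np

def roc_points(old_conf, new_conf, n_levels=6):
    # Cumulative hit and false-alarm rates, sweeping the confidence
    # criterion from "sure old" (n_levels) down to "sure new" (1).
    hits, fas = [], []
    for c in range(n_levels, 0, -1):
        hits.append(np.mean(np.asarray(old_conf) >= c))
        fas.append(np.mean(np.asarray(new_conf) >= c))
    return np.array(fas), np.array(hits)

def auc(fas, hits):
    # Area under the ROC via the trapezoidal rule, anchored at (0, 0)
    # and (1, 1); chance performance gives 0.5, perfect gives 1.0.
    x = np.concatenate(([0.0], fas, [1.0]))
    y = np.concatenate(([0.0], hits, [1.0]))
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

old = [6, 5, 5, 4, 6, 3, 5]   # hypothetical ratings for studied items
new = [2, 1, 3, 2, 4, 1, 2]   # hypothetical ratings for unstudied items
fas, hits = roc_points(old, new)
sensitivity = auc(fas, hits)
```

Comparing such curves across context conditions (and groups) separates genuine sensitivity changes from mere shifts in response bias, which is what lets the study conclude the accuracy decrease did not differ with culture.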
K. A. Ford; Stefan Everling Neural activity in primate caudate nucleus associated with pro- and antisaccades Journal Article In: Journal of Neurophysiology, vol. 102, no. 4, pp. 2334–2341, 2009. @article{Ford2009, The basal ganglia (BG) play a central role in movement and it has been demonstrated that the discharge rate of neurons in these structures are modulated by the behavioral context of a given task. Here we used the antisaccade task, in which a saccade toward a flashed visual stimulus must be inhibited in favor of a saccade to the opposite location, to investigate the role of the caudate nucleus, a major input structure of the BG, in flexible behavior. In this study, we recorded extracellular neuronal activity while monkeys performed pro- and antisaccade trials. We identified two populations of neurons: those that preferred contralateral saccades (CSNs) and those that preferred ipsilateral saccades (ISNs). CSNs increased their firing rates for prosaccades, but not for antisaccades, and ISNs increased their firing rates for antisaccades, but not for prosaccades. We propose a model in which CSNs project to the direct BG pathway, facilitating saccades, and ISNs project to the indirect pathway, suppressing saccades. This model suggests one possible mechanism by which these neuronal populations could be modulating activity in the superior colliculus. |
Tom Foulsham; Jason J. S. Barton; Alan Kingstone; Richard Dewhurst; Geoffrey Underwood Fixation and saliency during search of natural scenes: The case of visual agnosia Journal Article In: Neuropsychologia, vol. 47, no. 8-9, pp. 1994–2003, 2009. @article{Foulsham2009, Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance. |
Tom Foulsham; Geoffrey Underwood Does conspicuity enhance distraction? Saliency and eye landing position when searching for objects Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 6, pp. 1088–1098, 2009. @article{Foulsham2009a, While visual saliency may sometimes capture attention, the guidance of eye movements in search is often dominated by knowledge of the target. How is the search for an object influenced by the saliency of an adjacent distractor? Participants searched for a target amongst an array of objects, with distractor saliency having an effect on response time and on the speed at which targets were found. Saliency did not predict the order in which objects in target-absent trials were fixated. The within-target landing position was distributed around a modal position close to the centre of the object. Saliency did not affect this position, the latency of the initial saccade, or the likelihood of the distractor being fixated, suggesting that saliency affects the allocation of covert attention and not just eye movements. |
Diana J. Gorbet; Lauren E. Sergio The behavioural consequences of dissociating the spatial directions of eye and arm movements Journal Article In: Brain Research, vol. 1284, pp. 77–88, 2009. @article{Gorbet2009, Many of our daily movements use visual information to guide our arms toward objects of interest. Typically, these visually guided movements involve first focusing our gaze on the intended target and then reaching toward the direction of our gaze. The literature on eye-hand coordination provides a great deal of evidence that circuitry in the brain exists which can couple eye and arm movements. Moving both of these effectors towards a common spatial direction may be a default setting used by the brain to simplify the planning of movements. We tested this idea in 20 subjects using two experimental tasks. In a "Standard" condition, the eyes and a cursor were guided to the same spatial location by moving the arm (on a touchpad) and the eyes in the same direction. In a "Dissociated" condition, the eye and cursor were again guided to the same spatial location but the arm was required to move in a direction opposite to the eyes to successfully achieve this goal. In this study, we observed that dissociating the directions of eye and arm movement significantly changed the kinematic properties of both effectors including the latency and peak velocity of eye movements and the curvature of hand-path trajectories. Thus, forcing the brain to plan simultaneous eye and arm movements in different directions alters some of the basic (and often stereotyped) characteristics of motor responses. We suggest that interference with the function of a neural network that couples gaze and reach to congruent spatial locations underlies these kinematic alterations. |
H. S. Greenwald; David C. Knill Cue integration outside central fixation: A study of grasping in depth Journal Article In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009. @article{Greenwald2009, We assessed the usefulness of stereopsis across the visual field by quantifying how retinal eccentricity and distance from the horopter affect humans' relative dependence on monocular and binocular cues about 3D orientation. The reliabilities of monocular and binocular cues both decline with eccentricity, but the reliability of binocular information decreases more rapidly. Binocular cue reliability also declines with increasing distance from the horopter, whereas the reliability of monocular cues is virtually unaffected. We measured how subjects integrated these cues to orient their hands when grasping oriented discs at different eccentricities and distances from the horopter. Subjects relied increasingly less on binocular disparity as targets' retinal eccentricity and distance from the horopter increased. The measured cue influences were consistent with what would be predicted from the relative cue reliabilities at the various target locations. Our results showed that relative reliability affects how cues influence motor control and that stereopsis is of limited use in the periphery and away from the horopter because monocular cues are more reliable in these regions. |
Emmanuel Guzman-Martinez; Parkson Leung; Steven L. Franconeri; Marcia Grabowecky; Satoru Suzuki Rapid eye-fixation training without eyetracking Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 3, pp. 491–496, 2009. @article{GuzmanMartinez2009, Maintenance of stable central eye fixation is crucial for a variety of behavioral, electrophysiological, and neuroimaging experiments. Naive observers in these experiments are not typically accustomed to fixating, either requiring the use of cumbersome and costly eyetracking or producing confounds in results. We devised a flicker display that produced an easily detectable visual phenomenon whenever the eyes moved. A few minutes of training using this display dramatically improved the accuracy of eye fixation while observers performed a demanding spatial attention cuing task. The same amount of training using control displays did not produce significant fixation improvements, and some observers consistently made eye movements to the peripheral attention cue, contaminating the cuing effect. Our results indicate that (1) eye fixation can be rapidly improved in naive observers by providing real-time feedback about eye movements, and (2) our simple flicker technique provides an easy and effective method for providing this feedback. |
Mackenzie G. Glaholt; Eyal M. Reingold The time course of gaze bias in visual decision tasks Journal Article In: Visual Cognition, vol. 17, no. 8, pp. 1228–1243, 2009. @article{Glaholt2009a, In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003), and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task), or the one that they judged to be photographed most recently (recency task). Across experiments and tasks, we demonstrated robust bias towards the chosen item in either gaze duration, gaze frequency or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision response-related explanation. |
Mackenzie G. Glaholt; Mei-Chun Wu; Eyal M. Reingold Predicting preference from fixations Journal Article In: PsychNology Journal, vol. 7, no. 2, pp. 141–158, 2009. @article{Glaholt2009, We measured the strength of the association between looking behaviour and preference. Participants selected the most preferred face out of a grid of 8 faces. Fixation times were correlated with selection on a trial-by-trial basis, as well as with explicit preference ratings. Furthermore, by ranking features based on fixation times, we were able to successfully predict participants' preferences for novel feature combinations in a two-alternative forced choice task. In addition, we obtained a similar pattern of findings in a very different stimulus domain: mock company logos. Our results indicated that fixation times can be used to predict selection in large arrays and they might also be employed to estimate preferences for whole stimuli as well as their constituent features. |
Michael Dorr; Karl R. Gegenfurtner; Erhardt Barth The contribution of low-level features at the centre of gaze to saccade target selection Journal Article In: Vision Research, vol. 49, no. 24, pp. 2918–2926, 2009. @article{Dorr2009, Does it matter what observers are looking at right now to determine where they will look next? We recorded eye movements and computed colour, local orientation, motion, and geometrical invariants on dynamic natural scenes. The distributions of differences between features at successive fixations were compared with those from random scanpaths of varying similarity to natural scanpaths. Although distributions show significant differences, these feature correlations are mainly due to spatio-temporal correlations in natural scenes and a target selection bias, e.g. towards moving objects. Our results indicate that low-level features at fixation contribute little to the choice of the next saccade target. |
Jason A. Droll; C. K. Abbey; Miguel P. Eckstein Learning cue validity through performance feedback Journal Article In: Journal of Vision, vol. 9, no. 2, pp. 1–22, 2009. @article{Droll2009, Targets of a visual search are often not randomly positioned within a scene, but may be more likely to co-occur adjacent to other objects or background properties. Studies on target-cue co-occurrence (e.g. cue validity) suggest that observers can exploit this knowledge to increase performance in detection and localization tasks. However, little is known regarding how observers learn this co-occurrence. The present experiment sought to determine if observers were capable of learning the probability of cue validity, and determine how this learning is shaped by feedback. Separate groups of subjects performed a search task using one of three different feedback conditions providing varying degrees of information: unsupervised feedback, response reinforcement, or supervised feedback. Results show that saccadic and perceptual decisions reflect larger cueing effects as feedback information increased. This suggests that internal signals generated from response selection are insufficient for exploiting cue validity, but that reinforcement may be sufficient. However, final explicit estimates of cue validity were independent of feedback condition, suggesting that implicit behaviors are subject to unique learning constraints. Comparison to an ideal observer reveals that the rate at which participants learned cue validity was suboptimal, which may have impaired performance during initial familiarization with scene statistics. |
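The cue-validity learning studied by Droll et al. can be framed as probability estimation from trial-by-trial feedback. The toy sketch below is a generic supervised-feedback learner, not the paper's model or its ideal observer: it keeps a Beta posterior over the probability that the cue is valid and updates it each time feedback reveals whether the target appeared at the cued location. All names and parameters are illustrative.

```python
import random

def learn_cue_validity(true_validity, n_trials, seed=0):
    # Beta(1, 1) prior over P(target at cued location). Each trial with
    # supervised feedback reveals whether the cue was valid; the posterior
    # mean serves as the learner's running estimate of cue validity.
    rng = random.Random(seed)
    valid, invalid = 1, 1
    for _ in range(n_trials):
        if rng.random() < true_validity:
            valid += 1
        else:
            invalid += 1
    return valid / (valid + invalid)

estimate = learn_cue_validity(true_validity=0.8, n_trials=500)
```

Under weaker feedback regimes (response reinforcement or unsupervised feedback), the update signal is noisier or absent, which is one way to understand why cueing effects in the study grew with the amount of feedback information.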
Derek A. Hamilton; Travis E. Johnson; Edward S. Redhead; Steven P. Verney Control of rodent and human spatial navigation by room and apparatus cues Journal Article In: Behavioural Processes, vol. 81, no. 2, pp. 154–169, 2009. @article{Hamilton2009, A growing body of literature indicates that rats prefer to navigate in the direction of a goal in the environment (directional responding) rather than to the precise location of the goal (place navigation). This paper provides a brief review of this literature with an emphasis on recent findings in the Morris water task. Four experiments designed to extend this work to humans in a computerized, virtual Morris water task are also described. Special emphasis is devoted to how directional responding and place navigation are influenced by room and apparatus cues, and how these cues control distinct components of navigation to a goal. Experiments 1 and 2 demonstrate that humans, like rats, perform directional responses when cues from the apparatus are present, while Experiment 3 demonstrates that place navigation predominates when apparatus cues are eliminated. In Experiment 4, an eyetracking system measured gaze location in the virtual environment dynamically as participants navigated from a start point to the goal. Participants primarily looked at room cues during the early segment of each trial, but primarily focused on the apparatus as the trial progressed, suggesting distinct, sequential stimulus functions. Implications for computational modeling of navigation in the Morris water task and related tasks are discussed. |
Robin Hawes Vision and reality: Relativity in art Journal Article In: Digital Creativity, vol. 20, no. 3, pp. 177–186, 2009. @article{Hawes2009, Artist and researcher, Robin Hawes, presents a recently completed art/science collaboration which examined the processes undertaken by the eye in providing sensory data to the brain and aimed to explore the internally constructive and idiosyncratic aspects of visual perception. With the physiology of the retina providing inconsistent quality of information across our field of view, the project set out to reveal the disparity between the visual information gathered by our eyes and the conscious picture of ‘reality' formed in our minds. The paper will map out the psychological, physiological and philosophical basis for the research, as well as presenting images produced by the project. In essence, each time someone contemplates a work of art, the work of art is re-constructed ‘internally'. This project set out, in part at least, to make ‘visible' this hitherto internal, idiosyncratic, unique and unshared neurological event. |
John M. Henderson; Myriam Chanceaux; Tim J. Smith The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements Journal Article In: Journal of Vision, vol. 9, no. 1, pp. 1–8, 2009. @article{Henderson2009b, We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes. |
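Of the three clutter indices used by Henderson et al., edge density is the simplest to illustrate: the proportion of pixels whose local gradient magnitude exceeds a threshold. The sketch below is a generic version of that idea (threshold value, image sizes, and the gradient operator are illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def edge_density(image, threshold=0.1):
    # Edge density: fraction of pixels whose gradient magnitude exceeds
    # a threshold - one simple image-based index of visual clutter.
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude > threshold))

# A half-black / half-white image has edges only along one boundary...
img = np.zeros((20, 20))
img[:, 10:] = 1.0
simple = edge_density(img)

# ...whereas pixelwise noise produces gradients nearly everywhere.
rng = np.random.default_rng(2)
cluttered = edge_density(rng.random((20, 20)))
```

Correlating such an index with response times, error rates, or fixation counts is the kind of analysis that supports the study's conclusion that clutter can proxy for search set size in real-world scenes.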
John M. Henderson; George L. Malcolm; Charles Schandl Searching in the dark: Cognitive relevance drives attention in real-world scenes Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 5, pp. 850–856, 2009. @article{Henderson2009, We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes. |
John M. Henderson; Tim J. Smith How are eye fixation durations controlled during scene viewing? Further evidence from a scene onset delay paradigm Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1055–1082, 2009. @article{Henderson2009a, Recent research on eye movements during scene viewing has focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. In two scene memorization and one visual search experiments, the scene was removed from view during critical fixations for a predetermined delay, and then restored following the delay. Experiment 1 compared filled (pattern mask) and unfilled (grey field) delays. Experiment 2 compared random to blocked delays. Experiment 3 extended the results to a visual search task. The results demonstrate that fixation durations in scene viewing comprise two fixation populations. One population remains relatively constant across delay, and the second population increases with scene onset delay. The results are consistent with a mixed eye movement control model that incorporates an autonomous control mechanism with process monitoring. The results suggest that a complete gaze control model will have to account for both fixation location and fixation duration. |
Valerie Higenell; Brian J. White; Joshua R. Hwang; Douglas P. Munoz Localizing the neural substrate of reflexive covert orienting Journal Article In: Journal of Eye Movement Research, vol. 6, no. 1, pp. 1–14, 2009. @article{Higenell2009, The capture of covert spatial attention by salient visual events influences subsequent gaze behavior. A task-irrelevant stimulus (cue) can reduce (attention capture, AC) or prolong (inhibition of return, IOR) saccade reaction time to a subsequent target stimulus, depending on the cue-target delay. Here we investigated the mechanisms that underlie the sensory-based account of AC/IOR by manipulating the visual processing stage where the cue and target interact. In Experiment 1, liquid crystal shutter goggles were used to test whether AC/IOR occur at a monocular versus binocular processing stage (before versus after signals from both eyes converge). In Experiment 2, we tested whether visual orientation-selective mechanisms are critical for AC/IOR by using oriented Gabor stimuli. We found that the magnitude of AC and IOR was not different between monocular and interocular viewing conditions, or between iso- and ortho-oriented cue-target interactions. The results suggest that the visual mechanisms that contribute to AC/IOR arise at an orientation-independent binocular processing stage. |
2008 |
S. M. Emrich; J. D. N. Ruppel; N. Al-Aidroos; J. Pratt; S. Ferber Out with the old: Inhibition of old items in a preview search is limited Journal Article In: Perception and Psychophysics, vol. 70, no. 8, pp. 1552–1557, 2008. @article{EMRICH2008, If some of the distractors in a visual search task are previewed prior to the presentation of the remaining distractors and the target, search time is reduced relative to when all of the items are displayed simultaneously. Here, we tested whether the ability to preferentially search new items during such a preview search is limited. We confirmed previous studies: The proportion of fixations on old items was significantly less than chance. However, the probability of fixating old locations was negatively affected by increasing the number of previewed distractors, suggesting that inhibition is limited to a small number of old items. Furthermore, the ability to inhibit old locations was limited to the first four fixations, indicating that by the fifth fixation, the resources required to sustain inhibition had been depleted. Together, these findings suggest that inhibition of old items in a preview search is a top-down mediated process dependent on capacity-limited cognitive resources. |
R. Godijn; A. F. Kramer The effect of attentional demands on the antisaccade cost Journal Article In: Perception and Psychophysics, vol. 70, no. 5, pp. 795–806, 2008. @article{GODIJN2008, In the present study, we examined the effect of attentional demands on the antisaccade cost (the latency difference between antisaccades and prosaccades). Participants performed a visual search for a target digit and were required to execute a saccade toward (prosaccade) or away from (antisaccade) the target. The results of Experiment 1 revealed that the antisaccade cost was greater when the target was premasked (i.e., presented through the removal of line segments) than when it appeared as an onset. Furthermore, in premasked target conditions, the antisaccade cost was increased by the presentation of onset distractors. The results of Experiment 2 revealed that the antisaccade cost was greater in a difficult search task (a numeral 2 among 5s) than in an easy one (a 2 among 7s). The findings provide evidence that attentional demands increase the antisaccade cost. We propose that the attentional demands of the search task interfere with the attentional control required to select the antisaccade goal. |
Jay Pratt; Bas Neggers Inhibition of return in single and dual tasks: Examining saccadic, keypress, and pointing responses Journal Article In: Perception and Psychophysics, vol. 70, no. 2, pp. 257–265, 2008. @article{Pratt2008, Two experiments are reported in which inhibition of return (IOR) was examined with single-response tasks (either manual responses alone or saccadic responses alone) and dual-response tasks (simultaneous manual and saccadic responses). The first experiment, using guided limb movements that require considerable spatial information, showed more IOR for saccades than for pointing responses. In addition, saccadic IOR was reduced with concurrent pointing movements, but manual IOR was not affected by concurrent saccades. Importantly, the arm movements had not yet started at the time of saccade initiation, indicating that the influence on saccadic IOR is due to arm-movement preparation. In the second experiment, using localization keypress responses that required only minimal spatial information, greater IOR was again found for saccadic than for manual responses, but no effect of concurrent movements was found. These findings add further support to a dissociation between oculomotor and skeletal-motor IOR. Moreover, the results show that the preparation of manual responses tends to mediate saccadic behavior, but only when the manual responses require high levels of spatial accuracy, and that the superior colliculus is the likely neural substrate integrating IOR for eye and arm movements. |
Heinz-Werner Priess; Sabine Born; Ulrich Ansorge Inhibition of return after color singletons Journal Article In: Journal of Eye Movement Research, vol. 5, no. 5, pp. 1–12, 2008. @article{Priess2008, Inhibition of return (IOR) is the faster selection of hitherto unattended than previously attended positions. Some previous studies failed to find evidence for IOR after attention capture by color singletons. Others, however, did report IOR effects after color singletons. The current study examines the role of cue relevance for obtaining IOR effects. By using a potentially more sensitive method, saccadic IOR, we tested and found IOR after relevant color singleton cues that required an attention shift (Experiment 1). In contrast, irrelevant color singletons failed to produce reliable IOR effects in Experiment 2. Also, Experiment 2 rules out an alternative explanation of our IOR findings in terms of masking. We discuss our results in light of pertinent theories of IOR. |
Cliodhna Quigley; Selim Onat; Sue Harding; Martin Cooke; Peter König Audio-visual integration during overt visual attention Journal Article In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 4, 2008. @article{Quigley2008, How do different sources of information arising from different modalities interact to control where we look? To answer this question with respect to real-world operational conditions we presented natural images and spatially localized sounds in (V)isual, Audiovisual (AV) and (A)uditory conditions and measured subjects' eye-movements. Our results demonstrate that eye-movements in AV conditions are spatially biased towards the part of the image corresponding to the sound source. Interestingly, this spatial bias is dependent on the probability of a given image region to be fixated (saliency) in the V condition. This indicates that fixation behaviour during the AV conditions is the result of an integration process. Regression analysis shows that this integration is best accounted for by a linear combination of unimodal saliencies. |
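The regression result in the Quigley et al. entry above — audiovisual fixation behaviour best accounted for by a linear combination of unimodal saliencies — can be sketched in a few lines. The weights and map shapes below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def combined_salience(visual_map, auditory_map, w_v=0.7, w_a=0.3):
    """Linearly combine unimodal salience maps.

    Each map is normalized to sum to 1 so it can be read as a
    fixation-probability distribution; w_v and w_a are illustrative
    placeholder weights, not values from Quigley et al. (2008).
    """
    v = np.asarray(visual_map, dtype=float)
    a = np.asarray(auditory_map, dtype=float)
    return w_v * v / v.sum() + w_a * a / a.sum()

# A localized sound source biases the combined map toward its
# location, mirroring the reported spatial bias in AV conditions.
visual = np.array([[1.0, 1.0], [1.0, 1.0]])    # flat visual salience
auditory = np.array([[0.0, 4.0], [0.0, 0.0]])  # sound localized top-right
av = combined_salience(visual, auditory)
```

With these weights the combined map still sums to 1 and its peak shifts to the cell containing the sound source.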
Christoph Rasche; Karl R. Gegenfurtner Orienting during gaze guidance in a letter-identification task Journal Article In: Journal of Eye Movement Research, vol. 3, no. 4, pp. 1–10, 2008. @article{Rasche2008, The idea of gaze guidance is to lead a viewer's gaze through a visual display in order to facilitate the viewer's search for specific information in a least-obtrusive manner. This study investigates saccadic orienting when a viewer is guided in a fast-paced, low-contrast letter identification task. Despite the task's difficulty, and although guiding cues were adjusted to gaze eccentricity, observers preferred attentional over saccadic shifts to obtain a letter identification judgment; and when a saccade was carried out, its constant error was 50%. From these results we derive a number of design recommendations for the process of gaze guidance. |
Xingshan Li; Gordon D. Logan Object-based attention in Chinese readers of Chinese words: Beyond Gestalt principles Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 5, pp. 945–949, 2008. @article{Li2008, Most object-based attention studies use objects defined bottom-up by Gestalt principles. In the present study, we defined objects top-down, using Chinese words that were seen as objects by skilled readers of Chinese. Using a spatial cuing paradigm, we found that a target character was detected faster if it was in the same word as the cued character than if it was in a different word. Because there were no bottom-up factors that distinguished the words, these results showed that objects defined by subjects' knowledge (in this case, lexical information) can also constrain the deployment of attention. |
Sebastian Pannasch; Jens R. Helmert; Katharina Roth; Ann-Katrin Herbold; Henrik Walter Visual fixation durations and saccade amplitudes: Shifting relationship in a variety of conditions Journal Article In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–19, 2008. @article{Pannasch2008, Is there any relationship between visual fixation durations and saccade amplitudes in free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3) and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in the durations of fixations and a decrease for saccadic amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance of the two modes of visual information processing. |
Alicia Peltsch; Aaron B. Hoffman; I. T. Armstrong; Giovanna Pari; D. P. Munoz Saccadic impairments in Huntington's disease Journal Article In: Experimental Brain Research, vol. 186, no. 3, pp. 457–469, 2008. @article{Peltsch2008, Huntington's disease (HD), a progressive neurological disorder involving degeneration in basal ganglia structures, leads to abnormal control of saccadic eye movements. We investigated whether saccadic impairments in HD (N = 9) correlated with clinical disease severity to determine the relationship between saccadic control and basal ganglia pathology. HD patients and age/sex-matched controls performed various eye movement tasks that required the execution or suppression of automatic or voluntary saccades. In the "immediate" saccade tasks, subjects were instructed to look either toward (pro-saccade) or away from (anti-saccade) a peripheral stimulus. In the "delayed" saccade tasks (pro-/anti-saccades; delayed memory-guided sequential saccades), subjects were instructed to wait for a central fixation point to disappear before initiating saccades towards or away from a peripheral stimulus that had appeared previously. In all tasks, mean saccadic reaction time was longer and more variable amongst the HD patients. On immediate anti-saccade trials, the occurrence of direction errors (pro-saccades initiated toward stimulus) was higher in the HD patients. In the delayed tasks, timing errors (eye movements made prior to the go signal) were also greater in the HD patients. The increased variability in saccadic reaction times and occurrence of errors (both timing and direction errors) were highly correlated with disease severity, as assessed with the Unified Huntington's Disease Rating Scale, suggesting that saccadic impairments worsen as the disease progresses. Thus, performance on voluntary saccade paradigms provides a sensitive indicator of disease progression in HD. |
Angélica Pérez Fornos; Jörg Sommerhalder; Alexandre Pittard; Avinoam B. Safran; Marco Pelizzone Simulation of artificial vision: IV. Visual information required to achieve simple pointing and manipulation tasks Journal Article In: Vision Research, vol. 48, no. 16, pp. 1705–1718, 2008. @article{PerezFornos2008, Retinal prostheses attempt to restore some amount of vision to totally blind patients. Vision evoked this way will, however, be severely constrained by several factors (e.g., the size of the implanted device and the number of stimulating contacts). We used simulations of artificial vision to study how such restrictions of the amount of visual information provided would affect performance on simple pointing and manipulation tasks. Five normal subjects participated in the study. Two tasks were used: pointing at random targets (LEDs task) and arranging wooden chips according to a given model (CHIPs task). Both tasks had to be completed while the amount of visual information was limited by reducing the resolution (number of pixels) and modifying the size of the effective field of view. All images were projected on a 10° × 7° viewing area, stabilised at a given position on the retina. In central vision, the time required to accomplish the tasks remained systematically longer than with normal vision. Accuracy was close to normal at high image resolutions and decreased at 500 pixels or below, depending on the field of view used. Subjects adapted quite rapidly (in fewer than 15 sessions) to performing both tasks in eccentric vision (15° in the lower visual field), achieving after adaptation performances close to those observed in central vision. These results demonstrate that, if vision is restricted to a small visual area stabilised on the retina (as would be the case with a retinal prosthesis), the perception of several hundred retinotopically arranged phosphenes is still needed to restore accurate but slow performance on pointing and manipulation tasks. Considering that present prototypes afford fewer than 100 stimulation contacts, and that our simulations represent the most favourable visual input conditions the user might experience, further development is required to achieve optimal rehabilitation prospects. |
Casimir J. H. Ludwig; Adam Ranson; Iain D. Gilchrist Oculomotor capture by transient events: A comparison of abrupt onsets, offsets, motion, and flicker Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008. @article{Ludwig2008, Attentional and oculomotor capture by some salient visual event gives insight into what types of dynamic signals the human orienting system is sensitive to. We examined the sensitivity of the saccadic eye movement system to 4 types of dynamic, but task-irrelevant, visual events: abrupt onset, abrupt offset, motion onset and flicker onset. We varied (1) the primary task (contrast vs. motion discrimination) and (2) the amount of prior knowledge of the location of the dynamic event. Interference from the irrelevant events was quantified using a discrimination threshold metric. When the primary task involved contrast discrimination, all four events disrupted performance approximately equally, including the sudden disappearance of an old object. However, when motion was the task-relevant dimension, abrupt onsets and offsets did not disrupt performance at all, but motion onset had a strong effect. Providing more spatial certainty to observers decreased the amount of direct oculomotor capture but nevertheless impaired performance. We conclude that oculomotor capture is predominantly contingent upon the channel the observer monitors in order to perform the primary visual task. |
Amy D. Lykins; Marta Meana; Gregory P. Strauss Sex differences in visual attention to erotic and non-erotic stimuli Journal Article In: Archives of Sexual Behavior, vol. 37, no. 2, pp. 219–228, 2008. @article{Lykins2008, It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by both men and women. Men looked at opposite sex figures significantly longer than did women, and women looked at same sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite sex figures as compared to same sex figures, whereas women appeared to disperse their attention evenly between opposite and same sex figures. These differences, however, were not limited to erotic images but evidenced in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence. |
Antonio F. Macedo; Michael D. Crossland; Gary S. Rubin The effect of retinal image slip on peripheral visual acuity Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–11, 2008. @article{Macedo2008, Retinal image slip promoted by fixational eye movements prevents image fading in central vision. However, in the periphery a higher amount of movement is necessary to prevent this fading. We assessed the effect of different levels of retinal image slip in peripheral vision by measuring peripheral visual acuity (VA), with and without crowding, while modulating retinal eccentricity. Gaze position was monitored throughout using an infrared eyetracker. The target was presented for up to 500 msec, either with no retinal image slip, with reduced retinal slip, or with increased retinal image slip. Without crowding, peripheral visual acuity improved with increased retinal image slip compared with the other two conditions. In contrast, under crowded conditions, peripheral visual acuity decreased markedly with increased retinal image slip. Therefore, the effects of increased retinal image slip are different for simple (noncrowded) and more complex (crowded) visual tasks. These results provide further evidence for the importance of fixation stability on complex visual tasks when using the peripheral retina. |
George L. Malcolm; Linda J. Lanyon; Andrew J. B. Fugard; Jason J. S. Barton Scan patterns during the processing of facial expression versus identity: An exploration of task-driven and stimulus-driven effects Journal Article In: Journal of Vision, vol. 8, no. 8, pp. 1–9, 2008. @article{Malcolm2008, Perceptual studies suggest that processing facial identity emphasizes upper-face information, whereas processing expressions of anger or happiness emphasizes the lower-face. The two goals of the present study were to determine (a) if the distributions of eye fixations reflect these upper/lower-face biases, and (b) whether this bias is task- or stimulus-driven. We presented a target face followed by a probe pair of morphed faces, neither of which was identical to the target. Subjects judged which of the pair was more similar to the target face while eye movements were recorded. In Experiment 1 the probe pair always differed from each other in both identity and expression on each trial. In one block subjects judged which probe face was more similar to the target face in identity, and in a second block subjects judged which probe face was more similar to the target face in expression. In Experiment 2 the two probe faces differed in either expression or identity, but not both. Subjects were not informed which dimension differed, but simply asked to judge which probe face was more similar to the target face. We found that subjects scanned the upper-face more than the lower-face during the identity task but the lower-face more than the upper-face during the expression task in Experiment 1 (task-driven effects), with significantly less variation in bias in Experiment 2 (stimulus-driven effects). We conclude that fixations correlate with regional variations of diagnostic information in different processing tasks, but that these reflect top-down task-driven guidance of information acquisition more than stimulus-driven effects. |
Jason S. McCarley; Christopher Grant State-trace analysis of the effects of a visual illusion on saccade amplitudes and perceptual judgments Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 5, pp. 1008–1014, 2008. @article{McCarley2008, Visual illusions often appear to have a larger influence on subjective judgments than on visuomotor behavior. Although this effect has been taken as evidence for multiple estimates of stimulus size in the visual brain, dissociations between subjective judgments and visuomotor measures can frequently be reconciled with a single-estimate model. To circumvent this difficulty, we used state-trace analysis in a pair of experiments to examine the effects of the Müller-Lyer illusion on subjective length estimates, voluntary saccade amplitudes, and reflexive saccade amplitudes. All dependent measures were affected by the illusion. However, state-trace analyses revealed nonmonotonic relationships among all three variables, a pattern inconsistent with the possibility of a single underlying estimate of stimulus size. |
David Melcher Dynamic, object-based remapping of visual features in trans-saccadic perception Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008. @article{Melcher2008, Saccadic eye movements can dramatically change the location in which an object is projected onto the retina. One mechanism that might potentially underlie the perception of stable objects, despite the occurrence of saccades, is the "remapping" of receptive fields around the time of saccadic eye movements. Here we examined two possible models of trans-saccadic remapping of visual features: (1) spatiotopic coordinates that remain constant across saccades or (2) an object-based remapping in retinal coordinates. We used form adaptation to test "object" and "space" based predictions for an adapter that changed spatial and/or retinal location due to eye movements, object motion or manual displacement using a computer mouse. The predictability and speed of the object motion was also manipulated. The main finding was that maximum transfer of the form aftereffect in retinal coordinates occurred when there was a saccade and when the object motion was attended and predictable. A small transfer was also found when observers moved the object across the screen using a computer mouse. The overall pattern of results is consistent with the theory of object-based remapping for salient stimuli. Thus, the active updating of the location and features of attended objects may play a role in perceptual stability. |
Matthew S. Peterson; Melissa R. Beck; Jason H. Wong Were you paying attention to where you looked? The role of executive working memory in visual search Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 2, pp. 372–377, 2008. @article{Peterson2008, Recent evidence has indicated that performing a working memory task that loads executive working memory leads to less efficient visual search (Han & Kim, 2004). We explored the role that executive functioning plays in visual search by examining the pattern of eye movements while participants performed a search task with or without a secondary executive working memory task. Results indicate that executive functioning plays two roles in visual search: the identification of objects and the control of the disengagement of attention. |
Matthew H. Phillips; Jay A. Edelman The dependence of visual scanning performance on search direction and difficulty Journal Article In: Vision Research, vol. 48, no. 21, pp. 2184–2192, 2008. @article{Phillips2008, Phillips and Edelman [Phillips, M. H., & Edelman, J. A. (2008). The dependence of visual scanning performance on saccade, fixation, and perceptual metrics. Vision Research, 48(7), 926-936] presented evidence that performance variability in a visual scanning task depends on oculomotor variables related to saccade amplitude rather than fixation duration, and that saccade-related metrics reflect perceptual span. Here, we extend these results by showing that even for extremely difficult searches, trial-to-trial performance variability still depends on saccade-related metrics and not fixation duration. We also show that scanning speed is faster for horizontal than for vertical searches, and that these differences again derive from differences in saccade-based metrics and not from differences in fixation duration. We find perceptual span to be larger for horizontal than vertical searches, and approximately symmetric about the line of gaze. |
Mark B. Neider; Gregory J. Zelinsky Exploring set size effects in scenes: Identifying the objects of search Journal Article In: Visual Cognition, vol. 16, no. 1, pp. 1–10, 2008. @article{Neider2008, Traditional search paradigms utilize simple displays, allowing a precise determination of set size. However, objects in realistic scenes are largely uncountable, and typically visually and semantically complex. Can traditional conceptions of set size be applied to search in realistic scenes? Observers searched quasirealistic scenes for a tank target hidden among tree distractors varying in number and density. Search efficiency improved as trees were added to the display, a reverse set size effect. Eye movement analyses revealed that observers fixated individual trees when the set size was small, and the open regions between trees when the set size was large. Rather than a set size consisting of objectively countable objects, we interpret these data as evidence for a restricted functional set size consisting of idiosyncratically defined objects of search. Observers exploit low-level perceptual grouping processes and high-level semantic scene constraints to dynamically create objects that are appropriate to a given search task. |
M. Niwa; J. Ditterich Perceptual decisions between multiple directions of visual motion Journal Article In: Journal of Neuroscience, vol. 28, no. 17, pp. 4435–4445, 2008. @article{Niwa2008, Previous studies and models of perceptual decision making have largely focused on binary choices. However, we often have to choose from multiple alternatives. To study the neural mechanisms underlying multialternative decision making, we have asked human subjects to make perceptual decisions between multiple possible directions of visual motion. Using a multicomponent version of the random-dot stimulus, we were able to control experimentally how much sensory evidence we wanted to provide for each of the possible alternatives. We demonstrate that this task provides a rich quantitative dataset for multialternative decision making, spanning a wide range of accuracy levels and mean response times. We further present a computational model that can explain the structure of our behavioral dataset. It is based on the idea of a race between multiple integrators to a decision threshold. Each of these integrators accumulates net sensory evidence for a particular choice, provided by linear combinations of the activities of decision-relevant pools of sensory neurons. |
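The model described in the Niwa and Ditterich entry above — a race between multiple integrators, each accumulating net sensory evidence toward a decision threshold — can be illustrated with a minimal simulation. All parameter names and values here are illustrative assumptions, not the fitted model from the paper:

```python
import random

def race_model_trial(drifts, threshold=1.0, noise=0.1, dt=0.001,
                     max_t=5.0, seed=None):
    """Simulate one trial of a race between evidence accumulators.

    Each accumulator integrates its mean drift (net sensory evidence
    for one alternative) plus Gaussian noise; the first to reach the
    threshold determines the choice and the response time. Parameter
    values are placeholders, not those of Niwa & Ditterich (2008).
    """
    rng = random.Random(seed)
    x = [0.0] * len(drifts)  # accumulator states, one per alternative
    t = 0.0
    while t < max_t:
        for i, drift in enumerate(drifts):
            x[i] += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
            if x[i] >= threshold:
                return i, t  # winning alternative and response time
        t += dt
    return None, max_t  # no accumulator reached threshold in time

# Strong evidence for alternative 0 should win the race quickly.
choice, rt = race_model_trial([3.0, 0.1, 0.1], noise=0.05, seed=42)
```

Raising the evidence for one alternative in this sketch both increases its probability of winning and shortens the mean response time, which is the qualitative pattern the paper's behavioral dataset spans.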
Lauri Nummenmaa; Jussi Hirvonen; Riitta Parkkola; Jari K. Hietanen Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy Journal Article In: NeuroImage, vol. 43, no. 3, pp. 571–580, 2008. @article{Nummenmaa2008, Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested whether emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other people's mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy. |
Thomas Nyffeler; Dario Cazzoli; Pascal Wurtz; Mathias Lüthi; Roman Von Wartburg; Silvia Chaves; Anouk Déruaz; Christian W. Hess; René M. Müri Neglect-like visual exploration behaviour after theta burst transcranial magnetic stimulation of the right posterior parietal cortex Journal Article In: European Journal of Neuroscience, vol. 27, no. 7, pp. 1809–1813, 2008. @article{Nyffeler2008, The right posterior parietal cortex (PPC) is critically involved in visual exploration behaviour, and damage to this area may lead to neglect of the left hemispace. We investigated whether neglect-like visual exploration behaviour could be induced in healthy subjects using theta burst repetitive transcranial magnetic stimulation (rTMS). To this end, one continuous train of theta burst rTMS was applied over the right PPC in 12 healthy subjects prior to a visual exploration task where colour photographs of real-life scenes were presented on a computer screen. In a control experiment, stimulation was also applied over the vertex. Eye movements were measured, and the distribution of visual fixations in the left and right halves of the screen was analysed. In comparison to the performance of 28 control subjects without stimulation, theta burst rTMS over the right PPC, but not the vertex, significantly decreased cumulative fixation duration in the left screen-half and significantly increased cumulative fixation duration in the right screen-half for a time period of 30 min. These results suggest that theta burst rTMS is a reliable method of inducing transient neglect-like visual exploration behaviour. |
Areh Mikulić; Michael C. Dorris Temporal and spatial allocation of motor preparation during a mixed-strategy game Journal Article In: Journal of Neurophysiology, vol. 100, no. 4, pp. 2101–2108, 2008. @article{Mikulic2008, Adopting a mixed response strategy in competitive situations can prevent opponents from exploiting predictable play. What drives stochastic action selection is unclear given that choice patterns suggest that, on average, players are indifferent to available options during mixed-strategy equilibria. To gain insight into this stochastic selection process, we examined how motor preparation was allocated during a mixed-strategy game. If selection processes on each trial reflect a global indifference between options, then there should be no bias in motor preparation (unbiased preparation hypothesis). If, however, differences exist in the desirability of options on each trial then motor preparation should be biased toward the preferred option (biased preparation hypothesis). We tested between these alternatives by examining how saccade preparation was allocated as human subjects competed against an adaptive computer opponent in an oculomotor version of the game "matching pennies." Subjects were free to choose between two visual targets using a saccadic eye movement. Saccade preparation was probed by occasionally flashing a visual distractor at a range of times preceding target presentation. The probability that a distractor would evoke a saccade error, and when it failed to do so, the probability of choosing each of the subsequent targets quantified the temporal and spatial evolution of saccade preparation, respectively. Our results show that saccade preparation became increasingly biased as the time of target presentation approached. Specifically, the spatial locus to which saccade preparation was directed varied from trial to trial, and its time course depended on task timing. |
William L. Miller; Vincenzo Maffei; Gianfranco Bosco; Marco Iosa; Myrka Zago; Emiliano Macaluso; Francesco Lacquaniti Vestibular nuclei and cerebellum put visual gravitational motion in context Journal Article In: Journal of Neurophysiology, vol. 99, no. 4, pp. 1969–1982, 2008. @article{Miller2008, Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion. |
Hillel Aviezer; Ran R. Hassin; Jennifer D. Ryan; Cheryl L. Grady; Josh Susskind; Adam Anderson; Morris Moscovitch; Shlomo Bentin Angry, disgusted, or afraid? Studies on the malleability of emotion perception Journal Article In: Psychological Science, vol. 19, no. 7, pp. 724–732, 2008. @article{Aviezer2008, Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly "read out" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels. |
Brian P. Bailey; Shamsi T. Iqbal Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 14, no. 4, pp. 1–28, 2008. @article{Bailey2008, Notifications can have reduced interruption cost if delivered at moments of lower mental workload during task execution. Cognitive theorists have speculated that these moments occur at subtask boundaries. In this article, we empirically test this speculation by examining how workload changes during execution of goal-directed tasks, focusing on regions between adjacent chunks within the tasks, that is, the subtask boundaries. In a controlled experiment, users performed several interactive tasks while their pupil dilation, a reliable measure of workload, was continuously measured using an eye tracking system. The workload data was extracted from the pupil data, precisely aligned to the corresponding task models, and analyzed. Our principal findings include (i) workload changes throughout the execution of goal-directed tasks; (ii) workload exhibits transient decreases at subtask boundaries relative to the preceding subtasks; (iii) the amount of decrease tends to be greater at boundaries corresponding to the completion of larger chunks of the task; and (iv) different types of subtasks induce different amounts of workload. We situate these findings within resource theories of attention and discuss important implications for interruption management systems. |
Daniel Baldauf; Heiner Deubel Visual attention during the preparation of bimanual movements Journal Article In: Vision Research, vol. 48, no. 4, pp. 549–563, 2008. @article{Baldauf2008, We investigated the deployment of visual attention during the preparation of bimanually coordinated actions. In a dual-task paradigm participants had to execute bimanual pointing movements to different peripheral locations, and to identify target letters that had been briefly presented at various peripheral locations during the latency period before movement initialisation. The discrimination targets appeared either at the movement goal of the left or the right hand, or at other locations that were not movement-relevant in the particular trial. Performance in the letter discrimination task served as a measure for the distribution of visual attention during the action preparation. The results showed that the goal positions of both hands are selected before movement onset, revealing a superior discrimination performance at the action-relevant locations (Experiment 1). Selection-for-action in the preparation of bimanual movements involved attention being spread to both goal locations in parallel, independently of whether the targets had been cued by colour or semantically (Experiment 2). A comparison with perceptual performance in unimanual reaching suggested that the total amount of attentional resources that are distributed over the visual field depended on the demands of the primary motor task, with more attentional resources being deployed for the selection of multiple goal positions than for the selection of a single goal (Experiment 3). |
Luke Barrington; Tim K. Marks; Janet Hui-wen Hsiao; Garrison W. Cottrell NIMBLE: A kernel density model of saccade-based visual memory Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 17–17, 2008. @article{Barrington2008, We present a Bayesian version of J. Lacroix, J. Murre, and E. Postma's (2006) Natural Input Memory (NIM) model of saccadic visual memory. Our model, which we call NIMBLE (NIM with Bayesian Likelihood Estimation), uses a cognitively plausible image sampling technique that provides a foveated representation of image patches. We conceive of these memorized image fragments as samples from image class distributions and model the memory of these fragments using kernel density estimation. Using these models, we derive class-conditional probabilities of new image fragments and combine individual fragment probabilities to classify images. Our Bayesian formulation of the model extends easily to handle multi-class problems. We validate our model by demonstrating human levels of performance on a face recognition memory task and high accuracy on multi-category face and object identification. We also use NIMBLE to examine the change in beliefs as more fixations are taken from an image. Using fixation data collected from human subjects, we directly compare the performance of NIMBLE's memory component to human performance, demonstrating that using human fixation locations allows NIMBLE to recognize familiar faces with only a single fixation. |
Mark W. Becker; Ian P. Rasmussen Guidance of attention to objects and locations by long-term memory of natural scenes Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 6, pp. 1325–1338, 2008. @article{Becker2008, Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to insure that limited attentional capacity is allocated efficiently rather than being squandered. |
Eva Belke; Glyn W. Humphreys; Derrick G. Watson; Antje S. Meyer; Anna L. Telling Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load Journal Article In: Perception and Psychophysics, vol. 70, no. 8, pp. 1444–1458, 2008. @article{Belke2008, Moores, Laiti, and Chelazzi (2003) found semantic interference from associate competitors during visual object search, demonstrating the existence of top-down semantic influences on the deployment of attention to objects. We examined whether effects of semantically related competitors (same-category members or associates) interacted with the effects of perceptual or cognitive load. We failed to find any interaction between competitor effects and perceptual load. However, the competitor effects increased significantly when participants were asked to retain one or five digits in memory throughout the search task. Analyses of eye movements and viewing times showed that a cognitive load did not affect the initial allocation of attention but rather the time it took participants to accept or reject an object as the target. We discuss the implications of our findings for theories of conceptual short-term memory and visual attention. |
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard A role for spatial and nonspatial working memory processes in visual search Journal Article In: Experimental Psychology, vol. 55, no. 5, pp. 301–312, 2008. @article{Anderson2008b, Searching a cluttered visual scene for a specific item of interest can take several seconds to perform if the target item is difficult to discriminate from surrounding items. Whether working memory processes are utilized to guide the path of attentional selection during such searches remains under debate. Previous studies have found evidence to support a role for spatial working memory in inefficient search, but the role of nonspatial working memory remains unclear. Here, we directly compared the role of spatial and nonspatial working memory for both an efficient and inefficient search task. In Experiment 1, we used a dual-task paradigm to investigate the effect of performing visual search within the retention interval of a spatial working memory task. Importantly, by incorporating two working memory loads (low and high) we were able to make comparisons between dual-task conditions, rather than between dual-task and single-task conditions. This design allows any interference effects observed to be attributed to changes in memory load, rather than to nonspecific effects related to "dual-task" performance. We found that the efficiency of the inefficient search task declined as spatial memory load increased, but that the efficient search task remained efficient. These results suggest that spatial memory plays an important role in inefficient but not efficient search. In Experiment 2, participants performed the same visual search tasks within the retention interval of visually matched spatial and verbal working memory tasks. Critically, we found comparable dual-task interference between inefficient search and both the spatial and nonspatial working memory tasks, indicating that inefficient search recruits working memory processes common to both domains. |
Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven Cue validity effects in response preparation: A pupillometric study Journal Article In: Brain Research, vol. 1196, pp. 94–102, 2008. @article{Moresi2008, This study examined the effects of cue validity and cue difficulty on response preparation to provide a test of the Grouping Model [Adam, J.J., Hommel, B. and Umiltà, C., 2003. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognit. Psychol. 46(3), 302-358; Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II): automatic and effortful processes in response cuing. Vis. Cogn. 12(8), 1444-1473.]. We used the pupillary response to index the cognitive processing load during and after the preparatory interval (2 s). Twenty-two participants performed the finger-cuing tasks with valid (75%) and invalid (25%) cues. Results showed longer reaction times, more errors, and larger pupil dilations for invalid than valid cues. During the preparation interval, pupil dilation varied systematically with cue difficulty, with easy cues (specifying 2 fingers on 1 hand) showing less pupil dilation than difficult cues (specifying 2 fingers on 2 hands). After the preparation interval, this pattern of differential pupil dilation as a function of cue difficulty reversed for invalid cues, suggesting that cues which incorrectly specified fingers on one hand required more effortful reprogramming operations than cues which incorrectly specified fingers on two hands. These outcomes were consistent with predictions derived from the Grouping Model. Finally, all participants exhibited two distinct pupil dilation strategies: an "early" strategy in which the onset of the main pupil dilation was tied to the onset of the cue, and a "late" strategy in which the onset of the main pupil dilation was tied to the onset of the target. Thus, whereas the early pupil dilation strategy showed a strong dilation during the preparation interval, the late pupil strategy showed a strong constriction. Interestingly, only the late-onset pupil dilation strategy revealed the above-reported sensitivity to cue difficulty, showing for the first time that the well-known sensitivity of the pupil to task difficulty can also emerge when the pupil is constricting instead of dilating. |
Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven; Harm Kuipers; Jelle Jolles Pupil dilation in response preparation Journal Article In: International Journal of Psychophysiology, vol. 67, no. 2, pp. 124–130, 2008. @article{Moresi2008a, This study examined changes in pupil size during response preparation in a finger-cuing task. Based on the Grouping Model of finger preparation [Adam, J.J., Hommel, B. and Umiltà, C., 2003b. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognitive Psychology, 46(3), 302-358; Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II): automatic and effortful processes in response cuing. Visual Cognition, 12(8), 1444-1473.], it was hypothesized that the selection and preparation of more difficult response sets would be accompanied by larger pupillary dilations. The results supported this prediction, thereby extending the validity of pupil size as a measure of cognitive load to the domain of response preparation. |
Brad C. Motter; Diglio A. Simoni Changes in the functional visual field during search with and without eye movements Journal Article In: Vision Research, vol. 48, pp. 2382–2393, 2008. @article{Motter2008, The size of the functional visual field (FVF) is dynamic, changing with the context and attentive demand that each fixation brings as we move our eyes and head to explore the visual scene. Using performance measures of the FVF we show that during search conditions with eye movements, the FVF is small compared to the size of the FVF measured during search without eye movements. In all cases the size of the FVF is constrained by the density of distracting items. During search without eye movements the FVF expands with time; subjects have idiosyncratic spatial biases suggesting covert shifts of attention. For search within the constraints imposed by item density, the rate of item inspection is the same across all search conditions. Array set size effects are not apparent once stimulus density is taken into account, a result that is consistent with a spatial constraint for the FVF based on the cortical separation hypothesis. |
Manon Mulckhuyse; Wieske Zoest; Jan Theeuwes Capture of the eyes by relevant and irrelevant onsets Journal Article In: Experimental Brain Research, vol. 186, no. 2, pp. 225–235, 2008. @article{Mulckhuyse2008, During early visual processing the eyes can be captured by salient visual information in the environment. Whether a salient stimulus captures the eyes in a purely automatic, bottom-up fashion or whether capture is contingent on task demands is still under debate. In the first experiment, we manipulated the relevance of a salient onset distractor. The onset distractor could either be similar or dissimilar to the target. Error saccade latency distributions showed that early in time, oculomotor capture was driven purely bottom-up irrespective of distractor similarity. Later in time, top-down information became available resulting in contingent capture. In the second experiment, we manipulated the saliency information at the target location. A salient onset stimulus could be presented either at the target or at a non-target location. The latency distributions of error and correct saccades had a similar time-course as those observed in the first experiment. Initially, the distributions overlapped but later in time task-relevant information decelerated the oculomotor system. The present findings reveal the interaction between bottom-up and top-down processes in oculomotor behavior. We conclude that the task relevance of a salient event is not crucial for capture of the eyes to occur. Moreover, task-relevant information may integrate with saliency information to initiate saccades, but only later in time. |
Ikuya Murakami; Rumi Hisakata The effects of eccentricity and retinal illuminance on the illusory motion seen in a stationary luminance gradient Journal Article In: Vision Research, vol. 48, no. 19, pp. 1940–1948, 2008. @article{Murakami2008, Kitaoka recently reported a novel illusion named the Rotating Snakes [Kitaoka, A., & Ashida, H. (2003). Phenomenal characteristics of the peripheral drift illusion. Vision, 15, 261-262], in which a stationary pattern appears to rotate constantly. In the first experiment, we attempted to quantify the anecdote that this illusion is better perceived in the periphery. The stimulus was a ring composed of stepwise luminance patterns and was presented in the left visual field. With increasing eccentricity up to 10-14 deg, the cancellation velocity required to establish perceptual stationarity increased. In the next experiment, we examined the effect of retinal illuminance. Interestingly, the cancellation velocity decreased as retinal illuminance was decreased. We also estimated the human temporal impulse response at some retinal illuminances by using the double-pulse method to confirm that the shape of the impulse response actually changes from biphasic to monophasic, which indicates that the transient processing system has weaker activities at lower illuminances. We conclude that some transient temporal processing system is necessary for the illusion. |
Chie Nakatani; Cees Van Leeuwen A pragmatic approach to multi-modality and non-normality in fixation duration studies of cognitive processes Journal Article In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–12, 2008. @article{Nakatani2008, Interpreting eye-fixation durations in terms of cognitive processing load is complicated by the multimodality of their distribution. An important source of multimodality is the distinction between single and multiple fixations to the same object. Based on this distinction, we separated a log-transformed distribution of fixations made to an object in a non-reading task. We could reasonably conclude that the separated distributions belong to the same general logistic distribution, which has a finite population mean and variance. This allowed us to use the sample means as dependent variables in a parametric analysis. Six tasks were compared, which required different levels of post-perceptual processing. A no-task control condition was added to test for perceptual processing. Fixation durations differentiated task-specific perceptual, but not post-perceptual, processing demands. |
Benjamin W. Tatler; Benjamin T. Vincent Systematic tendencies in scene viewing Journal Article In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–18, 2008. @article{Tatler2008, While many current models of scene perception debate the relative roles of low- and high-level factors in eye guidance, systematic tendencies in how the eyes move may be informative. We consider how each saccade and fixation is influenced by that which preceded or followed it, during free inspection of images of natural scenes. We find evidence to suggest periods of localized scanning separated by 'global' relocations to new regions of the scene. We also find evidence to support the existence of small amplitude 'corrective' saccades in natural image viewing. Our data reveal statistical dependencies between successive eye movements, which may be informative in furthering our understanding of eye guidance. |
Benjamin W. Tatler; Nicholas J. Wade; Kathrin Kaulard Examining art: Dissociating pattern and perceptual influences on oculomotor behaviour Journal Article In: Spatial Vision, vol. 21, no. 1, pp. 165–184, 2008. @article{Tatler2008a, When observing art the viewer's understanding results from the interplay between the marks made on the surface by the artist and the viewer's perception and knowledge of it. Here we use a novel set of stimuli to dissociate the influences of the marks on the surface and the viewer's perceptual experience upon the manner in which the viewer inspects art. Our stimuli provide the opportunity to study situations in which (1) the same visual stimulus can give rise to two different perceptual experiences in the viewer, and (2) the visual stimuli differ but give rise to the same perceptual experience in the viewer. We find that oculomotor behaviour changes when the perceptual experience changes. Oculomotor behaviour also differs when the viewer's perceptual experience is the same but the visual stimulus is different. The methodology used and insights gained from this study offer a first step toward an experimental exploration of the relative influences of the artist's creation and viewer's perception when viewing art and also toward a better understanding of the principles of composition in portraiture. |
Masahiko Terao; Junji Watanabe; Akihiro Yagi; Shin'ya Nishida Reduction of stimulus visibility compresses apparent time intervals Journal Article In: Nature Neuroscience, vol. 11, no. 5, pp. 541–542, 2008. @article{Terao2008, The neural mechanisms underlying visual estimation of subsecond durations remain unknown, but perisaccadic underestimation of interflash intervals may provide a clue as to the nature of these mechanisms. Here we found that simply reducing the flash visibility, particularly the visibility of transient signals, induced similar time underestimation by human observers. Our results suggest that weak transient responses fail to trigger the proper detection of temporal asynchrony, leading to increased perception of simultaneity and apparent time compression. |
Aidan A. Thompson; Denise Y. P. Henriques Updating visual memory across eye movements for ocular and arm motor control Journal Article In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2507–2514, 2008. @article{Thompson2008, Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating. |
Manabu Shikauchi; Shin Ishii; Tomohiro Shibata Prediction of aperiodic target sequences by saccades Journal Article In: Behavioural Brain Research, vol. 189, no. 2, pp. 325–331, 2008. @article{Shikauchi2008, Through recording of saccadic eye movements, we investigated whether humans can achieve prediction of aperiodic target sequences which cannot be predicted based solely on memorizing short-length patterns of the target sequence. We proposed a novel experimental paradigm in which Auto-Regressive (AR) processes are used to generate aperiodic target sequences. If subjects can fully utilize knowledge of the AR dynamics that generated the target sequence, optimal prediction can be made. As a control task, a completely unpredictable (random) target sequence was generated by shuffling the AR sequences. Behavioral analysis suggested that prediction of the next target position in the AR sequence was significantly more successful than random guessing or the optimal guess for the random sequence. Although their performance was not optimal, learning of the AR dynamics was observed for first-order AR sequences, suggesting that the subjects attempted to predict the next target position based on partially identified AR dynamics. |
Mariano Sigman; Jérôme Sackur; Antoine Del Cul; Stanislas Dehaene Illusory displacement due to object substitution near the consciousness threshold Journal Article In: Journal of Vision, vol. 8, no. 1, pp. 1–10, 2008. @article{Sigman2008, A briefly presented target shape can be made invisible by the subsequent presentation of a mask that replaces the target. While varying the target-mask interval in order to investigate perception near the consciousness threshold, we discovered a novel visual illusion. At some intervals, the target is clearly visible, but its location is misperceived. By manipulating the mask's size and target's position, we demonstrate that the perceived target location is always displaced to the boundary of a virtual surface defined by the mask contours. Thus, mutual exclusion of surfaces appears as a cause of masking. |
Michael A. Silver; Amitai Shenhav; Mark D'Esposito Cholinergic enhancement reduces spatial spread of visual responses in human early visual cortex Journal Article In: Neuron, vol. 60, no. 5, pp. 904–914, 2008. @article{Silver2008, Animal studies have shown that acetylcholine decreases excitatory receptive field size and spread of excitation in early visual cortex. These effects are thought to be due to facilitation of thalamocortical synaptic transmission and/or suppression of intracortical connections. We have used functional magnetic resonance imaging (fMRI) to measure the spatial spread of responses to visual stimulation in human early visual cortex. The cholinesterase inhibitor donepezil was administered to normal healthy human subjects to increase synaptic levels of acetylcholine in the brain. Cholinergic enhancement with donepezil decreased the spatial spread of excitatory fMRI responses in visual cortex, consistent with a role of acetylcholine in reducing excitatory receptive field size of cortical neurons. Donepezil also reduced response amplitude in visual cortex, but the cholinergic effects on spatial spread were not a direct result of reduced amplitude. These findings demonstrate that acetylcholine regulates spatial integration in human visual cortex. |
Timo Stein; Ignacio Vallines; Werner X. Schneider Primary visual cortex reflects behavioral performance in the attentional blink Journal Article In: NeuroReport, vol. 19, no. 13, pp. 1277–1281, 2008. @article{Stein2008, When two masked targets are presented in a rapid sequence, attentional limitations are reflected in reduced identification accuracy for the second target (T2). We used functional magnetic resonance imaging to disentangle the distinct neural substrates of T2 processing during this attentional blink phenomenon. Spatially separating the two targets allows the retinotopic localization of the different stimuli's encoding sites in primary visual cortex (V1) and thus enables activation elicited by each target to be differentially measured in V1. The encoding location of the second target mirrored T2 identification accuracy in a retinotopically specific manner. These results are the first evidence for effects of behavioral performance on hemodynamic responses in V1 under conditions of the attentional blink. |
Joshua M. Susskind; Daniel H. Lee; Andrée Cusi; Roman Feiman; Wojtek Grabski; Adam K. Anderson Expressing fear enhances sensory acquisition Journal Article In: Nature Neuroscience, vol. 11, no. 7, pp. 843–850, 2008. @article{Susskind2008, It has been proposed that facial expression production originates in sensory regulation. Here we demonstrate that facial expressions of fear are configured to enhance sensory acquisition. A statistical model of expression appearance revealed that fear and disgust expressions have opposite shape and surface reflectance features. We hypothesized that this reflects a fundamental antagonism serving to augment versus diminish sensory exposure. In keeping with this hypothesis, when subjects posed expressions of fear, they had a subjectively larger visual field, faster eye movements during target localization and an increase in nasal volume and air velocity during inspiration. The opposite pattern was found for disgust. Fear may therefore work to enhance perception, whereas disgust dampens it. These convergent results provide support for the Darwinian hypothesis that facial expressions are not arbitrary configurations for social communication, but rather, expressions may have originated in altering the sensory interface with the physical world. |
Kohske Takahashi; Katsumi Watanabe Persisting effect of prior experience of change blindness Journal Article In: Perception, vol. 37, no. 2, pp. 324–327, 2008. @article{Takahashi2008, Most cognitive scientists know that an airplane tends to lose its engine when the display is flickering. How does such prior experience influence visual search? We recorded eye movements made by vision researchers while they were actively performing a change-detection task. In selected trials, we presented Rensink's familiar 'airplane' display, but with changes occurring at locations other than the jet engine. The observers immediately noticed that there was no change in the location where the engine had changed in the previous change-blindness demonstration. Nevertheless, eye-movement analyses indicated that the observers were compelled to look at the location of the unchanged engine. These results demonstrate the powerful effect of prior experience on eye movements, even when the observers are aware of the futility of doing so. |
Anne Roefs; Anita Jansen; Sofie Moresi; Paul Willems; Sarah Grootel; Anouk Borgh Looking good: BMI, attractiveness bias and visual attention. Journal Article In: Appetite, vol. 51, pp. 552–555, 2008. @article{Roefs2008, The aim of this study was to examine attentional bias when viewing one's own and a control body, and to relate this bias to body weight and attractiveness ratings. Participants were 51 normal-weight female students with an unrestrained eating style. They were successively shown pictures of their own and a control body for 30 s each, while their eye movements (overt attention) were being measured. Afterwards, participants were asked to identify the most attractive and most unattractive body part of both their own and a control body. The results show that with increasing BMI and where an individual has given a relatively low rating of attractiveness to their own body, participants attended relatively more to their self-identified most unattractive body part and the control body's most attractive body part. This increasingly negative bias in visual attention for bodies may maintain and/or exacerbate body dissatisfaction. |
Ardi Roelofs Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 6, pp. 1580–1598, 2008. @article{Roelofs2008, Controversy exists about whether dual-task interference from word planning reflects structural bottleneck or attentional control factors. Here, participants named pictures whose names could or could not be phonologically prepared, and they manually responded to arrows presented away from (Experiment 1), or superimposed onto, the pictures (Experiments 2 and 3); or they responded to tones (Experiment 4). Pictures and arrows/tones were presented at stimulus onset asynchronies of 0, 300, and 1,000 ms. Earlier research showed that vocal responding hampers auditory perception, which predicts earlier shifts of attention to the tones than to the arrows. Word planning yielded dual-task interference. Phonological preparation reduced the latencies of picture naming and gaze shifting. The preparation benefit was propagated into the latencies of the manual responses to the arrows but not to the tones. The malleability of the interference supports the attentional control account. This conclusion was corroborated by computer simulations showing that an extension of WEAVER++ (A. Roelofs, 2003) with assumptions about the attentional control of tasks quantitatively accounts for the latencies of vocal responding, gaze shifting, and manual responding. |
J. F. Soechting; Martha Flanders Extrapolation of visual motion for manual interception Journal Article In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2956–2967, 2008. @article{Soechting2008, A frequent goal of hand movement is to touch a moving target or to make contact with a stationary object that is in motion relative to the moving head and body. This process requires a prediction of the target's motion, since the initial direction of the hand movement anticipates target motion. This experiment was designed to define the visual motion parameters that are incorporated in this prediction of target motion. On seeing a go signal (a change in target color), human subjects slid the right index finger along a touch-sensitive computer monitor to intercept a target moving along an unseen circular or oval path. The analysis focused on the initial direction of the interception movement, which was found to be influenced by the time required to intercept the target and the target's distance from the finger's starting location. Initial direction also depended on the curvature of the target's trajectory in a manner that suggested that this parameter was underestimated during the process of extrapolation. The pattern of smooth pursuit eye movements suggests that the extrapolation of visual target motion was based on local motion cues around the time of the onset of hand movement, rather than on a cognitive synthesis of the target's pattern of motion. |
Gianluca U. Sorrento; Denise Y. P. Henriques Reference frame conversions for repeated arm movements Journal Article In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2968–2984, 2008. @article{Sorrento2008, The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame. |
Rike Steenken; Hans Colonius; Adele Diederich; Stefan Rach Visual-auditory interaction in saccadic reaction time: Effects of auditory masker level Journal Article In: Brain Research, vol. 1220, pp. 150–156, 2008. @article{Steenken2008, Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Observed SRT reductions typically range between 10 and 50 ms and decrease as spatial disparity between the stimuli increases. Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory accessory. Here we probe this hypothesis by presenting an additional white-noise masker background of 3 s duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident vs. disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. As verified in a separate auditory localization task, localizability of the auditory accessory decreases with masker level. The SRT results are accounted for by a conceptual model positing that increasing masker level enlarges the area of possible auditory stimulus locations: it implies that perceivable distances decrease for disparate stimulus configurations and increase for coincident stimulus pairs. |
Rike Steenken; Adele Diederich; Hans Colonius Time course of auditory masker effects: Tapping the locus of audiovisual integration? Journal Article In: Neuroscience Letters, vol. 435, no. 1, pp. 78–83, 2008. @article{Steenken2008a, In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object. |
Jochen Laubrock; Ralf Engbert; Reinhold Kliegl Fixational eye movements predict the perceived direction of ambiguous apparent motion Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008. @article{Laubrock2008, Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information. |
Vaia Lestou; Frank E. Pollick; Zoe Kourtzi Neural substrates for action understanding at different description levels in the human brain Journal Article In: Journal of Cognitive Neuroscience, vol. 20, no. 2, pp. 324–341, 2008. @article{Lestou2008, Understanding complex movements and abstract action goals is an important skill for our social interactions. Successful social interactions entail understanding of actions at different levels of action description, ranging from detailed movement trajectories that support learning of complex motor skills through imitation to distinct features of actions that allow us to discriminate between action goals and different action styles. Previous studies have implicated premotor, parietal, and superior temporal areas in action understanding. However, the role of these different cortical areas in action understanding at different levels of action description remains largely unknown. We addressed this question using advanced animation and stimulus generation techniques in combination with sensitive functional magnetic resonance imaging adaptation or repetition suppression methods. We tested the neural sensitivity of fronto-parietal and visual areas to differences in the kinematics and goals of actions using kinematic morphs of arm movements. Our findings provide novel evidence for differential involvement of ventral premotor, parietal, and temporal regions in action understanding. We show that the ventral premotor cortex encodes the physical similarity between movement trajectories and action goals that are important for exact copying of actions and the acquisition of complex motor skills. In contrast, parietal regions and the superior temporal sulcus process the perceptual similarity between movements and may support the perception and imitation of abstract action goals and movement styles.
Thus, our findings suggest that fronto-parietal and visual areas involved in action understanding mediate a cascade of visual-motor processes at different levels of action description, from exact movement copies to abstract action goals achieved with different movement styles. |
Annette Kinder; Martin Rolfs; Reinhold Kliegl Sequence learning at optimal stimulus–response mapping: Evidence from a serial reaction time task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 2, pp. 203–209, 2008. @article{Kinder2008, We propose a new version of the serial reaction time (SRT) task in which participants merely looked at the target instead of responding manually. As response locations were identical to target locations, stimulus–response compatibility was maximal in this task. We demonstrated that saccadic response times decreased during training and increased again when a new sequence was presented. It is unlikely that this effect was caused by stimulus–response (S–R) learning because bonds between (visual) stimuli and (oculomotor) responses were already well established before the experiment started. Thus, the finding shows that the building of S–R bonds is not essential for learning in the SRT task. |
P. Christiaan Klink; Raymond Van Ee; M. M. Nijs; G. J. Brouwer; A. J. Noest; Richard J. A. Wezel Early interactions between neuronal adaptation and voluntary control determine perceptual choices in bistable vision Journal Article In: Journal of Vision, vol. 8, no. 5, pp. 1–18, 2008. @article{Klink2008, At the onset of bistable stimuli, the brain needs to choose which of the competing perceptual interpretations will first reach awareness. Stimulus manipulations and cognitive control both influence this choice process, but the underlying mechanisms and interactions remain poorly understood. Using intermittent presentation of bistable visual stimuli, we demonstrate that short interruptions cause perceptual reversals upon the next presentation, whereas longer interstimulus intervals stabilize the percept. Top-down voluntary control biases this process but does not override the timing dependencies. Extending a recently introduced low-level neural model, we demonstrate that percept-choice dynamics in bistable vision can be fully understood with interactions in early neural processing stages. Our model includes adaptive neural processing preceding a rivalry resolution stage with cross-inhibition, adaptation, and an interaction of the adaptation levels with a neural baseline. Most importantly, our findings suggest that top-down attentional control over bistable stimuli interacts with low-level mechanisms at early levels of sensory processing before perceptual conflicts are resolved and perceptual choices about bistable stimuli are made. |
John D. Koehn; Elizabeth Roy; Jason J. S. Barton The "diagonal effect": A systematic error in oblique antisaccades Journal Article In: Journal of Neurophysiology, vol. 100, no. 2, pp. 587–597, 2008. @article{Koehn2008, Antisaccades are known to show greater variable error and also a systematic hypometria in their amplitude compared with visually guided prosaccades. In this study, we examined whether their accuracy in direction (as opposed to amplitude) also showed a systematic error. We had human subjects perform prosaccades and antisaccades to goals located at a variety of polar angles. In the first experiment, subjects made prosaccades or antisaccades to one of eight equidistant locations in each block, whereas in the second, they made saccades to one of two equidistant locations per block. In the third, they made antisaccades to one of two locations at different distances but with the same polar angle in each block. Regardless of block design, the results consistently showed a saccadic systematic error, in that oblique antisaccades (but not prosaccades) requiring unequal vertical and horizontal vector components were deviated toward the 45 degrees diagonal meridians. This finding could not be attributed to range effects in either Cartesian or polar coordinates. A perceptual origin of the diagonal effect is suggested by similar systematic errors in other studies of memory-guided manual reaching or perceptual estimation of direction, and may indicate a common spatial bias when there is uncertain information about spatial location. |
Christof Körner; Iain D. Gilchrist Memory processes in multiple-target visual search Journal Article In: Psychological Research, vol. 72, no. 1, pp. 99–105, 2008. @article{Koerner2008, Gibson, Li, Skow, Brown, and Cooke (Psychological Science, 11, 324–327, 2000) had participants carry out a search task in which they were required to detect the presence of one or two targets. In order to successfully perform such a multiple-target visual search task, participants had to remember the location of the first target while searching for the second target. In two experiments we investigated the cost of remembering this target location. In Experiment 1, we compared performance on the Gibson et al. task with performance on a more conventional present–absent search task. The comparison suggests a substantial performance cost as measured by reaction time, number of fixations and slope of the search functions. In Experiment 2, we looked in detail at refixations of distractors, which are a direct measure of attentional deployment. We demonstrated that the cost in this multiple-target visual search task was due to an increased number of refixations on previously visited distractors. Such refixations were present right from the start of the search. This change in search behaviour may be caused by the necessity of having to remember a target: allocating memory for the upcoming target may consume memory capacity that may otherwise be available for the tagging of distractors. These results support the notion of limited capacity memory processes in search. |
Chantal Kemner; Lizet Ewijk; Herman Engeland; Ignace T. C. Hooge Brief report: Eye movements during visual search tasks indicate enhanced stimulus discriminability in subjects with PDD Journal Article In: Journal of Autism and Developmental Disorders, vol. 38, no. 3, pp. 553–558, 2008. @article{Kemner2008, Subjects with PDD excel on certain visuo-spatial tasks, among them visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses by measuring eye movements during visual search tasks in high-functioning adult men with PDD and a control group. Subjects with PDD were significantly faster than controls in these tasks, replicating earlier findings in children. Eye movement data showed that subjects with PDD made fewer eye movements than controls. No evidence was found for a different search strategy between the groups. The data indicate an enhanced ability to discriminate between stimulus elements in PDD. |
Dirk Kerzel; David Souto; Nathalie E. Ziegler Effects of attention shifts to stationary objects during steady-state smooth pursuit eye movements Journal Article In: Vision Research, vol. 48, no. 7, pp. 958–969, 2008. @article{Kerzel2008a, A number of studies have shown that stationary backgrounds compromise smooth pursuit eye movements. It has been suggested that poor attentional selection of the pursuit target was responsible for reductions of pursuit gain. To quantify the detrimental effects of attention, we instructed observers to either pay attention to background objects or to ignore them. The to-be-attended object was indicated by peripheral or central cues. Strong reductions of pursuit gain occurred when the following conditions were met: (a) the subject paid attention to the object, (b) a salient event was present (for instance, the onset of the target or cue), and (c) the attended target produced retinal motion. Removing any of the three conditions resulted in no or far smaller decreases of pursuit gain. Further, decreases in pursuit gain were present with perceptual discrimination and simple manual detection. |
Lee Hogarth; Anthony Dickinson; Alison Austin; Craig Brown; Theodora Duka Attention and expectation in human predictive learning: The role of uncertainty Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1658–1668, 2008. @article{Hogarth2008, Three localized, visual pattern stimuli were trained as predictive signals of auditory outcomes. One signal partially predicted an aversive noise in Experiment 1 and a neutral tone in Experiment 2, whereas the other signals consistently predicted either the occurrence or absence of the noise. The expectation of the noise was measured during each signal presentation, and only participants for whom this expectation demonstrated contingency knowledge showed differential attention to the signals. Importantly, when attention was measured by visual fixations, the contingency-aware group attended more to the partially predictive signal than to the consistent predictors in both experiments. This profile of visual attention supports the Pearce and Hall (1980) theory of the role of attention in associative learning. |
Lee Hogarth; Anthony Dickinson; Molly Janowski; Aleksandra Nikitina; Theodora Duka The role of attentional bias in mediating human drug-seeking behaviour Journal Article In: Psychopharmacology, vol. 201, no. 1, pp. 29–41, 2008. @article{Hogarth2008a, RATIONALE: The attentional bias for drug cues is believed to be a causal cognitive process mediating human drug seeking and relapse. OBJECTIVES, METHODS AND RESULTS: To test this claim, we trained smokers on a tobacco conditioning procedure in which the conditioned stimulus (or S+) acquired parallel control of an attentional bias (measured with an eye tracker), tobacco expectancy and instrumental tobacco-seeking behaviour. Although this correlation between measures may be regarded as consistent with the claim that the attentional bias for the S+ mediated tobacco seeking, when a secondary task was added in the test phase, the attentional bias for the S+ was abolished, yet the control of tobacco expectancy and tobacco seeking remained intact. CONCLUSIONS: This dissociation suggests that the attentional bias for drug cues is not necessary for the control that drug cues exert over drug-seeking behaviour. The question raised by these data is what function does the attentional bias serve if it does not mediate drug seeking? |
Linus Holm; Johan Eriksson; Linus Andersson Looking as if you know: Systematic object inspection precedes object recognition Journal Article In: Journal of Vision, vol. 8, no. 4, pp. 1–7, 2008. @article{Holm2008, Sometimes we seem to look at the very object we are searching for, without consciously seeing it. How do we select object relevant information before we become aware of the object? We addressed this question in two recognition experiments involving pictures of fragmented objects. In Experiment 1, participants preferred to look at the target object rather than a control region 25 fixations prior to explicit recognition. Furthermore, participants inspected the target as if they had identified it around 9 fixations prior to explicit recognition. In Experiment 2, we investigated the influence of semantic knowledge in guiding object inspection prior to explicit recognition. Consistently, more specific knowledge about target identity made participants scan the fragmented stimulus more efficiently. For instance, non-target regions were rejected faster when participants knew the target object's name. Both experiments showed that participants were looking at the objects as if they knew them before they became aware of their identity. |
Janet Hui-wen Hsiao; Garrison W. Cottrell Two fixations suffice in face recognition Journal Article In: Psychological Science, vol. 19, no. 10, pp. 998–1006, 2008. @article{Hsiao2008, It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition. |
Wendy E. Huddleston; Edgar A. DeYoe The representation of spatial attention in human parietal cortex dynamically modulates with performance Journal Article In: Cerebral Cortex, vol. 18, no. 6, pp. 1272–1280, 2008. @article{Huddleston2008, The control and allocation of attention is an essential, ubiquitous neural process that gates our awareness of objects and events in the environment. Neural representations of the locus of spatial attention have been previously demonstrated in parietal cortex. However, the behavioral relevance of these neural representations is not known. While undergoing functional magnetic resonance imaging, subjects performed a covert spatial attention task that yielded a wide range of performance values. Voxels in parietal cortex selective for attended target location also dynamically modulated, becoming more or less responsive as performance levels changed. Surprisingly, this relationship was not linear. Responses peaked at intermediate performance levels and dropped both when performance was very high and when it was very low. Such dynamic modulation may represent a mechanism for organizing neural control signals according to behavioral task demands. |
Amelia R. Hunt; Craig S. Chapman; Alan Kingstone Taking a long look at action and time perception Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 1, pp. 125–136, 2008. @article{Hunt2008, Everyone has probably experienced chronostasis, an illusion of time that can cause a clock's second hand to appear to stand still during an eye movement. Though the illusion was initially thought to reflect a mechanism for preserving perceptual continuity during eye movements, an alternative hypothesis has been advanced that overestimation of time might be a general effect of any action. Contrary to both of these hypotheses, the experiments reported here suggest that distortions of time perception related to an eye movement are not distinct from temporal distortions for other kinds of responses. Moreover, voluntary action is neither necessary nor sufficient for overestimation effects. These results lead to a new interpretation of chronostasis based on the role of attention and memory in time estimation. |
Helene Intraub; Christopher A. Dickinson False memory 1/20th of a second later: What the early onset of boundary extension reveals about perception Journal Article In: Psychological Science, vol. 19, no. 10, pp. 1007–1014, 2008. @article{Intraub2008, Errors of commission are thought to be caused by heavy memory loads, confusing information, lengthy retention intervals, or some combination of these factors. We report false memory beyond the boundaries of a view, boundary extension, after less than 1/20th of a second. Photographs of scenes were interrupted by a 42-ms or 250-ms mask, 250 ms into viewing, before reappearing or being replaced with a different view (Experiment 1). Postinterruption photographs that were unchanged were rated as closer up than the original views; when the photographs were changed, the same pair of closer-up and wider-angle views was rated as more similar when the closer view was first, rather than second. Thus, observers remembered preinterruption views with extended boundaries. Results were replicated when the interruption included a saccade (Experiment 2). The brevity of these interruptions has implications for visual scanning; it also challenges the traditional distinction between perception and memory. We offer an alternative conceptualization that shows how source monitoring can explain false memory after an interruption briefer than an eyeblink. |
Stuart Jackson; Fred Cummins; Nuala Brady Rapid perceptual switching of a reversible biological figure Journal Article In: PLoS ONE, vol. 3, no. 12, pp. e3982, 2008. @article{Jackson2008, Certain visual stimuli can give rise to contradictory perceptions. In this paper we examine the temporal dynamics of perceptual reversals experienced with biological motion, comparing these dynamics to those observed with other ambiguous structure from motion (SFM) stimuli. In our first experiment, naïve observers monitored perceptual alternations with an ambiguous rotating walker, a figure that randomly alternates between walking in clockwise (CW) and counter-clockwise (CCW) directions. While the number of reported reversals varied between observers, the observed dynamics (distribution of dominance durations, CW/CCW proportions) were comparable to those experienced with an ambiguous kinetic depth cylinder. In a second experiment, we compared reversal profiles with rotating and standard point-light walkers (i.e. non-rotating). Over multiple test repetitions, three out of four observers experienced consistently shorter mean percept durations with the rotating walker, suggesting that the added rotational component may speed up reversal rates with biomotion. For both stimuli, the drift in alternation rate across trial and across repetition was minimal. In our final experiment, we investigated whether reversals with the rotating walker and a non-biological object with similar global dimensions (rotating cuboid) occur at random phases of the rotation cycle. We found evidence that some observers experience peaks in the distribution of response locations that are relatively stable across sessions. Using control data, we discuss the role of eye movements in the development of these reversal patterns, and the related role of exogenous stimulus characteristics. 
In summary, we have demonstrated that the temporal dynamics of reversal with biological motion are similar to other forms of ambiguous SFM. We conclude that perceptual switching with biological motion is a robust bistable phenomenon. |
C. -H. Juan; Neil G. Muggleton; Ovid J. L. Tzeng; D. L. Hung; A. Cowey; Vincent Walsh Segregation of visual selection and saccades in human frontal eye fields Journal Article In: Cerebral Cortex, vol. 18, no. 10, pp. 2410–2415, 2008. @article{Juan2008, The premotor theory of attention suggests that target processing and generation of a saccade to the target are interdependent. Temporally precise transcranial magnetic stimulation (TMS) was delivered over the human frontal eye fields, the area most frequently associated with the premotor theory in association with eye movements, while subjects performed a visually instructed pro-/antisaccade task. Visual analysis and saccade preparation were clearly separated in time, as indicated by 2 distinct time points of TMS delivery that resulted in elevated saccade latencies. These results show that visual analysis and saccade preparation, although frequently enacted together, are dissociable processes. |
Roger Kalla; Neil G. Muggleton; Chi-Hung Juan; Alan Cowey; Vincent Walsh The timing of the involvement of the frontal eye fields and posterior parietal cortex in visual search Journal Article In: NeuroReport, vol. 19, no. 10, pp. 1069–1073, 2008. @article{Kalla2008, The frontal eye fields (FEFs) and posterior parietal cortex (PPC) are important for target detection in conjunction visual search but the relative timings of their contribution have not been compared directly. We addressed this using temporally specific double pulse transcranial magnetic stimulation delivered at different times over FEFs and PPC during performance of a visual search task. Disruption of performance was earlier (0/40 ms) with FEF stimulation than with PPC stimulation (120/160 ms), revealing a clear and substantial temporal dissociation of the involvement of these two areas in conjunction visual search. We discuss these timings with reference to the respective roles of FEF and PPC in the modulation of extrastriate visual areas and selection of responses. |
Tyler W. Garaas; Marc Pomplun Inspection time and visual-perceptual processing Journal Article In: Vision Research, vol. 48, no. 4, pp. 523–537, 2008. @article{Garaas2008, Inspection time (IT) is the most popular simple psychometric measure that is used to account for a large part of the variance in human mental ability, with the estimated corrected correlation between IT and IQ being -0.50. In this study, we investigate the relationship between IT and the performance and oculomotor variables measured during three simple visual tasks. Participants' ITs were first measured using a slight variation of the standard IT task, which was followed by the three simple visual tasks that were designed to test participants' visual-attentional control and visual working memory under varying degrees of difficulty; they included a visual search task, a comparative visual search task, and a visual memorization task. Significant correlations were found between IT and performance variables for each of the visual tasks. The implications of the correlation between IT and performance-related variables are discussed. Oculomotor variables, on the other hand, only correlated significantly with IT during the retrieval phase of the visual memorization task, which is likely a product of differences in participants' ability to memorize objects during the loading phase of the experiment. This leads us to the conclusion that the oculomotor variables we measured do not correlate with IT in general, but may do so in cases where a systematic benefit would be realized. |
Katharina Georg; Fred H. Hamker; Markus Lappe Influence of adaptation state and stimulus luminance on peri-saccadic localization Journal Article In: Journal of Vision, vol. 8, no. 1, pp. 1–11, 2008. @article{Georg2008, Spatial localization of flashed stimuli across saccades shows transient distortions of perceived position: Stimuli appear shifted in saccade direction and compressed towards the saccade target. The strength and spatial pattern of this mislocalization is influenced by contrast, duration, and spatial and temporal arrangement of stimuli and background. Because mislocalization of stimuli on a background depends on contrast, we asked whether mislocalization of stimuli in darkness depends on luminance. Since dark adaptation changes luminance thresholds, we compared mislocalization in dark-adapted and light-adapted states. Peri-saccadic mislocalization was measured with near-threshold stimuli and above-threshold stimuli in dark-adapted and light-adapted subjects. In both adaptation states, near-threshold stimuli gave much larger mislocalization than above-threshold stimuli. Furthermore, when the stimulus was presented near-threshold, the perceived positions of the stimuli clustered closer together. Stimulus luminance that produced strong mislocalization in the light-adapted state produced very little mislocalization in the dark-adapted state because it was now well above threshold. We conclude that the strength of peri-saccadic mislocalization depends on the strength of the stimulus: stimuli with near-threshold luminance, and hence low visibility, are more mislocalized than clearly visible stimuli with high luminance. |
Thomas Geyer; Hermann J. Müller; Joseph Krummenacher Expectancies modulate attentional capture by salient color singletons Journal Article In: Vision Research, vol. 48, no. 11, pp. 1315–1326, 2008. @article{Geyer2008, In singleton feature search for a form-defined target, the presentation of a task-irrelevant, but salient singleton color distractor is known to interfere with target detection [Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50, 184-193; Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51, 599-606]. The present study was designed to re-examine this effect, by presenting observers with a singleton form target (on each trial) that could be accompanied by a salient singleton color distractor, with the proportion of distractor to no-distractor trials systematically varying across blocks of trials. In addition to RTs, eye movements were recorded in order to examine the mechanisms underlying the distractor interference effect. The results showed that singleton distractors interfered with target detection only when they were presented on a relatively small (but not on a large) proportion of trials. Overall, the findings suggest that cross-dimensional interference is a covert attention effect, arising from the competition of the target with the distractor for attentional selection [Kumada, T., & Humphreys, G. W. (2002). Cross-dimensional interference and cross-trial inhibition. Perception & Psychophysics, 64, 493-503], with the strength of the competition being modulated by observers' (top-down) incentive to suppress the distractor dimension. |