EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2009
Peter J. Kohler; Gideon P. Caplovitz; Peter Ulric Tse The whole moves less than the spin of its parts Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 4, pp. 675–679, 2009. When individually moving elements in the visual scene are perceptually grouped together into a coherently moving object, they can appear to slow down. In the present article, we show that the perceived speed of a particular global-motion percept is not dictated completely by the speed of the local moving elements. We investigated a stimulus that leads to bistable percepts, in which local and global motion may be perceived in an alternating fashion. Four rotating dot pairs, when arranged into a square-like configuration, may be perceived either locally, as independently rotating dot pairs, or globally, as two large squares translating along overlapping circular trajectories. Using a modified version of this stimulus, we found that the perceptually grouped squares appeared to move more slowly than the locally perceived rotating dot pairs, suggesting that perceived motion magnitude is computed following a global analysis of form. Supplemental demos related to this article can be downloaded from app.psychonomic-journals.org/content/supplemental.
Lynn Huestegge; Iring Koch Dual-task crosstalk between saccades and manual responses Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 352–362, 2009. Between-task crosstalk has been discussed as an important source for dual-task costs. In this study, the authors examine concurrently performed saccades and manual responses as a means of studying the role of response-code conflict between 2 tasks. In Experiment 1, participants responded to an imperative auditory stimulus with a left or a right key press (manual task), a left or a right saccade (saccade task), or both. In Experiments 2 and 3, participants crossed their hands, and a modest (Experiment 2) or substantial (Experiment 3) degree of between-task response-code conflict through specific instructions was introduced. In Experiment 4, response codes across tasks were compatible, and stimulus-response mappings in both tasks were incompatible. Overall, the results indicate that performance not only in manual responses but also in saccades suffers from dual-task conditions, even though saccades were typically performed first and are usually assumed to be controlled quite independently. Moreover, the systematic introduction of response-code conflict between tasks modulated the pattern of dual-task performance. The authors propose confusability of response codes as an underlying mechanism of the observed effects of between-task crosstalk.
Lynn Huestegge; Ralph Radach; Hanns Jürgen Kunert Long-term effects of cannabis on oculomotor function in humans Journal Article In: Journal of Psychopharmacology, vol. 23, no. 6, pp. 714–722, 2009. Cannabis is known to affect human cognitive and visuomotor skills directly after consumption. Some studies even point to rather long-lasting effects, especially after chronic tetrahydrocannabinol (THC) abuse. However, it is still unknown whether long-term effects on basic visual and oculomotor processing may exist. In the present study, the performance of 20 healthy long-term cannabis users without acute THC intoxication and 20 control subjects was examined in four basic visuomotor paradigms to search for specific long-term impairments. Subjects were asked to perform: 1) reflexive saccades to visual targets (prosaccades), including gap and overlap conditions, 2) voluntary antisaccades, 3) memory-guided saccades and 4) double-step saccades. Spatial and temporal parameters of the saccades were subsequently analysed. THC subjects exhibited a significant increase of latency in the prosaccade and antisaccade tasks, as well as prolonged saccade amplitudes in the antisaccade and memory-guided task, compared with the control subjects. The results point to substantial and specific long-term deficits in basic temporal processing of saccades and impaired visuo-spatial working memory. We suggest that these impairments are a major contributor to degraded performance of chronic users in a vital everyday task like visual search, and they might potentially also affect spatial navigation and reading.
Amelia R. Hunt; P. Cavanagh Looking ahead: The perceived direction of gaze shifts before the eyes move Journal Article In: Journal of Vision, vol. 9, no. 9, pp. 1–7, 2009. How do we know where we are looking? Our direction of gaze is commonly thought to be assigned to the location in the world that falls on our fovea, but this may not always hold, especially, as we report here, just before an eye movement. Observers shifted their gaze to a clock with a fast-moving hand and reported the time perceived to be on the clock when their eyes first landed. The reported time was 39 ms earlier than the actual time the eyes arrived. In a control condition, the clock moved to the eyes, mimicking the retinal motion but without the eye movement. Here the reported time lagged 27 ms behind the actual time on the clock when it arrived. The timing of perceived fixation in our experiment is similar to that for the predictive activation observed in visual cortex neurons at the time of eye movements.
Alex D. Hwang; Emily C. Higgins; Marc Pomplun A model of top-down attentional control during visual search in complex scenes Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 1–18, 2009. Recently, there has been great interest among vision researchers in developing computational models that predict the distribution of saccadic endpoints in naturalistic scenes. In many of these studies, subjects are instructed to view scenes without any particular task in mind so that stimulus-driven (bottom-up) processes guide visual attention. However, whenever there is a search task, goal-driven (top-down) processes tend to dominate guidance, as indicated by attention being systematically biased toward image features that resemble those of the search target. In the present study, we devise a top-down model of visual attention during search in complex scenes based on similarity between the target and regions of the search scene. Similarity is defined for several feature dimensions such as orientation or spatial frequency using a histogram-matching technique. The amount of attentional guidance across visual feature dimensions is predicted by a previously introduced informativeness measure. We use eye-movement data gathered from participants' search of a set of naturalistic scenes to evaluate the model. The model is found to predict the distribution of saccadic endpoints in search displays nearly as accurately as do other observers' eye-movement data in the same displays.
Lucica Iordanescu; Marcia Grabowecky; Satoru Suzuki Demand-based dynamic distribution of attention and monitoring of velocities during multiple-object tracking Journal Article In: Journal of Vision, vol. 9, no. 4, pp. 1–12, 2009. The ability to track multiple moving objects with attention has been the focus of much research. However, the literature is relatively inconclusive regarding two key aspects of this ability, (1) whether the distribution of attention among the tracked targets is fixed during a period of tracking or is dynamically adjusted, and (2) whether motion information (direction and/or speed) is used to anticipate target locations even when velocities constantly change due to inter-object collisions. These questions were addressed by analyzing target-localization errors. Targets in crowded situations (i.e., those in danger of being lost) were localized more precisely than were uncrowded targets. Furthermore, the response vector (pointing from the target location to the reported location) was tuned to the direction of target motion, and observers with stronger direction tuning localized targets more precisely. Overall, our results provide evidence that multiple-object tracking mechanisms dynamically adjust the spatial distribution of attention in a demand-based manner (allocating more resources to targets in crowded situations) and utilize motion information (especially direction information) to anticipate target locations.
Rachael E. Jack; Caroline Blais; Christoph Scheepers; Philippe G. Schyns; Roberto Caldara Cultural confusions show that facial expressions are not universal Journal Article In: Current Biology, vol. 19, no. 18, pp. 1543–1548, 2009. Central to all human interaction is the mutual understanding of emotions, achieved primarily by a set of biologically rooted social signals evolved for this purpose-facial expressions of emotion. Although facial expressions are widely considered to be the universal language of emotion [1-3], some negative facial expressions consistently elicit lower recognition levels among Eastern compared to Western groups (see [4] for a meta-analysis and [5, 6] for review). Here, focusing on the decoding of facial expression signals, we merge behavioral and computational analyses with novel spatiotemporal analyses of eye movements, showing that Eastern observers use a culture-specific decoding strategy that is inadequate to reliably distinguish universal facial expressions of "fear" and "disgust." Rather than distributing their fixations evenly across the face as Westerners do, Eastern observers persistently fixate the eye region. Using a model information sampler, we demonstrate that by persistently fixating the eyes, Eastern observers sample ambiguous information, thus causing significant confusion. Our results question the universality of human facial expressions of emotion, highlighting their true complexity, with critical consequences for cross-cultural communication and globalization.
Michal Jacob; Shaul Hochstein Comparing eye movements to detected vs. undetected target stimuli in an Identity Search task Journal Article In: Journal of Vision, vol. 9, no. 5, pp. 1–16, 2009. Why do we perceive some elements in a visual scene, while others remain undetected? To learn about the sequence of events leading to detection, we directly compared fixations on detected vs. undetected items. Our novel Identity Search task display comprised twelve cards, all different except for two pairs of identical cards. Participants search for one pair. Task properties allow us to monitor fixations on distinct card regions and study search dynamics. We find that detected pair cards were fixated more often and for longer times than undetected pair cards. Within the search sequence, there are fewer intervening fixations between detected than undetected pair cards. Only at an advanced stage of the search do fixations on pair cards become closer. We suggest that both the absolute number of fixations and their temporal proximity influence detection. In the dynamics of search, a bifurcation point is observed, when these differential characteristics begin. Analysis of the break point in the sequence of fixations on to-be-detected cards suggests that there is an early–perhaps unconscious–recognition stage, followed by more fixations and only later by detection. We suggest that several target fixations are needed for processing visual information to achieve recognition.
Lina Jansen; Selim Onat; Peter König Influence of disparity on fixation and saccades in free viewing of natural scenes Journal Article In: Journal of Vision, vol. 9, no. 1, pp. 1–19, 2009. Humans select relevant locations in a scene by means of stimulus-driven bottom-up and context-dependent top-down mechanisms. These have mainly been investigated by recording eye movements under 2D natural or 3D artificial stimulation conditions. Here we try to close that obvious gap and presented 2D and 3D versions of natural, pink, and white noise images to human subjects. Importantly, ground truth distance was obtained for all image pairs by laser scanning. Recording eye movements, we investigated the influence of disparity information and of higher order scene correlations on basic saccade properties and on saliency of bottom-up information. Our results show that the removal of higher order correlations changed saccadic rate, length and main sequence, and the subjects' explorative behavior. Introduction of disparity information countered these effects and alleviated differences between image categories. Disparity information had no effect on the saliency of monocular image features like luminance and texture contrast; however, without higher order correlations these features were uncorrelated to fixation locations. An analysis of binocular image features revealed that participants fixated closer locations earlier than more distant locations. Importantly, this also held for 2D natural images. Taken together, we conclude that depth information changes basic eye movement properties and provides a salient image feature.
Georgiana Juravle; Heiner Deubel Action preparation enhances the processing of tactile targets Journal Article In: Experimental Brain Research, vol. 198, no. 2-3, pp. 301–311, 2009. We present two experiments in which we investigated whether tactile attention is modulated by action preparation. In Experiment 1, participants prepared a saccade toward either the left or right index finger, depending on the pitch of a non-predictive auditory cue. In Experiment 2, participants prepared to lift the left or right index finger in response to the auditory cue. In half of the trials in both experiments, a suprathreshold vibratory stimulus was presented with equal probability to either finger, to which the participants made a speeded foot response. The results showed facilitation in the processing of targets delivered at the goal location of the prepared movement (Experiment 1), as well as at the effector of the prepared movement (Experiment 2). These results are discussed within the framework of theories on motor preparation and spatial attention.
Min Jeong Kang; Ming Hsu; Ian M. Krajbich; George Loewenstein; Samuel M. McClure; Joseph Tao-yi Wang; Colin F. Camerer The wick in the candle of learning: Epistemic curiosity activates reward circuitry and enhances memory Journal Article In: Psychological Science, vol. 20, no. 8, pp. 963–974, 2009. Curiosity has been described as a desire for learning and knowledge, but its underlying mechanisms are not well understood. We scanned subjects with functional magnetic resonance imaging while they read trivia questions. The level of curiosity when reading questions was correlated with activity in caudate regions previously suggested to be involved in anticipated reward. This finding led to a behavioral study, which showed that subjects spent more scarce resources (either limited tokens or waiting time) to find out answers when they were more curious. The functional imaging also showed that curiosity increased activity in memory areas when subjects guessed incorrectly, which suggests that curiosity may enhance memory for surprising new information. This prediction about memory enhancement was confirmed in a behavioral study: Higher curiosity in an initial session was correlated with better recall of surprising answers 1 to 2 weeks later.
Timothy L. Hodgson; Benjamin A. Parris; Nicola J. Gregory; Tracey Jarvis The saccadic Stroop effect: Evidence for involuntary programming of eye movements by linguistic cues Journal Article In: Vision Research, vol. 49, no. 5, pp. 569–574, 2009. The effect of automatic priming of behaviour by linguistic cues is well established. However, as yet these effects have not been directly demonstrated for eye movement responses. We investigated the effect of linguistic cues on eye movements using a modified version of the Stroop task in which a saccade was made to the location of a peripheral colour patch which matched the "ink" colour of a centrally presented word cue. The words were either colour words ("red", "green", "blue", "yellow") or location words ("up", "down", "left", "right"). As in the original version of the Stroop task the identity of the word could be either congruent or incongruent with the response location. The results showed that oculomotor programming was influenced by word identity, even though the written word provided no task relevant information. Saccade latency was increased on incongruent trials and an increased frequency of error saccades was observed in the direction congruent with the word identity. The results argue against traditional distinctions between reflexive and voluntary programming of saccades and suggest that linguistic cues can also influence eye movement programming in an automatic manner.
Lee Hogarth; Anthony Dickinson; Theodora Duka Detection versus sustained attention to drug cues have dissociable roles in mediating drug seeking behavior Journal Article In: Experimental and Clinical Psychopharmacology, vol. 17, no. 1, pp. 21–30, 2009. It is commonly thought that attentional bias for drug cues plays an important role in motivating human drug-seeking behavior. To assess this claim, two groups of smokers were trained in a discrimination task in which a tobacco-seeking response was rewarded only in the presence of 1 particular stimulus (the S+). The key manipulation was that whereas 1 group could control the duration of S+ presentation, for the second group, this duration was fixed. The results showed that the fixed-duration group acquired a sustained attentional bias to the S+ over training, indexed by greater dwell time and fixation count, which emerged in parallel with the control exerted by the S+ over tobacco-seeking behavior. By contrast, the controllable-duration group acquired no sustained attentional bias for S+ and instead used efficient detection of the S+ to achieve a comparable level of control over tobacco seeking. These data suggest that detection and sustained attention to drug cues have dissociable roles in enabling drug cues to motivate drug-seeking behavior, which has implications for attentional retraining as a treatment for addiction.
Andrew Hollingworth; Steven J. Luck The role of visual working memory in the control of gaze during visual search Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 4, pp. 936–949, 2009. We investigated the interactions among visual working memory (VWM), attention, and gaze control in a visual search task that was performed while a color was held in VWM for a concurrent discrimination task. In the search task, participants were required to foveate a cued item within a circular array of colored objects. During the saccade to the target, the array was sometimes rotated so that the eyes landed midway between the target object and an adjacent distractor object, necessitating a second saccade to foveate the target. When the color of the adjacent distractor matched a color being maintained in VWM, execution of this secondary saccade was impaired, indicating that the current contents of VWM bias saccade targeting mechanisms that direct gaze toward target objects during visual search.
Janet H. Hsiao; Garrison W. Cottrell Not all visual expertise is holistic, but it may be leftist: The case of Chinese character recognition Journal Article In: Psychological Science, vol. 20, no. 4, pp. 455–463, 2009. We examined whether two purportedly face-specific effects, holistic processing and the left-side bias, can also be observed in expert-level processing of Chinese characters, which are logographic and share many properties with faces. Non-Chinese readers (novices) perceived these characters more holistically than Chinese readers (experts). Chinese readers had a better awareness of the components of characters, which were not clearly separable to novices. This finding suggests that holistic processing is not a marker of general visual expertise; rather, holistic processing depends on the features of the stimuli and the tasks typically performed on them. In contrast, results for the left-side bias were similar to those obtained in studies of face perception. Chinese readers exhibited a left-side bias in the perception of mirror-symmetric characters, whereas novices did not; this effect was also reflected in eye fixations. Thus, the left-side bias may be a marker of visual expertise.
P. -J. Hsieh; P. U. Tse Motion fading and the motion aftereffect share a common process of neural adaptation Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 4, pp. 724–733, 2009. After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here, we show that this motion fading occurs not only for slowly moving stimuli, but also for stimuli moving at high speed; after prolonged viewing of high-speed stimuli, the stimuli appear to slow down but not to stop. We report psychophysical evidence that the same neural adaptation process likely gives rise to motion fading and to the motion aftereffect.
P. -J. Hsieh; P. U. Tse Feature mixing rather than feature replacement during perceptual filling-in Journal Article In: Vision Research, vol. 49, no. 4, pp. 439–450, 2009. 'Filling-in' occurs when a retinally stabilized object subjectively appears to vanish following perceptual fading of its boundaries. The term 'filling-in' literally means that information about the apparently vanished object is lost and replaced solely by information arising from the surrounding background. However, we find evidence that the mechanism of 'filling-in' can actually involve a process of 'feature mixing' rather than 'feature replacement,' whereby features on either side of a perceptually faded boundary merge. Here we investigate the properties of feature mixing and specify certain conditions under which such mixing occurs. Our results show that, when using visual stimuli composed of spatially alternating stripes containing different luminances or motion signals, and when using the neon-color-spreading paradigm, the filled-in luminance, motion, or color is approximately the area and magnitude weighted average of the background and the foreground luminance, motion, or color, respectively. Together, these results demonstrate that, under at least certain conditions, 'filling-in' may involve a process of feature mixing or feature averaging rather than one of feature replacement.
Ute Leonards; Christine Mohr Schizotypal personality traits influence idiosyncratic initiation of saccadic face exploration Journal Article In: Vision Research, vol. 49, no. 19, pp. 2404–2413, 2009. Visual face exploration is usually biased to the left half of a presented face. Recent findings now indicate that the first saccade in face exploration has a strong idiosyncratic component with around 30% of healthy individuals showing a consistent rightward bias. We investigated in a random sample of 64 right-handed healthy participants whether this rightward bias might relate to individual differences, i.e. a psychotic-like thinking style (schizotypy). Elevated positive (magical ideation) but not negative (physical anhedonia) schizotypy scores accounted for a pronounced left-face preference for first saccades. Furthermore, when using magical ideation and physical anhedonia to group individuals according to their median scale scores into four groups (either both scores elevated or low, or mixed with one score elevated, one low), participants with both scores elevated exhibited the most pronounced left-face preference and participants with both scores low the least. The same participant groups did not differ with respect to their side preference in exploring fractals, nor with respect to other exploration parameters such as first fixation duration, number of saccades or scanpath length. These findings indicate pronounced right-hemispheric dominance for face exploration in healthy individuals with elevated positive schizotypal thought. These findings contrast with expectations from studies with schizophrenic patients, and point to the relevance of individual differences in lateralized face processing.
Bettina Olk; Alan Kingstone A new look at aging and performance in the antisaccade task: The impact of response selection Journal Article In: European Journal of Cognitive Psychology, vol. 21, no. 2-3, pp. 406–427, 2009. Aged adults respond more slowly and less accurately in the antisaccade task, in which a saccade away from a visual stimulus is required. This decreased performance has been attributed to a decline in the ability to inhibit prepotent responses with age. Considering that antisaccades also involve response selection, the present experiment investigated the contribution of inhibition and response selection. Young and aged adults were compared between conditions that required varying percentages of prosaccades, antisaccades, and no-go trials. The comparison between no-go (inhibition of a prosaccade) and antisaccade trials (inhibition of a prosaccade and selection of an antisaccade) showed significantly worse performance in the antisaccade task, especially for the older group, suggesting that they failed to select the antisaccade in a situation in which a competing, prepotent response is available. The impact of this response selection failure was underlined by an equivalent ability of both groups to impose inhibition.
Alper Açik; Selim Onat; Frank Schumann; Wolfgang Einhäuser; Peter König Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories Journal Article In: Vision Research, vol. 49, no. 12, pp. 1541–1553, 2009. During viewing of natural scenes, do low-level features guide attention, and if so, does this depend on higher-level features? To answer these questions, we studied the image category dependence of low-level feature modification effects. Subjects fixated contrast-modified regions often in natural scene images, while smaller but significant effects were observed for urban scenes and faces. Surprisingly, modifications in fractal images did not influence fixations. Further analysis revealed an inverse relationship between modification effects and higher-level, phase-dependent image features. We suggest that high- and mid-level features - such as edges, symmetries, and recursive patterns - guide attention if present. However, if the scene lacks such diagnostic properties, low-level features prevail. We posit a hierarchical framework, which combines aspects of bottom-up and top-down theories and is compatible with our data.
Korbinian Moeller; Martin H. Fischer; Hans-Christoph Nuerk; Klaus Willmes Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 2, pp. 323–334, 2009. While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
Korbinian Moeller; S. Neuburger; L. Kaufmann; K. Landerl; Hans-Christoph Nuerk Basic number processing deficits in developmental dyscalculia: Evidence from eye tracking Journal Article In: Cognitive Development, vol. 24, no. 4, pp. 371–386, 2009. Recent research suggests that developmental dyscalculia is associated with a subitizing deficit (i.e., the inability to quickly enumerate small sets of up to 3 objects). However, the nature of this deficit has not previously been investigated. In the present study the eye-tracking methodology was employed to clarify whether (a) the subitizing deficit of two boys with dyscalculia resulted from a general slowing in the access to magnitude representation, or (b) children with dyscalculia resort to a back-up counting strategy even for small object sets. In a dot-counting task, a standard problem size effect for the number of fixations required to encode the presented numerosity within the subitizing range was observed. Together with the finding that problem size had no impact on the average fixation duration, this result suggested that children with dyscalculia may indeed have to count, while typically developing controls are able to enumerate the number of dots in parallel, i.e., subitize. Implications for the understanding of developmental dyscalculia are considered.
Arash Afraz; Patrick Cavanagh The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations Journal Article In: Journal of Vision, vol. 9, no. 10, pp. 1–17, 2009. In four experiments, we measured the gender-specific face-aftereffect following subject's eye movement, head rotation, or head movement toward the display and following movement of the adapting stimulus itself to a new test location. In all experiments, the face aftereffect was strongest at the retinal position, orientation, and size of the adaptor. There was no advantage for the spatiotopic location in any experiment nor was there an advantage for the location newly occupied by the adapting face after it moved in the final experiment. Nevertheless, the aftereffect showed a broad gradient of transfer across location, orientation and size that, although centered on the retinotopic values of the adapting stimulus, covered ranges far exceeding the tuning bandwidths of neurons in early visual cortices. These results are consistent with a high-level site of adaptation (e.g. FFA) where units of face analysis have modest coverage of visual field, centered in retinotopic coordinates, but relatively broad tolerance for variations in size and orientation.
Ozgur E. Akman; Richard A. Clement; David S. Broomhead; Sabira K. Mannan; Ian Moorhead; Hugh R. Wilson Probing bottom-up processing with multistable images Journal Article In: Journal of Eye Movement Research, vol. 1, no. 3, pp. 1–7, 2009. The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of 8 subjects were recorded during free viewing of the Marroquin pattern in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models.
Celia J. A. Morgan; Vyv Huddy; Michelle Lipton; H. Valerie Curran; Eileen M. Joyce Is persistent ketamine use a valid model of the cognitive and oculomotor deficits in schizophrenia? Journal Article In: Biological Psychiatry, vol. 65, no. 12, pp. 1099–1102, 2009. @article{Morgan2009, Background: Acute ketamine has been shown to model features of schizophrenia such as psychotic symptoms, cognitive deficits and smooth pursuit eye movement dysfunction. There have been suggestions that chronic ketamine may also produce an analogue of the disorder. In this study, we investigated the effect of persistent recreational ketamine use on tests of episodic and working memory and on oculomotor tasks of smooth pursuit and pro- and antisaccades. Methods: Twenty ketamine users were compared with 1) 20 first-episode schizophrenia patients, 2) 17 polydrug control subjects who did not use ketamine but were matched to the ketamine users for other drug use, and 3) 20 non-drug-using control subjects. All groups were matched for estimated premorbid IQ. Results: Ketamine users made more antisaccade errors than both control groups but did not differ from patients. Ketamine users performed better than schizophrenia patients on smooth pursuit, antisaccade metrics, and both memory tasks but did not differ from control groups. Conclusions: Problems inhibiting reflexive eye movements may be a consequence of repeated ketamine self-administration. The absence of any other oculomotor or cognitive deficit present in schizophrenia suggests that chronic self-administration of ketamine may not be a good model of these aspects of the disorder. |
Weimin Mou; Xianyun Liu; Timothy P. McNamara Layout geometry in encoding and retrieval of spatial memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 1, pp. 83–93, 2009. @article{Mou2009, Two experiments investigated whether the spatial reference directions that are used to specify objects' locations in memory can be solely determined by layout geometry. Participants studied a layout of objects from a single viewpoint while their eye movements were recorded. Subsequently, participants used memory to make judgments of relative direction (e.g., "Imagine you are standing at X, facing Y, please point to Z"). When the layout had a symmetric axis that was different from participants' viewing direction, the sequence of eye fixations on objects during learning and the preferred directions in pointing judgments were both determined by the direction of the symmetric axis. These results provide further evidence that interobject spatial relations are represented in memory with intrinsic frames of reference. |
Jérôme Munuera; Pierre Morel; Jean-Rene Duhamel; Sophie Deneve Optimal sensorimotor control in eye movement sequences Journal Article In: Journal of Neuroscience, vol. 29, no. 10, pp. 3026–3035, 2009. @article{Munuera2009, Fast and accurate motor behavior requires combining noisy and delayed sensory information with knowledge of self-generated body motion; much evidence indicates that humans do this in a near-optimal manner during arm movements. However, it is unclear whether this principle applies to eye movements. We measured the relative contributions of visual sensory feedback and the motor efference copy (and/or proprioceptive feedback) when humans perform two saccades in rapid succession, the first saccade to a visual target and the second to a memorized target. Unbeknownst to the subject, we introduced an artificial motor error by randomly "jumping" the visual target during the first saccade. The correction of the memory-guided saccade allowed us to measure the relative contributions of visual feedback and efferent copy (and/or proprioceptive feedback) to motor-plan updating. In a control experiment, we extinguished the target during the saccade rather than changing its location to measure the relative contribution of motor noise and target localization error to saccade variability without any visual feedback. The motor noise contribution increased with saccade amplitude, but remained <30% of the total variability. Subjects adjusted the gain of their visual feedback for different saccade amplitudes as a function of its reliability. Even during trials where subjects performed a corrective saccade to compensate for the target-jump, the correction by the visual feedback, while stronger, remained far below 100%. In all conditions, an optimal controller predicted the visual feedback gain well, suggesting that humans combine optimally their efferent copy and sensory feedback when performing eye movements. |
René M. Müri; D. Cazzoli; Thomas Nyffeler; Tobias Pflugshaupt Visual exploration pattern in hemineglect Journal Article In: Psychological Research, vol. 73, no. 2, pp. 147–157, 2009. @article{Mueri2009, The analysis of eye movement parameters in visual neglect, such as cumulative fixation duration, saccade amplitude, or the number of saccades, has been used to probe attention deficits in neglect patients, since the pattern of exploratory eye movements has been taken as a strong index of attention distribution. The current overview of the visual neglect literature places its emphasis on studies dealing with eye movement and exploration analysis. We present our own results in 15 neglect patients, whose free exploration behavior was analyzed while they viewed 32 naturalistic color photographs of everyday scenes. Cumulative fixation duration, the spatial distribution of fixations in the horizontal and vertical plane, and the number and amplitude of exploratory saccades were analyzed and compared with the results of an age-matched control group. A main result of our study was that in neglect patients, the fixation distribution during free exploration of natural scenes is influenced not only by the left-right bias in the horizontal direction but also by the vertical direction. |
Erhardt Barth; Eleonora Vig; Michael Dorr Efficient visual coding and the predictability of eye movements on natural movies Journal Article In: Spatial Vision, vol. 22, no. 5, pp. 397–408, 2009. @article{Barth2009, We deal with the analysis of eye movements made on natural movies in free-viewing conditions. Saccades are detected and used to label two classes of movie patches as attended and non-attended. Machine learning techniques are then used to determine how well the two classes can be separated, i.e., how predictable saccade targets are. Although very simple saliency measures are used and then averaged to obtain just one average value per scale, the two classes can be separated with an ROC score of around 0.7, which is higher than previously reported results. Moreover, predictability is analysed for different representations to obtain indirect evidence for the likelihood of a particular representation. It is shown that the predictability correlates with the local intrinsic dimension in a movie. |
Sarah Bate; Catherine Haslam; Timothy L. Hodgson Angry faces are special too: Evidence From the visual scanpath Journal Article In: Neuropsychology, vol. 23, no. 5, pp. 658–667, 2009. @article{Bate2009a, Traditional models of face processing posit independent pathways for the processing of facial identity and facial expression (e.g., Bruce & Young, 1986). However, such models have been questioned by recent reports that suggest positive expressions may facilitate recognition (e.g., Baudouin et al., 2000), although little attention has been paid to the role of negative expressions. The current study used eye movement indicators to examine the influence of emotional expression (angry, happy, neutral) on the recognition of famous and novel faces. In line with previous research, the authors found some evidence that only happy expressions facilitate the processing of famous faces. However, the processing of novel faces was enhanced by the presence of an angry expression. Contrary to previous findings, this paper suggests that angry expressions also have an important role in the recognition process, and that the influence of emotional expression is modulated by face familiarity. The implications of this finding are discussed in relation to (1) current models of face processing, and (2) theories of oculomotor control in the viewing of facial stimuli. |
Sarah Bate; Catherine Haslam; Ashok Jansari; Timothy L. Hodgson Covert face recognition relies on affective valence in congenital prosopagnosia Journal Article In: Cognitive Neuropsychology, vol. 26, no. 4, pp. 391–411, 2009. @article{Bate2009, Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied–nice compared to studied–aggressive faces, and performance for studied–neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity. |
Paul M. Bays; R. F. G. Catalao; Masud Husain The precision of visual working memory is set by allocation of a shared resource Journal Article In: Journal of Vision, vol. 9, no. 10, pp. 7–7, 2009. @article{Bays2009, The mechanisms underlying visual working memory have recently become controversial. One account proposes a small number of memory "slots," each capable of storing a single visual object with fixed precision. A contrary view holds that working memory is a shared resource, with no upper limit on the number of items stored; instead, the more items that are held in memory, the less precisely each can be recalled. Recent findings from a color report task have been taken as crucial new evidence in favor of the slot model. However, while this task has previously been thought of as a simple test of memory for color, here we show that performance also critically depends on memory for location. When errors in memory are considered for both color and location, performance on this task is in fact well explained by the resource model. These results demonstrate that visual working memory consists of a common resource distributed dynamically across the visual scene, with no need to invoke an upper limit on the number of objects represented. |
Mark W. Becker; Brian Detweiler-Bedell Early detection and avoidance of threatening faces during passive viewing Journal Article In: Quarterly Journal of Experimental Psychology, vol. 62, no. 7, pp. 1257–1264, 2009. @article{Becker2009, To evaluate whether there is an early attentional bias towards negative stimuli, we tracked participants' eyes while they passively viewed displays composed of four Ekman faces. In Experiment 1 each display consisted of three neutral faces and one face depicting fear or happiness. In half of the trials, all faces were inverted. Although the passive viewing task should have been very sensitive to attentional biases, we found no evidence that overt attention was biased towards fearful faces. Instead, people tended to actively avoid looking at the fearful face. This avoidance was evident very early in scene viewing, suggesting that the threat associated with the faces was evaluated rapidly. Experiment 2 replicated this effect and extended it to angry faces. In sum, our data suggest that negative facial expressions are rapidly analysed and influence visual scanning, but, rather than attract attention, such faces are actively avoided. |
Stefanie I. Becker; Ulrich Ansorge; Massimo Turatto Saccades reveal that allocentric coding of the moving object causes mislocalization in the flash-lag effect Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 6, pp. 1313–1324, 2009. @article{Becker2009a, The flash-lag effect is a visual misperception of a position of a flash relative to that of a moving object: Even when both are at the same position, the flash is reported to lag behind the moving object. In the present study, the flash-lag effect was investigated with eye-movement measurements: Subjects were required to saccade to either the flash or the moving object. The results showed that saccades to the flash were precise, whereas saccades to the moving object showed an offset in the direction of motion. A further experiment revealed that this offset in the saccades to the moving object was eliminated when the whole background flashed. This result indicates that saccadic offsets to the moving stimulus critically depend on the spatially distinctive flash in the vicinity of the moving object. The results are incompatible with current theoretical explanations of the flash-lag effect, such as the motion extrapolation account. We propose that allocentric coding of the position of the moving object could account for the flash-lag effect. |
Artem V. Belopolsky; Jan Theeuwes When are attention and saccade preparation dissociated? Journal Article In: Psychological Science, vol. 20, no. 11, pp. 1340–1347, 2009. @article{Belopolsky2009, To understand the mechanisms of visual attention, it is crucial to know the relationship between attention and saccades. Some theories propose a close relationship, whereas others view the attention and saccade systems as completely independent. One possible way to resolve this controversy is to distinguish between the maintenance and shifting of attention. The present study used a novel paradigm that allowed simultaneous measurement of attentional allocation and saccade preparation. Saccades toward the location where attention was maintained were either facilitated or suppressed depending on the probability of making a saccade to that location and the match between the attended location and the saccade location on the previous trial. Shifting attention to another location was always associated with saccade facilitation. The findings provide a new view, demonstrating that the maintenance of attention and shifting of attention differ in their relationship to the oculomotor system. |
Jeroen S. Benjamins; Ignace T. C. Hooge; Jacco C. Elst; Alexander H. Wertheim; Frans A. J. Verstraten Search time critically depends on irrelevant subset size in visual search Journal Article In: Vision Research, vol. 49, pp. 398–406, 2009. @article{Benjamins2009, In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items that differs in only one feature from the target (a 1F set) can be selected, while another set of items that differs in two features from the target (a 2F set) is ignored. We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage of 2F non-targets that have to be ignored was expected to result in increasingly faster search, since it decreases the size of the 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, the search time was longer compared to the search time in other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. The occurrence of longer search times in displays containing 5% 2F non-targets might be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets. |
Tanja C. W. Nijboer; Stefan Van der Stigchel Is attention essential for inducing synesthetic colors? Evidence from oculomotor distractors Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–9, 2009. @article{Nijboer2009, In studies investigating visual attention in synesthesia, the targets usually induce a synesthetic color. To measure to what extent attention is necessary to induce synesthetic color experiences, one needs a task in which the synesthetic color is induced by a task-irrelevant distractor. In the current study, an oculomotor distractor task was used in which an eye movement was to be made to a physically colored target while ignoring a single physically colored or synesthetic distractor. Whereas many erroneous eye movements were made to distractors with an identical hue as the target (i.e., capture), much less interference was found with synesthetic distractors. The interference of synesthetic distractors was comparable with achromatic non-digit distractors. These results suggest that attention and hence overt recognition of the inducing stimulus are essential for the synesthetic color experience to occur. |
Satoshi Nishida; Tomohiro Shibata; Kazushi Ikeda Prediction of human eye movements in facial discrimination tasks Journal Article In: Artificial Life and Robotics, vol. 14, no. 3, pp. 348–351, 2009. @article{Nishida2009, Under natural viewing conditions, human observers selectively allocate their attention to subsets of the visual input. Since overt allocation of attention appears as eye movements, the mechanism of selective attention can be uncovered through computational studies of eye-movement predictions. Since top-down attentional control in a task is expected to modulate eye movements significantly, the models that take a bottom-up approach based on low-level local properties are not expected to suffice for prediction. In this study, we introduce two representative models, apply them to a facial discrimination task with morphed face images, and evaluate their performance by comparing them with the human eye-movement data. The result shows that they are not good at predicting eye movements in this task. |
Atsushi Noritake; Bob Uttl; Masahiko Terao; Masayoshi Nagai; Junji Watanabe; Akihiro Yagi Saccadic compression of rectangle and Kanizsa figures: Now you see it, now you don't Journal Article In: PLoS ONE, vol. 4, no. 7, pp. e6383, 2009. @article{Noritake2009, BACKGROUND: Observers misperceive the location of points within a scene as compressed towards the goal of a saccade. However, recent studies suggest that saccadic compression does not occur for discrete elements such as dots when they are perceived as unified objects like a rectangle. METHODOLOGY/PRINCIPAL FINDINGS: We investigated the magnitude of horizontal vs. vertical compression for the Kanizsa figure (a collection of discrete elements unified into a single perceptual object by illusory contours) and control rectangle figures. Participants were presented with Kanizsa and control figures and had to decide whether the horizontal or vertical length of the stimulus was longer using the two-alternative forced-choice method. Our findings show that large but not small Kanizsa figures are perceived as compressed, and that such compression is large in the horizontal dimension and small or nil in the vertical dimension. In contrast to recent findings, we found no saccadic compression for control rectangles. CONCLUSIONS: Our data suggest that compression of the Kanizsa figure has been overestimated in previous research due to methodological artifacts, and highlight the importance of studying perceptual phenomena by multiple methods. |
Lauri Nummenmaa; Jukka Hyönä; Manuel G. Calvo Emotional scene content drives the saccade generation system reflexively Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 305–323, 2009. @article{Nummenmaa2009, The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster when the cue pointed toward the emotional picture rather than toward the neutral picture. Experiment 2 replicated these findings with a reflexive saccade task, in which abrupt luminosity changes were used as exogenous saccade cues. In Experiment 3, participants performed vertical reflexive saccades that were orthogonal to the emotional-neutral picture locations. Saccade endpoints and trajectories deviated away from the visual field in which the emotional scenes were presented. Experiment 4 showed that computationally modeled visual saliency does not vary as a function of scene content and that inversion abolishes the rapid orienting toward the emotional scenes. Visual confounds cannot thus explain the results. The authors conclude that early saccade target selection and execution processes are automatically influenced by emotional picture content. This reveals processing of meaningful scene content prior to overt attention to the stimulus. |
Lauri Nummenmaa; Jukka Hyönä; Jari K. Hietanen I'll walk this way: Eyes reveal the direction of locomotion and make passersby look and go the other way Journal Article In: Psychological Science, vol. 20, no. 12, pp. 1454–1458, 2009. @article{Nummenmaa2009a, This study shows that humans (a) infer other people's movement trajectories from their gaze direction and (b) use this information to guide their own visual scanning of the environment and plan their own movement. In two eye-tracking experiments, participants viewed an animated character walking directly toward them on a street. The character looked constantly to the left or to the right (Experiment 1) or suddenly shifted his gaze from direct to the left or to the right (Experiment 2). Participants had to decide on which side they would skirt the character. They shifted their gaze toward the direction in which the character was not gazing, that is, away from his gaze, and chose to skirt him on that side. Gaze following is not always an obligatory social reflex; social-cognitive evaluations of gaze direction can lead to reversed gaze-following behavior. |
Kate Janse Van Rensburg; Adrian Taylor; Timothy L. Hodgson The effects of acute exercise on attentional bias towards smoking-related stimuli during temporary abstinence from smoking Journal Article In: Addiction, vol. 104, no. 11, pp. 1910–1917, 2009. @article{VanRensburg2009, RATIONALE: Attentional bias towards smoking-related cues is increased during abstinence and can predict relapse after quitting. Exercise has been found to reduce cigarette cravings and desire to smoke during temporary abstinence and attenuate increased cravings in response to smoking cues. OBJECTIVE: To assess the acute effects of exercise on attentional bias to smoking-related cues during temporary abstinence from smoking. METHOD: In a randomized cross-over design, on separate days regular smokers (n = 20) undertook 15 minutes of exercise (moderate intensity stationary cycling) or passive seating following 15 hours of nicotine abstinence. Attentional bias was measured at baseline and post-treatment. The percentage of dwell time and direction of initial fixation was assessed during the passive viewing of a series of paired smoking and neutral images using an Eyelink II eye-tracking system. Self-reported desire to smoke was recorded at baseline, mid- and post-treatment and post-eye-tracking task. RESULTS: There was a significant condition x time interaction for desire to smoke, F(1,18) = 10.67 |
Eric D. Vidoni; Jason S. McCarley; Jodi D. Edwards; Lara A. Boyd Manual and oculomotor performance develop contemporaneously but independently during continuous tracking Journal Article In: Experimental Brain Research, vol. 195, no. 4, pp. 611–620, 2009. @article{Vidoni2009, The coordination of the oculomotor and manual effector systems is an important component of daily motor behavior. Previous work has primarily examined oculomotor/manual coordination in discrete targeting tasks. Here we extend this work to learning a tracking task that requires continuous response and movement update. Over two sessions, participants practiced controlling a computer mouse with movements of their arm to follow a target moving in a repeated sequence. Eye movements were also recorded. In a retention test, participants demonstrated sequence-specific learning with both effector systems, but differences between effectors also were apparent. Time series analysis and multiple linear regression were employed to probe spatial and temporal contributions to overall tracking accuracy within each effector system. Sequence-specific oculomotor learning occurred only in the spatial domain. By contrast, sequence-specific learning at the arm was evident only in the temporal domain. There was minimal interdependence in error rates for the two effector systems, underscoring their independence during tracking. These findings suggest that the oculomotor and manual systems learn contemporaneously, but performance improvements manifest differently and rely on different elements of motor execution. The results may in part be a function of what the motor learning system values for each effector as a function of its effector's inertial properties. |
Sébastien Tremblay; Jean Saint-Aubin Evidence of anticipatory eye movements in the spatial Hebb repetition effect: Insights for modeling sequence learning Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no. 5, pp. 1256–1265, 2009. @article{Tremblay2009, In the present study, the authors offer a window onto the mechanisms that drive the Hebb repetition effect through the analysis of eye movement and recall performance. In a spatial serial recall task in which sequences of dots are to be remembered in order, when one particular series is repeated every 4 trials, memory performance markedly improves over repetitions. This is known as the Hebb repetition effect. Eye movement recorded during the presentation of the to-be-remembered (TBR) information revealed that for the repeated sequence, participants fixated the location of the next TBR location before the actual presentation of the dot. The extent to which a TBR location was anticipated increased over repetition and occurred only for post-initial positions of the repeated sequence. Eye movement-based rehearsal activity was related to recall performance but not to sequence learning. The findings provide further evidence of anticipatory behavior in sequence learning and place key constraints on modeling the Hebb repetition effect. |
Po-He Tseng; Ran Carmi; Ian G. M. Cameron; Douglas P. Munoz; Laurent Itti Quantifying center bias of observers in free viewing of dynamic natural scenes Journal Article In: Journal of Vision, vol. 9, no. 7, pp. 4–4, 2009. @article{Tseng2009, Human eye-tracking studies have shown that gaze fixations are biased toward the center of natural scene stimuli ("center bias"). This bias contaminates the evaluation of computational models of attention and oculomotor behavior. Here we recorded eye movements from 17 participants watching 40 MTV-style video clips (with abrupt scene changes every 2-4 s), to quantify the relative contributions of five causes of center bias: photographer bias, motor bias, viewing strategy, orbital reserve, and screen center. Photographer bias was evaluated by five naive human raters and correlated with eye movements. The frequently changing scenes in MTV-style videos allowed us to assess how motor bias and viewing strategy affected center bias across time. In an additional experiment with 5 participants, videos were displayed at different locations within a large screen to investigate the influences of orbital reserve and screen center. Our results demonstrate quantitatively for the first time that center bias is correlated strongly with photographer bias and is influenced by viewing strategy at scene onset, while orbital reserve, screen center, and motor bias contribute minimally. We discuss methods to account for these influences to better assess computational models of visual attention and gaze using natural scene stimuli. |
Naotsugu Tsuchiya; Farshad Moradi; Csilla Felsen; Madoka Yamazaki; Ralph Adolphs Intact rapid detection of fearful faces in the absence of the amygdala Journal Article In: Nature Neuroscience, vol. 12, no. 10, pp. 1224–1225, 2009. @article{Tsuchiya2009, The amygdala is thought to process fear-related stimuli rapidly and nonconsciously. We found that an individual with complete bilateral amygdala lesions, who cannot recognize fear from faces, nonetheless showed normal rapid detection and nonconscious processing of those same fearful faces. We conclude that the amygdala is not essential for early stages of fear processing but, instead, modulates recognition and social judgment. |
Ilse Tydgat; Jonathan Grainger Serial position effects in the identification of letters, digits, and symbols Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 2, pp. 480–498, 2009. @article{Tydgat2009, In 6 experiments, the authors investigated the form of serial position functions for identification of letters, digits, and symbols presented in strings. The results replicated findings obtained with the target search paradigm, showing an interaction between the effects of serial position and type of stimulus, with symbols generating a distinct serial position function compared with letters and digits. When the task was 2-alternative forced choice, this interaction was driven almost exclusively by performance at the first position in the string, with letters and digits showing much higher levels of accuracy than symbols at this position. A final-position advantage was reinstated in Experiment 6 by placing the two alternative responses below the target string. The end-position (first and last positions) advantage for letters and digits compared with symbol stimuli was further confirmed with the bar-probe technique (postcued partial report) in Experiments 5 and 6. Overall, the results further support the existence of a specialized mechanism designed to optimize processing of strings of letters and digits by modifying the size and shape of retinotopic character detectors' receptive fields. |
Geoffrey Underwood; Tom Foulsham; Katherine Humphrey Saliency and scan patterns in the inspection of real-world scenes: Eye movements during encoding and recognition Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 812–834, 2009. @article{Underwood2009, How do sequences of eye fixations match each other when viewing a picture during encoding and again during a recognition test, and to what extent are fixation sequences (scan patterns) determined by the low-level visual features of the picture rather than the domain knowledge of the viewer? The saliency map model of visual attention was tested in two experiments to ask whether the rank ordering of regions by their saliency values can be used to predict the sequence of fixations made when first looking at an image. Experiment 1 established that the sequence of fixations on first inspection during encoding was similar to that made when looking at the picture the second time, in the recognition test. Experiment 2 confirmed this similarity of fixation sequences at encoding and recognition, and also found a similarity between scan patterns made during the initial recognition test and during a second recognition test 1 week later. The fixation scan patterns were not similar to those predicted by the saliency map model in either experiment, however. These conclusions are qualified by interactions involving the match between the content of the image and the domain of interest of the viewers. |
Carolin Wienrich; Uta Heße; Gisela Müller-Plath Eye movements and attention in visual feature search with graded target-distractor-similarity Journal Article In: Journal of Eye Movement Research, vol. 3, no. 1, pp. 1–19, 2009. @article{Wienrich2009, We conducted a visual feature search experiment in which we varied the target-distractor-similarity in four steps, the number of items (4, 6, and 8), and the presence of the target. In addition to classical search parameters like error rate and reaction time (RT), we analyzed saccade amplitudes, fixation durations, and the portion of reinspections (recurred fixation on an item with at least one different item fixated in between) and refixations (recurred fixation on an item without a different item fixated in between) per trial. When target-distractor-similarity was increased, more errors and longer RTs were observed, accompanied by shorter saccade amplitudes, longer fixation durations, and more reinspections/refixations. An increasing set size resulted in longer saccade amplitudes and shorter fixation durations. Finally, in target-absent trials we observed more reinspections than refixations, whereas in target-present trials refixations were more frequent than reinspections. The results on saccade amplitude and fixation duration support saliency-based search theories that assume an attentional focus variable in size according to task demands and a variable attentional dwell time. Reinspections and refixations seem to be rather a sign of incomplete perceptual processing of items than being due to memory failure. |
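The reinspection/refixation distinction defined in the Wienrich et al. abstract is directly computable from a sequence of fixated item identifiers. A minimal sketch (the function name and input format are illustrative assumptions, not from the paper): a refixation is an immediate repeat of the previously fixated item, while a reinspection is a return to an item after at least one different item was fixated in between.

```python
def count_recurrences(fixated_items):
    """Classify recurrent fixations in a sequence of fixated item IDs.

    Returns (reinspections, refixations):
    - refixation: same item as the immediately preceding fixation
    - reinspection: return to a previously fixated item after at
      least one different item was fixated in between
    """
    seen = set()
    reinspections = 0
    refixations = 0
    for i, item in enumerate(fixated_items):
        if i > 0 and item == fixated_items[i - 1]:
            refixations += 1       # immediate repeat
        elif item in seen:
            reinspections += 1     # return after an intervening item
        seen.add(item)
    return reinspections, refixations

# Example: A A B A C -> one refixation (2nd A), one reinspection (4th A)
print(count_recurrences(["A", "A", "B", "A", "C"]))  # (1, 1)
```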
Carrick C. Williams; Alexander Pollatsek; Kyle R. Cave; Michael J. Stroud More than just finding color: Strategy in global visual search is shaped by learned target probabilities Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 688–699, 2009. @article{Williams2009, In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue T) was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search though clusters, the use of color was determined by the probability that the target would appear in a cluster of a certain color type: When the target was equally likely to be in any cluster containing the target color, fixations were directed to those clusters approximately equally, but when targets were more likely to appear in clusters with more target-color items, those clusters were likely to be fixated sooner. (The target probabilities guided search without explicit instruction.) Once fixated, the time spent within a cluster depended on the number of target-color elements, consistent with a search of only those elements. Thus, between-cluster search was influenced by global target probabilities signaled by amount of color or color ratios, whereas within-cluster search was directly driven by presence of the target color. |
Tamara L. Watson; Bart Krekelberg The relationship between saccadic suppression and perceptual stability Journal Article In: Current Biology, vol. 19, no. 12, pp. 1040–1043, 2009. @article{Watson2009, Introspection makes it clear that we do not see the visual motion generated by our saccadic eye movements. We refer to the lack of awareness of the motion across the retina that is generated by a saccade as saccadic omission [1]: the visual stimulus generated by the saccade is omitted from our subjective awareness. In the laboratory, saccadic omission is often studied by investigating saccadic suppression, the reduction in visual sensitivity before and during a saccade (see Ross et al. [2] and Wurtz [3] for reviews). We investigated whether perceptual stability requires that a mechanism like saccadic suppression removes perisaccadic stimuli from visual processing to prevent their presumed harmful effect on perceptual stability [4, 5]. Our results show that a stimulus that undergoes saccadic omission can nevertheless generate a shape contrast illusion. This illusion can be generated when the inducer and test stimulus are separated in space and is therefore thought to be generated at a later stage of visual processing [6]. This shows that perceptual stability is attained without removing stimuli from processing and suggests a conceptually new view of perceptual stability in which perisaccadic stimuli are processed by the early visual system, but these signals are prevented from reaching awareness at a later stage of processing. |
Andrew E. Welchman; Julie M. Harris; Eli Brenner Extra-retinal signals support the estimation of 3D motion Journal Article In: Vision Research, vol. 49, no. 7, pp. 782–789, 2009. @article{Welchman2009, In natural settings, our eyes tend to track approaching objects. To estimate motion, the brain should thus take account of eye movements, perhaps using retinal cues (retinal slip of static objects) or extra-retinal signals (motor commands). Previous work suggests that extra-retinal ocular vergence signals do not support the perceptual judgments. Here, we re-evaluate this conclusion, studying motion judgments based on retinal slip and extra-retinal signals. We find that (1) each cue can be sufficient, and, (2) retinal and extra-retinal signals are combined, when estimating motion-in-depth. This challenges the accepted view that observers are essentially blind to eye vergence changes. |
Hyejin Yang; Xin Chen; Gregory J. Zelinsky A new look at novelty effects: Guiding search away from old distractors Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 3, pp. 554–564, 2009. @article{Yang2009c, We examined whether search is guided to novel distractors. In Experiment 1, subjects searched for a target among one new and a variable number of old distractors. Search displays in Experiment 2 consisted of an equal number of new, old, and familiar distractors (the latter repeated occasionally). We found that eye movements were preferentially directed to a new distractor on target-absent trials and that subjects tended to immediately fixate a new distractor after leaving the target on target-present trials. In both cases, first fixations on old distractors were consistently less frequent than could be explained by chance. We interpret these patterns as evidence for negative guidance: Subjects learn the visual features associated with the set of old distractors and then guide their search away from these features, ultimately resulting in the preferential fixation of novel distractors. |
Hyejin Yang; Gregory J. Zelinsky Visual search is guided to categorically defined targets Journal Article In: Vision Research, vol. 49, no. 16, pp. 2095–2103, 2009. @article{Yang2009b, To determine whether categorical search is guided we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class. |
Weilei Yi; Dana Ballard Recognizing behavior in hand-eye coordination patterns Journal Article In: International Journal of Humanoid Robotics, vol. 06, no. 03, pp. 337–359, 2009. @article{Yi2009, Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior such as the moment-to-moment deployment and coordination of hand, head and eye gaze used in complex tasks. We show how this data from human subjects performing a task can be used to program a dynamic Bayes network (DBN) which in turn can be used to recognize new performance instances. As a specific demonstration we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time. |
Jan Zwickel; Hermann J. Müller Eye movements as a means to evaluate and improve robots Journal Article In: International Journal of Social Robotics, vol. 1, no. 4, pp. 357–366, 2009. @article{Zwickel2009, With an increase in their capabilities, robots start to play a role in everyday settings. This necessitates a step from a robot-centered (i.e., teaching humans to adapt to robots) to a more human-centered approach (where robots integrate naturally into human activities). Achieving this will increase the effectiveness of robot usage (e.g., shortening the time required for learning), reduce errors, and increase user acceptance. Robotic camera control will play an important role for a more natural and easier-to-interpret behavior, owing to the central importance of gaze in human communication. This study is intended to provide a first step towards improving camera control by a better understanding of human gaze behavior in social situations. To this end, we registered the eye movements of humans watching different types of movies. In all movies, the same two triangles moved around in a self-propelled fashion. However, crucially, some of the movies elicited the attribution of mental states to the triangles, while others did not. This permitted us to directly distinguish eye movement patterns relating to the attribution of mental states in (perceived) social situations, from the patterns in non-social situations. We argue that a better understanding of what characterizes human gaze patterns in social situations will help shape robotic behavior, make it more natural for humans to communicate with robots, and establish joint attention (to certain objects) between humans and robots. In addition, a better understanding of human gaze in social situations will provide a measure for evaluating whether robots are perceived as social agents rather than non-intentional machines. 
This could help decide which behaviors a robot should display in order to be perceived as a social interaction partner. |
Melissa L. -H. Võ; John M. Henderson Does gravity matter? Effects of semantic and syntactic inconsistencies on the allocation of attention during scene perception Journal Article In: Journal of Vision, vol. 9, no. 3, pp. 24–24, 2009. @article{Vo2009, It has been shown that attention and eye movements during scene perception are preferentially allocated to semantically inconsistent objects compared to their consistent controls. However, there has been a dispute over how early during scene viewing such inconsistencies are detected. In the study presented here, we introduced syntactic object–scene inconsistencies (i.e., floating objects) in addition to semantic inconsistencies to investigate the degree to which they attract attention during scene viewing. In Experiment 1 participants viewed scenes in preparation for a subsequent memory task, while in Experiment 2 participants were instructed to search for target objects. In neither experiment were we able to find evidence for extrafoveal detection of either type of inconsistency. However, upon fixation both semantically and syntactically inconsistent objects led to increased object processing as seen in elevated gaze durations and number of fixations. Interestingly, the semantic inconsistency effect was diminished for floating objects, which suggests an interaction of semantic and syntactic scene processing. This study is the first to provide evidence for the influence of syntactic in addition to semantic object–scene inconsistencies on eye movement behavior during real-world scene viewing. |
Hsueh-Cheng Wang; Alex D. Hwang; Marc Pomplun Object frequency and predictability effects on eye fixation durations in real-world scene viewing Journal Article In: Journal of Eye Movement Research, vol. 3, no. 3, pp. 1–10, 2009. @article{Wang2009c, During text reading, the durations of eye fixations decrease with greater frequency and predictability of the currently fixated word (Rayner, 1998; 2009). However, it has not been tested whether those results also apply to scene viewing. We computed object frequency and predictability from both linguistic and visual scene analysis (LabelMe, Russell et al., 2008), and Latent Semantic Analysis (Landauer et al., 1998) was applied to estimate predictability. In a scene-viewing experiment, we found that, for small objects, linguistics-based frequency, but not scene-based frequency, had effects on first fixation duration, gaze duration, and total time. Both linguistic and scene-based predictability affected total time. Similar to reading, fixation duration decreased with higher frequency and predictability. For large objects, we found the direction of effects to be the inverse of those found in reading studies. These results suggest that the recognition of small objects in scene viewing shares some characteristics with the recognition of words in reading. |
Gregory J. Zelinsky; Joseph Schmidt An effect of referential scene constraint on search implies scene segmentation Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1004–1028, 2009. @article{Zelinsky2009, Subjects searched aerial images for a UFO target, which appeared hovering over one of five scene regions: Water, fields, foliage, roads, or buildings. Prior to search scene onset, subjects were either told the scene region where the target could be found (specified condition) or not (unspecified condition). Search times were faster and fewer eye movements were needed to acquire targets when the target region was specified. Subjects also distributed their fixations disproportionately in this region and tended to fixate the cued region sooner. We interpret these patterns as evidence for the use of referential scene constraints to partially confine search to a specified scene region. Importantly, this constraint cannot be due to learned associations between the scene and its regions, as these spatial relationships were unpredictable. These findings require the modification of existing theories of scene constraint to include segmentation processes that can rapidly bias search to cued regions. |
Joost C. Dessing; Leonie Oostwoud Wijdenes; C. E. Peper; Peter J. Beek Visuomotor transformation for interception: Catching while fixating Journal Article In: Experimental Brain Research, vol. 196, no. 4, pp. 511–527, 2009. @article{Dessing2009, Catching a ball involves a dynamic transformation of visual information about ball motion into motor commands for moving the hand to the right place at the right time. We previously formulated a neural model for this transformation to account for the consistent leftward movement biases observed in our catching experiments. According to the model, these biases arise within the representation of target motion as well as within the transformation from a gaze-centered to a body-centered movement command. Here, we examine the validity of the latter aspect of our model in a catching task involving gaze fixation. Gaze fixation should systematically influence biases in catching movements, because in the model movement commands are only generated in the direction perpendicular to the gaze direction. Twelve participants caught balls while gazing at a fixation point positioned either straight ahead or 14 degrees to the right. Four participants were excluded because they could not adequately maintain fixation. We again observed a consistent leftward movement bias, but the catching movements were unaffected by fixation direction. This result refutes our proposal that the leftward bias partly arises within the visuomotor transformation, and suggests instead that the bias predominantly arises within the early representation of target motion, specifically through an imbalance in the represented radial and azimuthal target motion. |
Christel Devue; Stefan Van der Stigchel; Serge Brédart; Jan Theeuwes You do not find your own face faster; you just look at it longer Journal Article In: Cognition, vol. 111, no. 1, pp. 114–122, 2009. @article{Devue2009, Previous studies investigating the ability of high priority stimuli to grab attention reached contradictory outcomes. The present study used eye tracking to examine the effect of the presence of the self-face among other faces in a visual search task in which the face identity was task-irrelevant. We assessed whether the self-face (1) received prioritized selection (2) caused a difficulty to disengage attention, and (3) whether its status as target or distractor had a differential effect. We included another highly familiar face to control whether possible effects were self-face specific or could be explained by high familiarity. We found that the self-face interfered with the search task. This was not due to a prioritized processing but rather to a difficulty to disengage attention. Crucially, this effect seemed due to the self-face's familiarity, as similar results were obtained with the other familiar face, and was modulated by the status of the face since it was stronger for targets than for distractors. |
Christopher A. Dickinson; Helene Intraub Spatial asymmetries in viewing and remembering scenes: Consequences of an attentional bias? Journal Article In: Attention, Perception, and Psychophysics, vol. 71, no. 6, pp. 1251–1262, 2009. @article{Dickinson2009, Given a single fixation, memory for scenes containing salient objects near both the left and right view boundaries exhibited a rightward bias in boundary extension (Experiment 1). On each trial, a 500-msec picture and 2.5-sec mask were followed by a boundary adjustment task. Observers extended boundaries 5% more on the right than on the left. Might this reflect an asymmetric distribution of attention? In Experiments 2A and 2B, free viewing of pictures revealed that first saccades were more often leftward (62%) than rightward (38%). In Experiment 3, 500-msec pictures were interspersed with 2.5-sec masks. A subsequent object recognition memory test revealed better memory for left-side objects. Scenes were always mirror reversed for half the observers, thus ruling out idiosyncratic scene compositions as the cause of these asymmetries. Results suggest an unexpected leftward bias of attention that selectively enhanced the representations, causing a smaller boundary extension error and better object memory on the views' left sides. |
Michael D. Dodd; Stefan Van Stigchel; Andrew Hollingworth Novelty is not always the best policy. Inhibition of return and facilitation of return as a function of visual task Journal Article In: Psychological Science, vol. 20, no. 3, pp. 333–339, 2009. @article{Dodd2009, We report a study that examined whether inhibition of return (IOR) is specific to visual search or a general characteristic of visual behavior. Participants were shown a series of scenes and were asked to (a) search each scene for a target, (b) memorize each scene, (c) rate how pleasant each scene was, or (d) view each scene freely. An examination of saccadic reaction times to probes provided evidence of IOR during search: Participants were slower to look at probes at previously fixated locations than to look at probes at novel locations. For the other three conditions, however, the opposite pattern of results was observed: Participants were faster to look at probes at previously fixated locations than to look at probes at novel locations, a facilitation-of-return effect that has not been reported previously. These results demonstrate that IOR is a search-specific strategy and not a general characteristic of visual attention. |
Markus Bindemann; Christoph Scheepers; A. Mike Burton Viewpoint and center of gravity affect eye movements to human faces Journal Article In: Journal of Vision, vol. 9, no. 2, pp. 1–16, 2009. @article{Bindemann2009, In everyday life, human faces are encountered in many different views. Despite this fact, most psychological research has focused on the perception of frontal faces. To address this shortcoming, the current study investigated how different face views are processed, by measuring eye movements to frontal, mid-profile and profile faces during a gender categorization (Experiment 1) and a free-viewing task (Experiment 2). In both experiments observers initially fixated the geometric center of a face, independent of face view. This center-of-gravity effect induced a qualitative shift in the features that were sampled across different face views in the time period immediately after stimulus onset. Subsequent eye fixations focused increasingly on specific facial features. At this stage, the eye regions were targeted predominantly in all face views, and to a lesser extent also the nose and the mouth. These findings show that initial saccades to faces are driven by general stimulus properties, before eye movements are redirected to the specific facial features in which observers take an interest. These findings are illustrated in detail by plotting the distribution of fixations, first fixations, and percentage fixations across time. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Saliency does not account for fixations to eyes within social scenes Journal Article In: Vision Research, vol. 49, pp. 2992–3000, 2009. @article{Birmingham2009, We assessed the role of saliency in driving observers to fixate the eyes in social scenes. Saliency maps (Itti & Koch, 2000) were computed for the scenes from three previous studies. Saliency provided a poor account of the data. The saliency values for the first-fixated locations were extremely low and no greater than what would be expected by chance. In addition, the saliency values for the eye regions were low. Furthermore, whereas saliency was no better at predicting early saccades than late saccades, the average latency to fixate social areas of the scene (e.g., the eyes) was very fast (within 200 ms). Thus, visual saliency does not account for observers' bias to select the eyes within complex social scenes, nor does it account for fixation behavior in general. Instead, it appears that observers' fixations are driven largely by their default interest in social information. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Get real! Resolving the debate about equivalent social stimuli Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 904–924, 2009. @article{Birmingham2009a, Gaze and arrow studies of spatial orienting have shown that eyes and arrows produce nearly identical effects on shifts of spatial attention. This has led some researchers to suggest that the human attention system considers eyes and arrows as equivalent social stimuli. However, this view does not fit with the general intuition that eyes are unique social stimuli nor does it agree with a large body of work indicating that humans possess a neural system that is preferentially biased to process information regarding human gaze. To shed light on this discrepancy we entertained the idea that the model cueing task may fail to measure some of the ways that eyes are special. Thus rather than measuring the orienting of attention to a location cued by eyes and arrows, we measured the selection of eyes and arrows embedded in complex real-world scenes. The results were unequivocal: People prefer to look at other people and their eyes; they rarely attend to arrows. This outcome was not predicted by visual saliency but it was predicted by the idea that eyes are social stimuli that are prioritized by the attention system. These data, and the paradigm from which they were derived, shed new light on past cueing studies of social attention, and they suggest a new direction for future investigations of social attention. |
Christoph Bledowski; Benjamin Rahm; James B. Rowe What "works" in working memory? Separate systems for selection and updating of critical information Journal Article In: Journal of Neuroscience, vol. 29, no. 43, pp. 13735–13741, 2009. @article{Bledowski2009, Cognition depends critically on working memory, the active representation of a limited number of items over short periods of time. In addition to the maintenance of information during the course of cognitive processing, many tasks require that some of the items in working memory become transiently more important than others. Based on cognitive models of working memory, we hypothesized two complementary essential cognitive operations to achieve this: a selection operation that retrieves the most relevant item, and an updating operation that changes the focus of attention onto it. Using functional magnetic resonance imaging, high-resolution oculometry, and behavioral analysis, we demonstrate that these two operations are functionally and neuroanatomically dissociated. Updating the attentional focus elicited transient activation in the caudal superior frontal sulcus and posterior parietal cortex. In contrast, increasing demands on selection selectively modulated activation in rostral superior frontal sulcus and posterior cingulate/precuneus. We conclude that prioritizing one memory item over others invokes independent mechanisms of mnemonic retrieval and attentional focusing, each with its distinct neuroanatomical basis within frontal and parietal regions. These support the developing understanding of working memory as emerging from the interaction between memory and attentional systems. |
Walter R. Boot; Ensar Becic; Arthur F. Kramer Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search Journal Article In: Journal of Vision, vol. 9, no. 3, pp. 1–16, 2009. @article{Boot2009, Previous studies have demonstrated large individual differences in scanning strategy during a dynamic visual search task (E. Becic, A. F. Kramer, & W. R. Boot, 2007; W. R. Boot, A. F. Kramer, E. Becic, D. A. Wiegmann, & T. Kubose, 2006). These differences accounted for substantial variance in performance. Participants who chose to search covertly (without eye movements) excelled, whereas participants who searched overtly (with eye movements) performed poorly. The aim of the current study was to investigate the stability of scanning strategies across different visual search tasks in an attempt to explain why a large percentage of observers might engage in maladaptive strategies. Scanning strategy was assessed for a group of observers across a variety of search tasks without feedback (efficient search, inefficient search, change detection, dynamic search). While scanning strategy was partly determined by task demands, stable individual differences emerged. Participants who searched either overtly or covertly tended to adopt the same strategy regardless of the demands of the search task, even in tasks in which such a strategy was maladaptive. However, when participants were given explicit feedback about their performance during search and performance incentives, strategies across tasks diverged. Thus it appears that observers by default will favor a particular search strategy but can modify this strategy when it is clearly maladaptive to the task. |
Francis Colas; Fabien Flacher; Thomas Tanner; Pierre Bessière; Benoît Girard Bayesian models of eye movement selection with retinotopic maps Journal Article In: Biological Cybernetics, vol. 100, no. 3, pp. 203–214, 2009. @article{Colas2009, Among the various possible criteria guiding eye movement selection, we investigate the role of position uncertainty in the peripheral visual field. In particular, we suggest that, in everyday life situations of object tracking, eye movement selection probably includes a principle of reduction of uncertainty. To evaluate this hypothesis, we confront the movement predictions of computational models with human results from a psychophysical task. This task is a freely moving eye version of the multiple object tracking task, where the eye movements may be used to compensate for low peripheral resolution. We design several Bayesian models of eye movement selection with increasing complexity, whose layered structures are inspired by the neurobiology of the brain areas implied in this process. Finally, we compare the relative performances of these models with regard to the prediction of the recorded human movements, and show the advantage of taking explicitly into account uncertainty for the prediction of eye movements. |
Geoff G. Cole; Gustav Kuhn Appearance matters: Attentional orienting by new objects in the precueing paradigm Journal Article In: Visual Cognition, vol. 17, no. 5, pp. 755–776, 2009. @article{Cole2009, Five experiments examined whether the appearance of a new object is able to orient attention in the absence of an accompanying sensory transient. A variant of the precueing paradigm (Posner & Cohen, 1984) was employed in which the cue was the onset of a new object. Crucially, the new object's appearance was not associated with any unique sensory transient. This was achieved by using a variant of the "annulus" procedure recently developed by Franconeri, Hollingworth, and Simons (2005). Results showed that unless observers had an attentional set explicitly biased against onset, a validity effect was observed such that response times were shorter for targets occurring at the location of the new object relative to when targets occurred at the location of the "old" object. We conclude that new onsets do not need to be associated with a unique sensory transient in order to orient attention. |
Charles A. Collin; Patricia A. McMullen; Julie Anne Séguin A significant bilateral field advantage for shapes defined by static and motion cues Journal Article In: Perception, vol. 38, no. 8, pp. 1132–1143, 2009. @article{Collin2009, Matching performance is better when pairs of visual stimuli are presented in bilateral conditions—in which one stimulus is presented to each side of the visual field—than in unilateral presentations—when both stimuli are presented to one side of the field. This is called the bilateral field advantage (BFA). The processing of visual motion has also been found to be more strongly integrated across the cerebral hemispheres than is processing of static cues. However, in these studies higher-order motion tasks, such as processing motion-defined form, have not been examined. To determine if the BFA generalises to such tasks, we measured the magnitude of the effect using a shape-matching task in which the stimuli were random polygons that were either in motion, motion-defined, or static. The polygon pairs were presented either: (i) bilaterally, one to either side of the vertical meridian; (ii) unilaterally, both to one side of the vertical meridian (left or right visual fields); or (iii) centrally, vertically separated across the horizontal meridian (a control condition). An equal advantage of bilateral conditions over unilateral ones was found for all three types of polygon shape cues, showing that the BFA generalises to conditions where shapes are in motion and where shape is defined by motion. These findings are compatible with the notion that motion processing is strongly integrated across the cerebral hemispheres, and with the idea that this integration manifests itself with simple motion information, rather than with higher-order motion processing such as matching shapes defined by motion. |
Julien Cotti; Muriel T. N. Panouillères; Douglas P. Munoz; Jean-Louis Vercher; Denis Pélisson; Alain Guillaume Adaptation of reactive and voluntary saccades: different patterns of adaptation revealed in the antisaccade task Journal Article In: Journal of Physiology, vol. 587, no. 1, pp. 127–138, 2009. @article{Cotti2009, Sensorimotor adaptation restores and maintains the accuracy of goal-directed movements. It remains unclear whether these adaptive mechanisms modify actions by controlling peripheral premotor stages that send commands to the effectors and/or earlier processing stages involved in registration of target location. Here, we studied the effect of adaptation of saccadic eye movements, a well-established model of sensorimotor adaptation, in an antisaccade task. This task introduces a clear spatial dissociation between the actual target direction and the requested saccade direction because the correct movement direction is in the opposite direction from the target location. We used this requirement of a vector inversion to assess the level(s) of saccadic adaptation for two different types of adapted saccades. In two different experiments, we tested the transfer to antisaccades of the adaptation in one direction of reactive saccades to jumping targets and of scanning voluntary saccades within a target array. In the first experiment, we found that adaptation of reactive saccades transferred only to antisaccades in the adapted direction. In contrast, in the second experiment, adaptation of scanning voluntary saccades transferred to antisaccades in both the adapted and non-adapted directions. We conclude that adaptation of reactive saccades acts only downstream of the vector inversion required in the antisaccade task, whereas adaptation of voluntary saccades has a distributed influence, acting both upstream and downstream of vector inversion. |
Kim Joris Boström; Anne Kathrin Warzecha Ocular following response to sampled motion Journal Article In: Vision Research, vol. 49, no. 13, pp. 1693–1701, 2009. @article{Bostroem2009, We investigate the impact of monitor frame rate on the human ocular following response (OFR) and find that the response latency considerably depends on the frame rate in the range of 80-160 Hz, which is far above the flicker fusion limit. From the lowest to the highest frame rate the latency declines by roughly 10 ms. Moreover, the relationship between response latency and stimulus speed is affected by the frame rate, compensating and even inverting the effect at lower frame rates. In contrast to that, the initial response acceleration is not affected by the frame rate and its expected dependence on stimulus speed remains stable. The nature of these phenomena reveals insights into the neural mechanism of low-level motion detection underlying the ocular following response. |
Christian Boucheny; Georges Pierre Bonneau; Jacques Droulez; Guillaume Thibault; Stephane Ploix A perceptive evaluation of volume rendering techniques Journal Article In: ACM Transactions on Applied Perception, vol. 5, no. 4, pp. 1–24, 2009. @article{Boucheny2009, The display of space-filling data is still a challenge for the visualization community. Direct volume rendering (DVR) is one of the most important techniques developed to achieve direct perception of such volumetric data. It is based on semitransparent representations, where the data are accumulated in a depth-dependent order. However, it produces images that may be difficult to understand, and thus several techniques have been proposed so as to improve its effectiveness, using for instance lighting models or simpler representations (e.g., maximum intensity projection). In this article, we present three perceptual studies that examine how DVR meets its goals, in either static or dynamic context. We show that a static representation is highly ambiguous, even in simple cases, but this can be counterbalanced by use of dynamic cues (i.e., motion parallax) provided that the rendering parameters are correctly tuned. In addition, perspective projections are demonstrated to provide relevant information to disambiguate depth perception in dynamic displays. |
Julie A. Brefczynski-Lewis; Ritobrato Datta; James W. Lewis; Edgar A. DeYoe The topography of visuospatial attention as revealed by a novel visual field mapping technique Journal Article In: Journal of Cognitive Neuroscience, vol. 21, no. 7, pp. 1447–1460, 2009. @article{BrefczynskiLewis2009, Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style." |
Eli Brenner; Jeroen B. J. Smeets Sources of variability in interceptive movements Journal Article In: Experimental Brain Research, vol. 195, no. 1, pp. 117–133, 2009. @article{Brenner2009, In order to successfully intercept a moving target one must be at the right place at the right time. But simply being there is seldom enough. One usually needs to make contact in a certain manner, for instance to hit the target in a certain direction. How this is best achieved depends on the exact task, but to get an idea of what factors may limit performance we asked people to hit a moving virtual disk through a virtual goal, and analysed the spatial and temporal variability in the way in which they did so. We estimated that for our task the standard deviations in timing and spatial accuracy are about 20 ms and 5 mm. Additional variability arises from individual movements being planned slightly differently and being adjusted during execution. We argue that the way that our subjects moved was precisely tailored to the task demands, and that the movement accuracy is not only limited by the muscles and their activation, but also, and probably even mainly, by the resolution of visual perception. |
Leonard A. Breslow; J. Gregory Trafton; Raj M. Ratwani A perceptual process approach to selecting color scales for complex visualizations Journal Article In: Journal of Experimental Psychology: Applied, vol. 15, no. 1, pp. 25–34, 2009. @article{Breslow2009, Previous research has shown that multicolored scales are superior to ordered brightness scales for supporting identification tasks on complex visualizations (categorization, absolute numeric value judgments, etc.), whereas ordered brightness scales are superior for relative comparison tasks (greater/less). We examined the processes by which such tasks are performed. By studying eye movements and by comparing performance on scales of different sizes, we argued that (a) people perform identification tasks by conducting a serial visual search of the legend, whose speed is sensitive to the number of scale colors and the discriminability of the colors; and (b) people perform relative comparison tasks using different processes for multicolored versus brightness scales. With multicolored scales, they perform a parallel search of the legend, whose speed is relatively insensitive to the size of the scale, whereas with brightness scales, people usually directly compare the target colors in the visualization, while making little reference to the legend. Performance of comparisons was relatively robust against increases in scale size, whereas performance of identifications deteriorated markedly, especially with brightness scales, once scale sizes reached 10 colors or more. |
James R. Brockmole; Walter R. Boot Should I stay or should I go? Attentional disengagement from visually unique and unexpected items at fixation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 35, no. 3, pp. 808–815, 2009. @article{Brockmole2009, Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience. |
Anne-Marie M. Brouwer; Volker H. Franz; Karl R. Gegenfurtner Differences in fixations between grasping and viewing objects Journal Article In: Journal of Vision, vol. 9, no. 1, pp. 1–24, 2009. @article{Brouwer2009, Where exactly do people look when they grasp an object? An object is usually contacted at two locations, whereas the gaze can only be at one location at a time. We investigated participants' fixation locations when they grasp objects with the contact positions of both index finger and thumb being visible and compared these to fixation locations when they only viewed the objects. Participants grasped with the index finger at the top and the thumb at the bottom of a flat shape. The main difference between grasping and viewing was that after a saccade roughly directed to the object's center of gravity, participants saccaded more upward and more into the direction of a region that was difficult to contact during grasping. A control experiment indicated that it was not the upper part of the shape that attracted fixation, while the results were consistent with an attraction by the index finger. Participants did not try to fixate both contact locations. Fixations were closer to the object's center of gravity in the viewing than in the grasping task. In conclusion, participants adapt their eye movements to the needs of the task, such as acquiring information about regions with high required contact precision in grasping, even with small (graspable) objects. We suggest that in grasping, the main function of fixations is to acquire visual feedback of the approaching digits. |
Claudio Brozzoli; Francesco Pavani; Christian Urquizar; Lucilla Cardinali; Alessandro Farnè Grasping actions remap peripersonal space Journal Article In: NeuroReport, vol. 20, no. 10, pp. 913–917, 2009. @article{Brozzoli2009, The portion of space that closely surrounds our body parts is termed peripersonal space, and it has been shown to be represented in the brain through multisensory processing systems. Here, we tested whether voluntary actions, such as grasping an object, may remap such multisensory spatial representation. Participants discriminated touches on the hand they used to grasp an object containing task-irrelevant visual distractors. Compared with a static condition, reach-to-grasp movements increased the interference exerted by visual distractors over tactile targets. This remapping of multisensory space was triggered by action onset and further enhanced in real time during the early action execution phase. Additional experiments showed that this phenomenon is hand-centred. These results provide the first evidence of a functional link between voluntary object-oriented actions and multisensory coding of the space around us. |
Stephen H. Butler; Stéphanie Rossit; Iain D. Gilchrist; Casimir J. H. Ludwig; Bettina Olk; Keith Muir; Ian Reeves; Monika Harvey Non-lateralised deficits in anti-saccade performance in patients with hemispatial neglect Journal Article In: Neuropsychologia, vol. 47, no. 12, pp. 2488–2495, 2009. @article{Butler2009, We tested patients suffering from hemispatial neglect on the anti-saccade paradigm to assess voluntary control of saccades. In this task participants are required to saccade away from an abrupt onset target. As has been previously reported, in the pro-saccade condition neglect patients showed increased latencies towards targets presented on the left and their accuracy was reduced as a result of greater undershoot. To our surprise though, in the anti-saccade condition, we found strong bilateral effects: the neglect patients produced large numbers of erroneous pro-saccades to both left and right stimuli. This deficit in voluntary control was present even in patients whose lesions spared the frontal lobes. These results suggest that the voluntary control of action is supported by an integrated network of cortical regions, including more posterior areas. Damage to one or more components within this network may result in impaired voluntary control. |
Manuel G. Calvo; M. Dolores Castillo Semantic word priming in the absence of eye fixations: Relative contributions of overt and covert attention Journal Article In: Psychonomic Bulletin & Review, vol. 16, no. 1, pp. 51–56, 2009. @article{Calvo2009, In the present study, we investigated the role of covert and overt attention in word identification. In repetition and semantic priming paradigms, prime words were followed by a probe for lexical decision. To make the primes available only to covert attention, we presented them for 150 msec, parafoveally (2.2 degrees away from fixation), and under gaze-contingent foveal masking. To make the primes available to overt attention, we presented them for 150 msec, at fixation, with no masking. Results showed both repetition and semantic priming in the absence of eye fixations on the primes: There was facilitation for identical and semantically related probe words, relative to an unrelated prime-probe condition. This revealed that both word form and meaning can be processed by covert attention alone. The pattern of relative contributions of covert (approximately 25%) and overt (approximately 75%) attention was similar for repetition and semantic priming. |
Manuel G. Calvo; Lauri Nummenmaa Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 9, no. 4, pp. 398–411, 2009. @article{Calvo2009a, Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition. |
Manuel G. Calvo; Lauri Nummenmaa Lateralised covert attention in word identification Journal Article In: Laterality, vol. 14, no. 2, pp. 178–195, 2009. @article{Calvo2009b, The right visual field superiority in word recognition has been attributed to an attentional advantage by the left brain hemisphere. We investigated whether such advantage involves lateralised covert attention, in the absence of overt fixations on prime words. In a lexical decision task target words were preceded by an identical or an unrelated prime word. Eye movements were monitored. In Experiment 1 lateralised (to the left or right of fixation) prime words were parafoveally visible but foveally masked, thus allowing for covert attention but preventing overt attention. In Experiment 2 prime words were presented at fixation, thus allowing for both overt and covert attention. Results revealed positive priming in the absence of fixations on the primes when these were presented in the right visual field. The effects of covertly attended primes were nevertheless significantly reduced in comparison with those of overtly attended primes. It is concluded that word identification can be accomplished to a significant extent by lateralised covert attention alone, with right visual field advantage. |
Moran Cerf; E. Paxon Frady; Christof Koch Faces and text attract gaze independent of the task: Experimental data and computer model Journal Article In: Journal of Vision, vol. 9, no. 12, pp. 1–15, 2009. @article{Cerf2009, Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? We here investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more than similar regions normalized for size and position of the face and text. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer in making their initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity by adding high-level semantic information (i.e., the location of faces or text) and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation pattern of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map. |
George Chahine; Bart Krekelberg Cortical contributions to saccadic suppression Journal Article In: PLoS ONE, vol. 4, no. 9, pp. e6900, 2009. @article{Chahine2009, The stability of visual perception is partly maintained by saccadic suppression: the selective reduction of visual sensitivity that accompanies rapid eye movements. The neural mechanisms responsible for this reduced perisaccadic visibility remain unknown, but the Lateral Geniculate Nucleus (LGN) has been proposed as a likely site. Our data show, however, that the saccadic suppression of a target flashed in the right visual hemifield increased with an increase in background luminance in the left visual hemifield. Because each LGN only receives retinal input from a single hemifield, this hemifield interaction cannot be explained solely on the basis of neural mechanisms operating in the LGN. Instead, this suggests that saccadic suppression must involve processing in higher level cortical areas that have access to a considerable part of the ipsilateral hemifield. |
Filipe Cristino; Roland J. Baddeley The nature of the visual representations involved in eye movements when walking down the street Journal Article In: Visual Cognition, vol. 17, no. 6/7, pp. 880–903, 2009. @article{Cristino2009, In this paper we set out to answer two questions. The first aims to discover whether saccades are driven by low level image features (as suggested by a number of recent and influential models), a position we call image salience, or whether they are driven by the meaning and reward associated with the world, a position we call world salience. The second question concerns the reference frame in which the eye movements are planned. To answer these questions, we recorded six videos, using a head-mounted camera, with the viewer walking down a popular shopping street in Bristol. As well as showing these videos to our participants, we also showed spatially and temporally filtered versions of them. We found that, at a coarse spatial scale, subjects viewed similar locations in the image, irrespective of the filtering, and that fixation distributions found when viewing videos with similar filtering were no more alike than if the filtering varied widely. Using a novel mixture modelling technique, we also showed that the most important reference frame was world-centred rather than head or body-based. This was confirmed by a second experiment in which the fixation distributions to identical videos were systematically changed by using a swivelling tent that only altered subjects' perception of the gravitational vertical. We conclude that eye movements should not be understood in terms of image salience, or even information maximization, but in terms of the more flexible concept of reward maximization. |
Kirsten A. Dalrymple; Walter F. Bischof; David Cameron; Jason J. S. Barton; Alan Kingstone Global perception in simultanagnosia is not as simple as a game of connect-the-dots Journal Article In: Vision Research, vol. 49, no. 14, pp. 1901–1908, 2009. @article{Dalrymple2009, Simultanagnosia is a neuropsychological disorder characterized by a restriction of visuospatial attention, such that patients are able to identify local elements of a scene, but not the global whole. This may be due to a failure to scan and assemble local elements into a global whole (i.e. connect-the-dots). We monitored the eye movements of a simultanagnosic patient while she identified local and global elements of hierarchical letters. Scanning each local element was neither necessary nor sufficient for successful global-level identification. Our results argue against a connect-the-dots strategy of global identification and suggest that residual global processing may be occurring. |
Rong-Fuh Day; Chien-Huang Lin; Wen-Hung Huang; Sheng-Hsiung Chuang Effects of music tempo and task difficulty on multi-attribute decision-making: An eye-tracking approach Journal Article In: Computers in Human Behavior, vol. 25, no. 1, pp. 130–143, 2009. @article{Day2009, This study examined the effects of music tempo and task difficulty on the performance of multi-attribute decision-making according to two alternative perspectives: background music as the arousal inducer vs. the distractor. An eye-tracking-based experiment was conducted. Our results supported the arousal inducer perspective in that, with the same level of decision time, participants made decisions more accurately with the presentation of faster than slower tempo music. Further, faster tempo music was found to improve the accuracy of harder decision-making only, not that of easier decision-making. More interestingly, our exploratory analysis on eye fixations found the occurrence of adaptive behavior, namely, that the search pattern of participants became more intra-dimensional under the faster tempo music as compared with the slower tempo music. |
Sebastian Pannasch; Boris M. Velichkovsky Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images Journal Article In: Visual Cognition, vol. 17, no. 6-7, pp. 1109–1131, 2009. @article{Pannasch2009, In view of a variety of everyday tasks, it is highly implausible that all visual fixations fulfil the same role. Earlier we demonstrated that a combination of fixation duration and amplitude of related saccades strongly correlates with the probability of correct recognition of objects and events both in static and in dynamic scenes (Velichkovsky, Joos, Helmert, & Pannasch, 2005; Velichkovsky, Rothert, Kopf, Dornhoefer, & Joos, 2002). Here we analyse the distractor effect (Pannasch, Dornhoefer, Unema, & Velichkovsky, 2001) in relation to amplitudes of the preceding saccade. In Experiment 1, it is shown that retinotopically identical visual events occurring 100 ms after the onset of a fixation have significantly less influence on fixation duration if the amplitude of the previous saccade exceeds the parafoveal range (set on 5° of arc). Experiment 2 demonstrates that this difference diminishes for distractors of obvious biological value such as looming motion patterns. In Experiment 3, we show that saccade amplitudes influence visual but not acoustic or haptic distractor effects. These results suggest an explanation in terms of a shifting balance of at least two modes of visual processing in free viewing of complex visual images. |
Yoni Pertzov; Galia Avidan; Ehud Zohary Accumulation of visual information across multiple fixations Journal Article In: Journal of Vision, vol. 9, no. 10, pp. 1–12, 2009. @article{Pertzov2009, Humans often redirect their gaze to the same objects within a scene, even without being consciously aware of it. Here, we investigated what type of visual information is accumulated across recurrent fixations on the same object. On each trial, subjects viewed an array comprised of several objects and were subsequently asked to report on various visual aspects of a randomly chosen target object from that array. Memory performance decreased as more fixations were directed to other objects, following the last fixation on the target object (i.e. post-target fixations). In contrast, performance was enhanced with increasing number of fixations on the target object. However, since the number of post-target fixations and the number of target fixations are usually anti-correlated, memory gain may simply reflect fewer post-target fixations, rather than true accumulation of information. To rule this out, we conducted a second experiment, in which the stimulus disappeared immediately after performing a predefined number of target fixations. Additional fixations on the target object resulted in improved memory performance even under these strict conditions. We conclude that, under the present conditions, various aspects of memory monotonically improve with repeated sampling of the same object. |
Yoni Pertzov; Ehud Zohary; Galia Avidan Implicitly perceived objects attract gaze during later free viewing Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–12, 2009. @article{Pertzov2009a, Everyday life frequently requires searching for objects in the visual scene. Visual search is typically accompanied by a series of eye movements. In an effort to explain subjects' scanning patterns, models of visual search propose that a template of the target is used to guide gaze (and attention) to locations which exhibit "suspicious" similarity to this template. We show here that the scanning patterns are also clearly influenced by implicit (unrecognized) cues: A backward masked object, presented before the scene display, automatically attracts gaze to its corresponding location in the following inspected image. Interestingly, subliminally observed words describing objects do not have the same effect. This demonstrates that visual search can be unconsciously guided by activated target representations at the perceptual level, but it is much less affected by implicit information at the semantic level. Implications for search models are discussed. |
Jeffrey M. Peterson; Paul Dassonville Differential latencies sculpt the time course of contextual effects on spatial perception Journal Article In: Journal of Cognitive Neuroscience, vol. 34, no. 11, pp. 2168–2188, 2009. @article{Peterson2009, The ability to judge an object's orientation with respect to gravitational vertical relies on an egocentric reference frame that is maintained using not only vestibular cues but also contextual cues provided in the visual scene. Although much is known about how static contextual cues are incorporated into the egocentric reference frame, it is also important to understand how changes in these cues affect perception, since we move about in a world that is itself dynamic. To explore these temporal factors, we used a variant of the rod-and-frame illusion, in which participants indicated the perceived orientation of a briefly flashed rod (5-msec duration) presented before or after the onset of a tilted frame. The frame was found to bias the perceived orientation of rods presented as much as 185 msec before frame onset. To explain this postdictive effect, we propose a differential latency model, where the latency of the orientation judgment is greater than the latency of the contextual cues' initial impact on the egocentric reference frame. In a subsequent test of this model, we decreased the luminance of the rod, which is known to increase visual afferent delays and slow decision processes. This further slowing of the orientation judgment caused the frame-induced bias to affect the perceived orientation of rods presented even further in advance of the frame. These findings indicate that the brain fails to compensate for a mismatch between the timing of orientation judgments and the incorporation of visual cues into the egocentric reference frame. |
Sabine Loeber; Theodora Duka Acute alcohol impairs conditioning of a behavioural reward-seeking response and inhibitory control processes – implications for addictive disorders Journal Article In: Addiction, vol. 104, no. 12, pp. 2013–2022, 2009. @article{Loeber2009, AIMS: To investigate whether acute alcohol would affect performance of a conditioned behavioural response to obtain a reward outcome and impair performance in a task measuring inhibitory control, to provide new knowledge of how the acute effects of alcohol might contribute to the transition from alcohol use to dependence. DESIGN: A randomized controlled between-subjects design was employed. SETTINGS: The laboratory of experimental psychology at the University of Sussex. PARTICIPANTS: Thirty-two light to moderate social drinkers recruited from the undergraduate and postgraduate population. MEASUREMENTS: After the administration of alcohol (0.8 g/kg) or placebo participants underwent an instrumental reward-seeking procedure, with abstract stimuli serving as S+ (always predicting a win of 10 pence) and S- (always predicting a loss of 10 pence). In addition, a Stop Signal task was administered before and after the administration of alcohol. FINDINGS: Participants in the alcohol group performed the behavioural response to obtain the reward outcome more often than placebo subjects in trials associated with loss of money. This finding was observed although alcohol did not affect explicit knowledge of stimulus-response outcome contingencies or acquisition of conditioned attentional and emotional responses. In addition, alcohol increased Stop Signal reaction time, indicating disinhibiting effects of alcohol, and this was associated positively with response probability to the S-. CONCLUSIONS: These results demonstrate that alcohol affects inhibitory control of behavioural responses to external signals even when these are associated with punishment, contributing in this way to the transition from alcohol use to dependence. |
Sabine Loeber; Theodora Duka Extinction learning of stimulus reward contingencies: The acute effects of alcohol Journal Article In: Drug and Alcohol Dependence, vol. 102, no. 1-3, pp. 56–62, 2009. @article{Loeber2009a, Background: Recent theories suggest that extinction is, at least partly, new learning that suppresses the original associations between a conditioned stimulus and a conditioned response without severing them. During extinction, alcohol, via its effects on inhibitory control, may reduce the ability to suppress these original associations, leading to impaired extinction learning. Thus, the present study set out to examine the effects of alcohol on extinction learning, to enhance current knowledge of the mechanisms of extinction and the conditions that might hamper it, an important consideration for the treatment of alcohol-dependent patients. Methods: Light to moderate social drinkers (N = 32) acquired an instrumental reward-seeking response. Extinction training of the reward-seeking response was performed after administration of a dose of 0.8 g/kg alcohol, resulting in a peak blood alcohol concentration ranging from 112 to 184 mg/dL. In addition, we assessed subjective alcohol effects and administered a Stop-Signal task, which measures the ability to inhibit a pre-potent motor response. Results: Alcohol influenced subjective ratings of light-headedness and increased the Stop-Signal reaction time, indicating disinhibiting effects. However, our results showed no impairment of extinction learning after the administration of alcohol. Behavioural as well as attentional responses indicated extinction of conditioned responses in both experimental groups. Conclusions: These findings suggest that alcohol, at a dose that impairs performance in a task of inhibitory control, does not impair extinction learning. |
Sabine Loeber; Theodora Duka Acute alcohol decreases performance of an instrumental response to avoid aversive consequences in social drinkers Journal Article In: Psychopharmacology, vol. 205, no. 4, pp. 577–587, 2009. @article{Loeber2009b, BACKGROUND: Recent studies demonstrated that alcohol impairs inhibitory control of behavioural responses. AIMS: We questioned whether alcohol, via its disinhibiting effects, would also impair the inhibition of an instrumental avoidance response in the presence of a safety signal. DESIGN: Thirty-six moderate social drinkers were randomly allocated to receive either alcohol (0.8 g/kg) or placebo before performing an instrumental avoidance procedure. White noise of 102 dB was used as an aversive outcome, presented on a variable-interval schedule in S+ trials, while no noise was presented in S- trials. An instrumental response (repeated space bar presses to avoid the noise presented at a variable interval) abolished the noise. The Stop Signal task and the affective Go/No-Go task were administered as inhibitory control tasks. RESULTS: Alcohol did not change the avoidance response rate in the presence of S- (safety signal). However, participants under alcohol performed the avoidance response to a lesser extent than placebo subjects in S+ trials. Alcohol impaired performance in the Stop Signal task and increased the number of commission errors in the affective Go/No-Go task. Conditioned attentional and emotional responses to the S+ as well as knowledge of stimulus-response outcome contingencies were not affected by alcohol. CONCLUSIONS: Acute alcohol may decrease the motivation to avoid negative consequences and thus might contribute to risky behaviour and binge drinking. |
Xiu-Hong Li; Jin Jing; Xiao-Bing Zou; Xu Huang; Yu Jin; Qing-Xiong Wang; Xue-Bin Chen; Bin-Rang Yang; Si-Yuan Yang Picture perception in Chinese dyslexic children: An eye-movement study Journal Article In: Chinese Medical Journal, vol. 122, no. 3, pp. 267–271, 2009. @article{Li2009, BACKGROUND: Currently, whether or not there are visuospatial impairments in Chinese dyslexic children is still a matter of discussion. The relatively recent application of an eye-tracking paradigm may offer an opportunity to address this issue. In China, in comparison with reading studies, there have not been nearly as many eye movement studies dealing with nonreading tasks such as picture identification, and whether Chinese children with dyslexia have a picture processing deficit is not clear. The purposes of the present study were to determine whether or not there are visuospatial impairments in Chinese dyslexic children. Moreover, we attempted to discuss whether the abnormal eye movement pattern that dyslexic subjects show during reading of text appropriate for their age is a consequence of their linguistic difficulties. METHODS: An EyeLink II high-speed eye tracker was used to record the eye movements of 19 Chinese dyslexic children and 19 Chinese normal children. All of the subjects were presented with three pictures for this eye-tracking task, and six eye-movement parameters (first fixation duration, average fixation duration, average saccade amplitude, mean saccade distance, fixation frequency and saccade frequency) were recorded for analysis. RESULTS: Comparing parameters across the three pictures, all eye-movement parameters except fixation frequency and saccade frequency differed significantly among the three pictures (P<0.05). From picture 2 to picture 3, the first fixation duration became longer, while the average fixation duration, average saccade amplitude and mean saccade distance became shorter. Comparing all eye-movement parameters between the two groups, average saccade amplitude (P=0.017) and mean saccade distance (P=0.02) were smaller in the dyslexia group than in the normal group, while the other parameters did not differ between the two groups (P>0.05). CONCLUSIONS: The characteristics of the pictures can significantly influence the visuospatial cognitive processing of the Chinese children. There is a detectable deficit in visuospatial cognitive processing in the Chinese dyslexic children: their saccade amplitude and mean saccade distance are shorter, which may be specific to their reading disability. |
Tobias Pflugshaupt; Roman Wartburg; Pascal Wurtz; Silvia Chaves; Anouk Déruaz; Thomas Nyffeler; Sebastian Arx; Mathias Luethi; Dario Cazzoli; René M. Mueri Linking physiology with behaviour: Functional specialisation of the visual field is reflected in gaze patterns during visual search Journal Article In: Vision Research, vol. 49, no. 2, pp. 237–248, 2009. @article{Pflugshaupt2009a, Based on neurophysiological findings and a grid to score binocular visual field function, two hypotheses concerning the spatial distribution of fixations during visual search were tested and confirmed in healthy participants and patients with homonymous visual field defects. Both groups showed significant biases of fixations and viewing time towards the centre of the screen and the upper screen half. Patients displayed a third bias towards the side of their field defect, which represents oculomotor compensation. Moreover, significant correlations between the extent of these three biases and search performance were found. Our findings suggest a new, more dynamic view of how functional specialisation of the visual field influences behaviour. |
Ayelet McKyton; Yoni Pertzov; Ehud Zohary Pattern matching is assessed in retinotopic coordinates Journal Article In: Journal of Vision, vol. 9, no. 13, pp. 1–10, 2009. @article{McKyton2009, We typically examine scenes by performing multiple saccades to different objects of interest within the image. Therefore, an extra-retinotopic representation, invariant to the changes in the retinal image caused by eye movements, might be useful for high-level visual processing. We investigate here, using a matching task, whether the representation of complex natural images is retinotopic or screen-based. Subjects observed two simultaneously presented images, made a saccadic eye movement to a new fixation point, and viewed a third image. Their task was to judge whether the third image was identical to one of the two earlier images or different. Identical images could appear either in the same retinotopic position, in the same screen position, or in totally different locations. Performance was best when the identical images appeared in the same retinotopic position and worst when they appeared in the opposite hemifield. Counter to commonplace intuition, no advantage was conferred from presenting the identical images in the same screen position. This, together with performance sensitivity for image translation of a few degrees, suggests that image matching, which can often be judged without overall recognition of the scene, is mostly determined by neuronal activity in earlier brain areas containing a strictly retinotopic representation and small receptive fields. |
Patricia A. McMullen; Lesley E. MacSween; Charles A. Collin Behavioral effects of visual field location on processing motion- and luminance-defined form Journal Article In: Journal of Vision, vol. 9, no. 6, pp. 1–11, 2009. @article{McMullen2009, Traditional theories posit a ventral cortical visual pathway subserving object recognition regardless of the information defining the contour. However, functional magnetic resonance imaging (fMRI) studies have shown dorsal cortical activity during visual processing of static luminance-defined (SL) and motion-defined form (MDF). It is unknown whether this activity is supported behaviorally, or whether it depends on central or peripheral vision. The present study compared behavioral performance with two types of MDF [one without translational motion (MDF) and another with it (TM)] and SL shapes in a shape matching task where shape pairs appeared in the upper or lower visual fields or along the horizontal meridian of central or peripheral vision. MDF matching was superior to the other contour types regardless of location in central vision. Both MDF and TM matching were superior to SL matching for presentations in peripheral vision. Importantly, there was an advantage for MDF and TM matching in the lower peripheral visual field that was not present for SL forms. These results are consistent with previous behavioral findings that show no field advantage for static form processing and a lower field advantage for motion processing. They are also suggestive of more dorsal cortical involvement in the processing of shapes defined by motion than luminance. |