EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, or Face Processing, or by individual author name. If we have missed any EyeLink cognitive or perception articles, please email us!
2013
Kiwon Yun; Yifan Peng; Dimitris Samaras; Gregory J. Zelinsky; Tamara L. Berg: Exploring the role of gaze behavior and object detection in scene understanding. Journal Article In: Frontiers in Psychology, vol. 4, pp. 917, 2013. We posit that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content. Experiments are reported, using two popular image datasets from computer vision, that explore the relationship between the fixations that people make during scene viewing, how they describe the scene, and automatic detection predictions of object categories in the scene. From these exploratory analyses, we then combine human behavior with the outputs of current visual recognition methods to build prototype human-in-the-loop applications for gaze-enabled object detection and scene annotation.
Michael Zehetleitner; Anja Isabel Koch; Harriet Goschy; Hermann J. Müller: Salience-based selection: Attentional capture by distractors less salient than the target. Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52595, 2013. Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience.
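The stochastic-selection account in this abstract can be sketched numerically. The toy Monte Carlo below (an illustrative sketch, not the authors' fitted model; the salience-to-time mapping, noise level, and all parameter values are invented) shows how overlapping selection-time distributions let a less salient distractor win the race to first selection on some fraction of trials:

```python
import random

def capture_probability(target_salience, distractor_salience,
                        noise_sd=20.0, n_trials=10000, seed=1):
    """Illustrative sketch: each item's mean selection time decreases
    with its salience and is perturbed by Gaussian noise; attentional
    capture occurs on trials where the distractor is selected first."""
    rng = random.Random(seed)
    captures = 0
    for _ in range(n_trials):
        t_target = 100.0 / target_salience + rng.gauss(0.0, noise_sd)
        t_distractor = 100.0 / distractor_salience + rng.gauss(0.0, noise_sd)
        if t_distractor < t_target:
            captures += 1
    return captures / n_trials

# A distractor less salient than the target still captures attention on a
# minority of trials, because the two selection-time distributions overlap.
p = capture_probability(target_salience=1.2, distractor_salience=1.0)
```

Because capture is a race between two noisy distributions rather than a deterministic winner-take-all, the predicted capture rate varies continuously with relative salience, which is the pattern the paper reports.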
Gregory J. Zelinsky; Hossein Adeli; Yifan Peng; Dimitris Samaras: Modelling eye movements in a categorical search task. Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–13, 2013. We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search.
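The core idea of turning signed distances from a classifier boundary into per-pixel target evidence can be illustrated with a logistic squashing, in the spirit of Platt scaling. This is a simplified sketch, not TAM's actual pipeline; the function name and input layout are invented for illustration:

```python
import math

def evidence_map(decision_values):
    """Map signed distances from a classifier boundary (positive =
    target-like) to (0, 1) evidence values via a logistic function.
    Input is a 2D grid of per-pixel decision values."""
    return [[1.0 / (1.0 + math.exp(-d)) for d in row]
            for row in decision_values]

# Pixels on the target side of the boundary map above 0.5,
# pixels on the distractor side map below 0.5.
probs = evidence_map([[2.0, 0.0, -2.0]])
```

A map like this can then serve as a fixation-priority surface: regions with higher evidence attract earlier fixations, which is how classifier output can drive simulated eye movements.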
En Zhang; Gong-Liang Zhang; Wu Li: Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping. Journal Article In: European Journal of Neuroscience, vol. 38, no. 12, pp. 3758–3767, 2013. Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training.
Ruyuan Zhang; Oh-Sang Kwon; Duje Tadin: Illusory movement of stationary stimuli in the visual periphery: Evidence for a strong centrifugal prior in motion processing. Journal Article In: Journal of Neuroscience, vol. 33, no. 10, pp. 4415–4423, 2013. Visual input is remarkably diverse. Certain sensory inputs are more probable than others, mirroring statistical regularities of the visual environment. The visual system exploits many of these regularities, resulting, on average, in better inferences about visual stimuli. However, by incorporating prior knowledge into perceptual decisions, visual processing can also result in perceptions that do not match sensory inputs. Such perceptual biases can often reveal unique insights into underlying mechanisms and computations. For example, a prior assumption that objects move slowly can explain a wide range of motion phenomena. The prior on slow speed is usually rationalized by its match with visual input, which typically includes stationary or slow-moving objects. However, this only holds for foveal and parafoveal stimulation. The visual periphery tends to be exposed to faster motions, which are biased toward centrifugal directions. Thus, if prior assumptions derive from experience, peripheral motion processing should be biased toward centrifugal speeds. Here, in experiments with human participants, we support this hypothesis and report a novel visual illusion where stationary objects in the visual periphery are perceived as moving centrifugally, while objects moving as fast as 7°/s toward the fovea are perceived as stationary. These behavioral results were quantitatively explained by a Bayesian observer that has a strong centrifugal prior. This prior is consistent with both the prevalence of centrifugal motions in the visual periphery and a centrifugal bias of direction tuning in cortical area MT, supporting the notion that visual processing mirrors its input statistics.
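The Bayesian-observer explanation in this abstract reduces, in its simplest Gaussian form, to a precision-weighted average of the noisy speed measurement and the prior. The sketch below is an illustrative toy, not the paper's fitted model; all parameter values (prior mean, noise widths) are invented, with positive speeds denoting centrifugal motion:

```python
def posterior_speed(measured, sigma_like, prior_mean, sigma_prior):
    """Posterior mean of a Gaussian-Gaussian Bayesian observer:
    the precision-weighted average of measurement and prior."""
    w_like = 1.0 / sigma_like**2
    w_prior = 1.0 / sigma_prior**2
    return (w_like * measured + w_prior * prior_mean) / (w_like + w_prior)

# Centrifugal prior (positive = away from the fovea); illustrative values.
prior_mean, sigma_prior = 7.0, 8.0
sigma_like = 8.0  # noisy peripheral speed measurement

stationary = posterior_speed(0.0, sigma_like, prior_mean, sigma_prior)
# stationary > 0: a stationary stimulus is perceived as drifting centrifugally
toward_fovea = posterior_speed(-7.0, sigma_like, prior_mean, sigma_prior)
# toward_fovea == 0: motion toward the fovea can be perceived as stationary
```

With these toy numbers the two illusions in the abstract fall out directly: a measurement of zero is pulled centrifugally by the prior, while a centripetal measurement that exactly offsets the prior yields a percept of no motion.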
Melaina T. Vinski; Scott Watter: Being a grump only makes things worse: A transactional account of acute stress on mind wandering. Journal Article In: Frontiers in Psychology, vol. 4, pp. 730, 2013. The current work investigates the influence of acute stress on mind wandering. Participants completed the Positive and Negative Affect Schedule as a measure of baseline negative mood, and were randomly assigned to either the high-stress or low-stress version of the Trier Social Stress Test. Participants then completed the Sustained Attention to Response Task as a measure of mind-wandering behavior. In Experiment 1, participants reporting a high degree of negative mood who were exposed to the high-stress condition showed more variable response times, made more errors, and were more likely to report thinking about the stressor than participants reporting a low level of negative mood. These effects diminished throughout task performance, suggesting that acute stress induces a temporary mind-wandering state in participants with a negative mood. The temporary affect-dependent deficits observed in Experiment 1 were replicated in Experiment 2, with the high negative mood participants demonstrating limited resource availability (indicated by pupil diameter) immediately following stress induction. These experiments provide novel evidence to suggest that acute psychosocial stress briefly suppresses the availability of cognitive resources and promotes an internally oriented focus of attention in participants with a negative mood.
Jayalakshmi Viswanathan; Jason J. S. Barton: The global effect for antisaccades. Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 247–259, 2013. In the global effect, prosaccades are deviated to a position intermediate between two targets or between a distractor and a target, which may reflect spatial averaging in a map encoded by the superior colliculus. Antisaccades differ from prosaccades in that they dissociate the locations of the stimulus and goal and generate weaker collicular activity. We used these antisaccade properties to determine whether the global effect was generated in stimulus or goal computations, and whether the global effect would be larger for antisaccades, as predicted by collicular averaging. In the first two experiments, human subjects performed antisaccades while distractors were placed in the vicinity of either the stimulus or the saccadic goal. Global effects occurred only for goal-related and not for stimulus-related distractors, indicating that this effect emerges from interactions with motor representations. In the last experiment, subjects performed prosaccades and antisaccades with and without goal-related distractors. When the results were adjusted for differences in response latency, the global effect for rapid responses was three to four times larger for antisaccades than for prosaccades. Finally, we compared our findings with predictions from collicular models, to quantitatively test the spatial averaging hypothesis: we found that our results were consistent with the predictions of a collicular model. We conclude that the antisaccade global effect shows properties compatible with spatial averaging in collicular maps and likely originates in layers with neural activity related to goal rather than stimulus representations.
Melissa L. -H. Võ; Jeremy M. Wolfe: The interplay of episodic and semantic memory in guiding repeated search in scenes. Journal Article In: Cognition, vol. 126, no. 2, pp. 198–212, 2013. It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Vo & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance (e.g., knowing that a faucet is usually located near a sink) that guidance by incidental episodic memory (having seen that faucet previously) was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: Decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general and particularly during search in naturalistic scenes.
Adrian Mühlenen; Derrick G. Watson; Daniel O. A. Gunnell: Blink and you won't miss it: The preview benefit in visual marking survives internally generated eyeblinks. Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1279–1290, 2013. People are able to ignore old (previewed) stimuli in order to prioritize the processing of newly appearing items, the preview benefit (D. G. Watson & G. W. Humphreys, 1997, "Visual marking: Prioritizing selection for new objects by top-down attentional inhibition of old objects," Psychological Review, Vol. 104, pp. 90-122). According to the inhibitory visual marking account, this is achieved by the top-down and capacity-limited inhibition of old stimuli already in the field, which leads to a selection advantage for new items when they appear. In contrast, according to the abrupt luminance onset account (M. Donk & J. Theeuwes, 2001, "Visual marking beside the mark: Prioritizing selection by abrupt onsets," Perception & Psychophysics, Vol. 63, pp. 891-900), new items capture attention automatically simply because they generate luminance onset signals. Here, we demonstrate that new items can be partially prioritized over old items even when they appear during an eyeblink and so have no unique luminance transients associated with their appearance. Overall, the findings suggest that both the inhibition of old items and attention capture by luminance changes contribute to time-based selection.
Lisa Stockhausen; Sara Koeser; Sabine Sczesny: The gender typicality of faces and its impact on visual processing and on hiring decisions. Journal Article In: Experimental Psychology, vol. 60, no. 6, pp. 444–452, 2013. Past research has shown that the gender typicality of applicants' faces affects leadership selection irrespective of a candidate's gender: A masculine facial appearance is congruent with masculine-typed leadership roles, thus masculine-looking applicants are hired more certainly than feminine-looking ones. In the present study, we extended this line of research by investigating hiring decisions for both masculine- and feminine-typed professional roles. Furthermore, we used eye tracking to examine the visual exploration of applicants' portraits. Our results indicate that masculine-looking applicants were favored for the masculine-typed role (leader) and feminine-looking applicants for the feminine-typed role (team member). Eye movement patterns showed that information about gender category and facial appearance was integrated during first fixations of the portraits. Hiring decisions, however, were not based on this initial analysis, but occurred at a second stage, when the portrait was viewed in the context of considering the applicant for a specific job.
Julian M. Wallace; Michael K. Chiu; Anirvan S. Nandy; Bosco S. Tjan: Crowding during restricted and free viewing. Journal Article In: Vision Research, vol. 84, pp. 50–59, 2013. Crowding impairs the perception of form in peripheral vision. It is likely to be a key limiting factor of form vision in patients without central vision. Crowding has been extensively studied in normally sighted individuals, typically with a stimulus duration of a few hundred milliseconds to avoid eye movements. These restricted testing conditions do not reflect the natural behavior of a patient with central field loss. Could unlimited stimulus duration and unrestricted eye movements change the properties of crowding in any fundamental way? We studied letter identification in the peripheral vision of normally sighted observers in three conditions: (i) a fixation condition with a brief stimulus presentation of 250 ms, (ii) another fixation condition but with an unlimited viewing time, and (iii) an unrestricted eye movement condition with an artificial central scotoma and an unlimited viewing time. In all conditions, contrast thresholds were measured as a function of target-to-flanker spacing, from which we estimated the spatial extent of crowding in terms of critical spacing. We found that presentation duration beyond 250 ms had little effect on critical spacing with stable gaze. With unrestricted eye movements and a simulated central scotoma, we found a large variability in critical spacing across observers, but more importantly, the variability in critical spacing was well correlated with the variability in target eccentricity. Our results confirm that the large body of findings on crowding made with briefly presented stimuli remains relevant to conditions where viewing time is unconstrained. Our results further suggest that impaired oculomotor control associated with central vision loss can confound peripheral form vision beyond the limits imposed by crowding.
Eckart Zimmermann; M. Concetta Morrone; David C. Burr: Spatial position information accumulates steadily over time. Journal Article In: Journal of Neuroscience, vol. 33, no. 47, pp. 18396–18401, 2013. One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked.
Xiao Lin Zhu; Shu Ping Tan; Fu De Yang; Wei Sun; Chong Sheng Song; Jie Feng Cui; Yan Li Zhao; Feng Mei Fan; Ya Jun Li; Yun Long Tan; Yi Zhuang Zou: Visual scanning of emotional faces in schizophrenia. Journal Article In: Neuroscience Letters, vol. 552, pp. 46–51, 2013. This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they showed significantly more direct fixations that lasted longer to interest areas (IAs), such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of number of fixations to IAs or total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and lower IA fixation number/s for negative faces. These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have special deficits in processing negative faces, and negative symptoms may affect visual scanning parameters.
Weina Zhu; Jan Drewes; Karl R. Gegenfurtner: Animal detection in natural images: effects of color and image database. Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75816, 2013. The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used.
Melonie Williams; Pierre Pouget; Leanne Boucher; Geoffrey F. Woodman: Visual-spatial attention aids the maintenance of object representations in visual working memory. Journal Article In: Memory & Cognition, vol. 41, no. 5, pp. 698–715, 2013. Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers' eye movements while they remembered a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval should impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy, even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations.
Tracey A. Williams; Melanie A. Porter; Robyn Langdon: Viewing social scenes: A visual scan-path study comparing fragile X syndrome and Williams syndrome. Journal Article In: Journal of Autism and Developmental Disorders, vol. 43, no. 8, pp. 1880–1894, 2013. Fragile X syndrome (FXS) and Williams syndrome (WS) are both genetic disorders which present with similar cognitive-behavioral problems, but distinct social phenotypes. Despite these social differences, both syndromes display poor social relations which may result from abnormal social processing. This study aimed to manipulate the location of socially salient information within scenes to investigate the visual attentional mechanisms of capture, disengagement, and/or general engagement. Findings revealed that individuals with FXS avoid social information presented centrally, at least initially. The WS findings, on the other hand, provided some evidence that difficulties with attentional disengagement, rather than attentional capture, may play a role in the WS social phenotype. These findings are discussed in relation to the distinct social phenotypes of these two disorders.
Vickie M. Williamson; Mary Hegarty; Ghislain Deslongchamps; Kenneth C. Williamson; Mary Jane Shultz: Identifying student use of ball-and-stick images versus electrostatic potential map images via eye tracking. Journal Article In: Journal of Chemical Education, vol. 90, no. 2, pp. 159–164, 2013. This pilot study examined students' use of ball-and-stick images versus electrostatic potential maps when asked questions about electron density, positive charge, proton attack, and hydroxide attack with six different molecules (two alcohols, two carboxylic acids, and two hydroxycarboxylic acids). Students' viewing of these dual images was measured by monitoring eye fixations of the students while they read and answered questions. Results showed that students spent significantly more time with the ball-and-stick image when asked questions about proton or hydroxide attack, but equal time on the images when asked about electron density or positive charge. When comparing accuracy and time spent on the images, students who spent more time on the ball-and-stick when asked about positive charge were less likely to be correct, while those who spent more time with the potential map were more likely to be correct. The paper serves to introduce readers to eye-tracker data and calls for replication with a larger subject pool and for the inclusion of eye tracking as a chemical education research tool.
Paula Winke; Susan M. Gass; Tetyana Sydorenko: Factors influencing the use of captions by foreign language learners: An eye-tracking study. Journal Article In: The Modern Language Journal, vol. 97, no. 1, pp. 254–275, 2013. This study investigates caption-reading behavior by foreign language (L2) learners and, through eye-tracking methodology, explores the extent to which the relationship between the native and target language affects that behavior. Second-year (4th semester) English-speaking learners of Arabic, Chinese, Russian, and Spanish watched 2 videos differing in content familiarity, each dubbed and captioned in the target language. Results indicated that time spent on captions differed significantly by language: Arabic learners spent more time on captions than learners of Spanish and Russian. A significant interaction between language and content familiarity occurred: Chinese learners spent less time on captions in the unfamiliar content video than the familiar, while others spent comparable times on each. Based on dual-processing and cognitive load theories, we posit that the Chinese learners experienced a split-attention effect when verbal processing was difficult and that, overall, captioning benefits during the 4th semester of language learning are constrained by L2 differences, including differences in script, vocabulary knowledge, concomitant L2 proficiency, and instructional methods. Results are triangulated with qualitative findings from interviews.
Stephanie C. Wissig; Carlyn A. Patterson; Adam Kohn: Adaptation improves performance on a visual search task. Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 15, 2013. Temporal context, or adaptation, profoundly affects visual perception. Despite the strength and prevalence of adaptation effects, their functional role in visual processing remains unclear. The effects of spatial context and their functional role are better understood: these effects highlight features that differ from their surroundings and determine stimulus salience. Similarities in the perceptual and physiological effects of spatial and temporal context raise the possibility that they serve similar functions. We therefore tested the possibility that adaptation can enhance stimulus salience. We measured the effects of prolonged (40 s) adaptation to a counterphase grating on performance in a search task in which targets were defined by an orientation offset relative to a background of distracters. We found that, for targets with small orientation offsets, adaptation reduced reaction times and decreased the number of saccades made to find targets. Our results provide evidence that adaptation may function to highlight features that differ from the temporal context in which they are embedded.
Felicity D. A. Wolohan; Sarah J. V. Bennett; Trevor J. Crawford: Females and attention to eye gaze: Effects of the menstrual cycle. Journal Article In: Experimental Brain Research, vol. 227, no. 3, pp. 379–386, 2013. It is well known that an observer will attend to the location cued by another's eye gaze and that in some circumstances, this effect is enhanced when the emotion expressed is threat-related. This study explored whether attention to the gaze of threat-related faces is potentiated in the luteal phase of the menstrual cycle, when detection of threat is suggested to be enhanced, compared to the follicular phase. Female participants were tested on a gaze cueing task in their luteal (N = 13) or follicular phase (N = 15). Participants were presented with various emotional expressions with an averted eye gaze that was either spatially congruent or incongruent with a forthcoming target. Females in the luteal phase responded faster overall to targets on trials with a 200-ms stimulus onset asynchrony interval. The results suggest that during the luteal phase, females show a general and automatic hypersensitivity to respond to stimuli associated with socially and emotionally relevant cues. This may be a part of an adaptive biological mechanism to protect foetal development.
Jason H. Wong; Matthew S. Peterson: What we remember affects how we see: Spatial working memory steers saccade programming. Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 2, pp. 308–321, 2013. Relationships between visual attention, saccade programming, and visual working memory have been hypothesized for over a decade. Awh, Jonides, and Reuter-Lorenz (Journal of Experimental Psychology: Human Perception and Performance 24(3):780-90, 1998) and Awh et al. (Psychological Science 10(5):433-437, 1999) proposed that rehearsing a location in memory also leads to enhanced attentional processing at that location. In regard to eye movements, Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) found that holding a location in working memory affects saccade programming, albeit negatively. In three experiments, we attempted to replicate the findings of Belopolsky and Theeuwes (2009) and determine whether the spatial memory effect can occur in other saccade-cuing paradigms, including endogenous central arrow cues and exogenous irrelevant singletons. In the first experiment, our results were the opposite of those in Belopolsky and Theeuwes (2009), in that we found facilitation (shorter saccade latencies) instead of inhibition when the saccade target matched the region in spatial working memory. In Experiment 2, we sought to determine whether the spatial working memory effect would generalize to other endogenous cuing tasks, such as a central arrow that pointed to one of six possible peripheral locations. As in Experiment 1, we found that saccade programming was facilitated when the cued location coincided with the saccade target. In Experiment 3, we explored how spatial memory interacts with other types of cues, such as a peripheral color singleton target or irrelevant onset. In both cases, the eyes were more likely to go to either singleton when it coincided with the location held in spatial working memory. On the basis of these results, we conclude that spatial working memory and saccade programming are likely to share common overlapping circuitry.
Heather Cleland Woods; Christoph Scheepers; K. A. Ross; Colin A. Espie; Stephany M. Biello What are you looking at? Moving toward an attentional timeline in insomnia: A novel semantic eye tracking study Journal Article In: Sleep, vol. 36, no. 10, pp. 1491–1499, 2013. @article{Woods2013, STUDY OBJECTIVES: To date, cognitive probe paradigms have been used in different guises to obtain reaction time measurements suggestive of an attention bias towards sleep in insomnia. This study adopts a methodology which is novel to sleep research to obtain a continual record of where the eyes - and therefore attention - are being allocated with regard to sleep and neutral stimuli. DESIGN: A head-mounted eye tracker (EyeLink II, SR Research, Ontario, Canada) was used to monitor eye movements with respect to two words presented on a computer screen, with one word being a sleep positive, sleep negative, or neutral word above or below a second distracter pseudoword. Probability and reaction times were the outcome measures. PARTICIPANTS: Sleep group classification was determined by screening interview and PSQI score (> 8 = insomnia, < 3 = good sleeper). MEASUREMENTS AND RESULTS: Those individuals with insomnia took longer to fixate on the target word and remained fixated for less time than the good sleep controls. Word saliency had an effect, with longer first fixations on positive and negative sleep words in both sleep groups, with the largest effect sizes seen in the insomnia group. CONCLUSIONS: This overall delay in those with insomnia with regard to vigilance and maintaining attention on the target words moves away from previous attention bias work showing a bias towards sleep stimuli, particularly negative stimuli, but is suggestive of a neurocognitive deficit in line with recent research. |
Nicola M. Wöstmann; Désirée S. Aichert; Anna Costa; Katya Rubia; Hans-Jürgen Möller; Ulrich Ettinger Reliability and plasticity of response inhibition and interference control Journal Article In: Brain and Cognition, vol. 81, no. 1, pp. 82–94, 2013. @article{Woestmann2013, This study investigated the internal reliability, temporal stability and plasticity of commonly used measures of inhibition-related functions. Stop-signal, go/no-go, antisaccade, Simon, Eriksen flanker, Stroop and Continuous Performance tasks were administered twice to 23 healthy participants over a period of approximately 11 weeks in order to assess test-retest correlations, internal consistency (Cronbach's alpha), and systematic between- as well as within-session performance changes. Most of the inhibition-related measures showed good test-retest reliabilities and internal consistencies, with the exception of the stop-signal reaction time measure, which showed poor reliability. Generally, no systematic performance changes were observed across the two assessments, with the exception of four variables of the Eriksen flanker, Simon and Stroop tasks, which showed reduced variability of reaction time and an improvement in response time for incongruent trials at the second assessment. Predominantly stable performance within one test session was shown for most measures. Overall, these results are informative for studies with designs requiring temporally stable parameters (e.g., genetic or longitudinal treatment studies). |
Timothy J. Wright; Walter R. Boot; Chelsea S. Morgan Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness Journal Article In: Acta Psychologica, vol. 144, no. 1, pp. 6–11, 2013. @article{Wright2013, Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB. |
Chia-Chien Wu; Eileen Kowler Timing of saccadic eye movements during visual search for multiple targets Journal Article In: Journal of Vision, vol. 13, no. 11, pp. 11–11, 2013. @article{Wu2013, Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. |
Esther X. W. Wu; Syed O. Gilani; Jeroen J. A. van Boxtel; Ido Amihai; Fook K. Chua; Shih-Cheng Yen Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 17–17, 2013. @article{Wu2013a, Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Selection of visual information for lightness judgements by eye movements Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–8, 2013. @article{Toscani2013, When judging the lightness of objects, the visual system has to take into account many factors such as shading, scene geometry, occlusions or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated it using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Optimal sampling of visual information for lightness judgments Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 27, pp. 11163–11168, 2013. @article{Toscani2013a, The variable resolution and limited processing capacity of the human visual system require us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. |
R. Blythe Towal; Milica Mormann; Christof Koch Simultaneous modeling of visual saliency and value computation improves predictions of economic choice Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 40, pp. E3858–E3867, 2013. @article{Towal2013, Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. |
Yusuke Uchida; Daisuke Kudoh; Takatoshi Higuchi; Masaaki Honda; Kazuyuki Kanosue Dynamic visual acuity in baseball players is due to superior tracking abilities Journal Article In: Medicine and Science in Sports and Exercise, vol. 45, no. 2, pp. 319–325, 2013. @article{Uchida2013, PURPOSE: Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in baseball players than in nonplayers. Although the better DVA of baseball players has been attributed to a better ability to track moving objects, it might be derived from the ability to perceive an object even in the presence of a great distance between the image on the retina and the fovea (retinal error). However, the ability to perceive moving visual stimuli has not been compared between baseball players and nonplayers. METHODS: To clarify this, we quantitatively measured abilities of eye movement and visual perception using moving Landolt C rings in baseball players and nonplayers. RESULTS: Baseball players could achieve high DVA with significantly faster eye movement at shorter latencies than nonplayers. There was no difference in the ability to perceive moving objects' images projected onto the retina between baseball players and nonplayers. CONCLUSIONS: These results suggest that the better DVA of baseball players is primarily due to a better ability to track moving objects with their eyes rather than to improved perception of moving images on the retina. This skill is probably obtained through baseball training. |
Matteo Valsecchi; Matteo Toscani; Karl R. Gegenfurtner Perceived numerosity is reduced in peripheral vision Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 7–7, 2013. @article{Valsecchi2013, In four experiments we investigated the perception of numerosity in the peripheral visual field. We found that the perceived numerosity of a peripheral cloud of dots was judged to be lower than that of a central cloud of dots, particularly when the dots were highly clustered. Blurring the stimuli according to peripheral spatial frequency sensitivity did not abolish the effect and had little impact on numerosity judgments. In a dedicated control experiment we ruled out that the reduction in peripheral perceived numerosity is secondary to a reduction of perceived stimulus size. We suggest that visual crowding might be at the origin of the observed reduction in peripheral perceived numerosity, implying that numerosity could be partly estimated through the individuation of the elements populating the array. |
Marlies E. van Bochove; Lise Van Der Haegen; Wim Notebaert; Tom Verguts Blinking predicts enhanced cognitive control Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 13, no. 2, pp. 346–354, 2013. @article{Bochove2013, Recent models have suggested an important role for neuromodulation in explaining trial-to-trial adaptations in cognitive control. The adaptation-by-binding model (Verguts & Notebaert, Psychological Review, 115(2), 518-525, 2008), for instance, suggests that increased cognitive control in response to conflict (e.g., an incongruent flanker stimulus) is the result of stronger binding of stimulus, action, and context representations, mediated by neuromodulators like dopamine (DA) and/or norepinephrine (NE). We presented a flanker task and used the Gratton effect (smaller congruency effect following incongruent trials) as an index of cognitive control. We investigated the Gratton effect in relation to eye blinks (DA related) and pupil dilation (NE related). The results for pupil dilation were not unequivocal, but eye blinks clearly modulated the Gratton effect: The Gratton effect was enhanced after a blink trial, relative to after a no-blink trial, even when controlling for correlated variables. The latter suggests an important role for DA in cognitive control on a trial-to-trial basis. |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Anita Jansen Desire lies in the eyes: Attention bias for chocolate is related to craving and self-endorsed eating permission Journal Article In: Appetite, vol. 70, pp. 81–89, 2013. @article{Werthmann2013, The present study tested the impact of experimentally manipulated perceived availability of chocolate on attention for chocolate stimuli, momentary (state) craving for chocolate, and consumption of chocolate in healthy-weight female students. It was hypothesized that eating forbiddance would be related to attentional avoidance (thus a diminished attention focus on food cues in an attempt to prevent oneself from processing food cues) and that eating motivation would be related to attentional approach (thus a maintained attentional focus on food cues). High chronic chocolate cravers (n = 40) and low cravers (n = 40) participated in one of four perceived availability contexts (required to eat, forbidden to eat, individual choice to eat, and 50% chance to eat) following a brief chocolate exposure. Attention for chocolate was measured using eye-tracking; momentary craving by self-report; and the consumption of chocolate was assessed from direct observation. The perceived availability of chocolate did not significantly influence attention allocation for chocolate stimuli, momentary craving, or chocolate intake. High chocolate cravers reported significantly higher momentary craving for chocolate (d = 1.29, p < .001), and showed a longer initial duration of gaze on chocolate, than low cravers (d = 0.63, p < .01). In contrast, participants who indicated during the manipulation check that they would not have permitted themselves to eat chocolate, irrespective of the availability instruction they received, showed significantly less craving (d = 0.96, p < .01) and reduced total dwell time for chocolate stimuli than participants who permitted themselves to eat chocolate (d = 0.53, p < .05). Thus, this study provides evidence that attention biases for food stimuli reflect inter-individual differences in eating motivation, such as chronic chocolate craving and self-endorsed eating permission. |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Karin Mogg; Brendan P. Bradley; Anita Jansen Attention bias for food is independent of restraint in healthy weight individuals - An eye tracking study Journal Article In: Eating Behaviors, vol. 14, no. 3, pp. 397–400, 2013. @article{Werthmann2013a, Objective: Restrained eating style and weight status are highly correlated. Though both have been associated with an attentional bias for food cues, in prior research restraint and BMI were often confounded. The aim of the present study was to determine the existence and nature of an attention bias for food cues in healthy-weight female restrained and unrestrained eaters, when matching the two groups on BMI. Method: Attention biases for food cues were measured by recordings of eye movements during a visual probe task with pictorial food versus non-food stimuli. Healthy weight high restrained (n = 24) and low restrained eaters (n = 21) were matched on BMI in an attempt to unconfound the effects of restraint and weight on attention allocation patterns. Results: All participants showed elevated attention biases for food stimuli in comparison to neutral stimuli, independent of restraint status. Discussion: These findings suggest that attention biases for food-related cues are common for healthy weight women and show that restrained eating (per se) is not related to biased processing of food stimuli, at least not in healthy weight participants. |
Gregory L. West; Naseem Al-Aidroos; Jay Pratt Action video game experience affects oculomotor performance Journal Article In: Acta Psychologica, vol. 142, no. 1, pp. 38–42, 2013. @article{West2013, Action video games have been shown to affect a variety of visual and cognitive processes. There is, however, little evidence of whether playing video games can also affect motor action. To investigate the potential link between experience playing action video games and changes in oculomotor action, we tested habitual action video game players (VGPs) and non-video game players (NVGPs) in a saccadic trajectory deviation task. We demonstrate that spatial curvature of a saccadic trajectory towards or away from a distractor is profoundly different between VGPs and NVGPs. In addition, task performance accuracy improved over time only in VGPs. Results are discussed in the context of the competing interplay between stimulus-driven motor programming and top-down inhibition during oculomotor execution. |
Alex L. White; Martin Rolfs; Marisa Carrasco Adaptive deployment of spatial and feature-based attention before saccades Journal Article In: Vision Research, vol. 85, pp. 26–35, 2013. @article{White2013, What you see depends not only on where you are looking but also on where you will look next. The pre-saccadic attention shift is an automatic enhancement of visual sensitivity at the target of the next saccade. We investigated whether and how perceptual factors independent of the oculomotor plan modulate pre-saccadic attention within and across trials. Observers made saccades to one (the target) of six patches of moving dots and discriminated a brief luminance pulse (the probe) that appeared at an unpredictable location. Sensitivity to the probe was always higher at the target's location (spatial attention), and this attention effect was stronger if the previous probe appeared at the previous target's location. Furthermore, sensitivity was higher for probes moving in directions similar to the target's direction (feature-based attention), but only when the previous probe moved in the same direction as the previous target. Therefore, implicit cognitive processes permeate pre-saccadic attention, so that-contingent on recent experience-it flexibly distributes resources to potentially relevant locations and features. |
2012 |
Stefan M. Wierda; Hedderik van Rijn; Niels A. Taatgen; Sander Martens Pupil dilation deconvolution reveals the dynamics of attention at high temporal resolution Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 22, pp. 8456–8460, 2012. @article{Wierda2012, The size of the human pupil increases as a function of mental effort. However, this response is slow, and therefore its use is thought to be limited to measurements of slow tasks or tasks in which meaningful events are temporally well separated. Here we show that high-temporal-resolution tracking of attention and cognitive processes can be obtained from the slow pupillary response. Using automated dilation deconvolution, we isolated and tracked the dynamics of attention in a fast-paced temporal attention task, allowing us to uncover the amount of mental activity that is critical for conscious perception of relevant stimuli. We thus found evidence for specific temporal expectancy effects in attention that have eluded detection using neuroimaging methods such as EEG. Combining this approach with other neuroimaging techniques can open many research opportunities to study the temporal dynamics of the mind's inner eye in great detail. |
Jan M. Wiener; Christoph Hölscher; Simon Büchner; Lars Konieczny Gaze behaviour during space perception and spatial decision making Journal Article In: Psychological Research, vol. 76, no. 6, pp. 713–729, 2012. @article{Wiener2012, A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screen-shots of choice points taken in large virtual environments. Each screen-shot depicted alternative path options. In Experiment 1, participants had to decide between them in order to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take, as if following a guided route. Subsequently they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 & 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making. |
Dagmar A. Wismeijer; Karl R. Gegenfurtner Orientation of noisy texture affects saccade direction during free viewing Journal Article In: Vision Research, vol. 58, pp. 19–26, 2012. @article{Wismeijer2012, We redirect our eyes approximately three times per second to bring a new part of our environment onto our fovea (Findlay & Gilchrist, 2003). How a scanning path is planned is still an unsolved matter. Most research to date has focused on the question of target selection: how is the next fixation location, or saccade target, selected. Here we investigated the direction of spontaneous saccades, rather than fixation locations per se. We measured eye movements while observers were freely viewing noisy textures: oriented Gabors embedded in either pink (1/f) noise or pixel noise, of which they later had to report their orientation. Our results show that a significant percentage of the spontaneous saccades were directed along the orientation of the stimulus. These results suggest that observers may have used an underlying eye movement strategy involving the search for contour endings. |
Felicity D. A. Wolohan; Trevor J. Crawford The anti-orienting phenomenon revisited: Effects of gaze cues on antisaccade performance Journal Article In: Experimental Brain Research, vol. 221, no. 4, pp. 385–392, 2012. @article{Wolohan2012, When the eye gaze of a face is congruent with the direction of an upcoming target, saccadic eye movements of the observer towards that target are generated more quickly, in comparison to eye gaze incongruent with the direction of the target. This work examined the conflict in an antisaccade task, when eye gaze points towards the target but the saccadic eye movement should be triggered in the opposite direction. In a gaze cueing paradigm, a central face provided an attentional gaze cue towards the target or away from the target. Participants (N = 38) generated pro- and anti-saccades to peripheral targets that were congruent or incongruent with the previous gaze cue. Paradoxically, facilitatory effects of a gaze cue towards the target were observed for both the pro- and anti-saccade tasks. The results are consistent with the idea that eye gaze cues are processed in the task set that is compatible with the saccade programme. Thus, in an antisaccade paradigm participants may anti-orient with respect to the gaze cue, resulting in faster saccades on trials when the gaze cue is towards the target. The results resemble a previous observation by Fischer and Weber (1996) using low-level peripheral cues. The current study extends this finding to include central socially communicative cues. |
Daw-An Wu; Shinsuke Shimojo; Stephanie W. Wang; Colin F. Camerer Shared visual attention reduces hindsight bias Journal Article In: Psychological Science, vol. 23, no. 12, pp. 1524–1533, 2012. @article{Wu2012, Hindsight bias is the tendency to retrospectively think of outcomes as being more foreseeable than they actually were. It is a robust judgment bias and is difficult to correct (or "debias"). In the experiments reported here, we used a visual paradigm in which performers decided whether blurred photos contained humans. Evaluators, who saw the photos unblurred and thus knew whether a human was present, estimated the proportion of participants who guessed whether a human was present. The evaluators exhibited visual hindsight bias in a way that matched earlier data from judgments of historical events surprisingly closely. Using eye tracking, we showed that a higher correlation between the gaze patterns of performers and evaluators (shared attention) is associated with lower hindsight bias. This association was validated by a causal method for debiasing: Showing the gaze patterns of the performers to the evaluators as they viewed the stimuli reduced the extent of hindsight bias. |
Lingdan Wu; Jie Pu; John J. B. Allen; Paul Pauli Recognition of facial expressions in individuals with elevated levels of depressive symptoms: An eye-movement study Journal Article In: Depression Research and Treatment, pp. 7, 2012. @article{Wu2012a, Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated greater leftwards attention bias which has been argued to be an indicator of hyperactivation of right hemisphere during facial expression recognition. |
Brad Wyble; Mary C. Potter; Marcelo Mattar RSVP in orbit: Identification of single and dual targets in motion Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 553–562, 2012. @article{Wyble2012, |
Björn N. S. Vlaskamp; Anna Schubö Eye movements during action preparation Journal Article In: Experimental Brain Research, vol. 216, no. 3, pp. 463–472, 2012. @article{Vlaskamp2012, Looking at the actions of others activates representations of similar actions of one's own; that is, the action resonates. This may facilitate or interfere with the actions that one intends to make. We asked whether people promote or block those effects by making eye movements to or away from the actions of others. We investigated gaze behavior with a cup-clinking task: An actor shown on a video grabbed a cup and moved it toward the participant, who next grabbed his own cup in the 'same' or in a different, 'complementary', way. In the 'same' condition, participants mostly looked at the place where the actor held the cup. In the 'complementary' condition, gaze behavior was similar at the start of the actor's action. To our surprise, as the action reached completion, participants started to look at the side of the cup that corresponded to the grabbing instruction for their own action. A second experiment showed that this effect grew with the delay of the go-signal. This indicates that the effect may serve to support memorizing the instructed action. The bottom line of the study is that passively viewed scenes (passive in the sense that nothing in the observed scene is manipulated by the viewer) are scanned to support preparation of actions that one intends to make. We discuss how this finding relates to action resonance and to links between representations of actions and objects. |
Melissa L. -H. Võ; Jeremy M. Wolfe When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, pp. 23–41, 2012. @article{Vo2012, One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search. |
H. X. Wang; Jeremy Freeman; Elisha P. Merriam; Uri Hasson; David J. Heeger Temporal eye movement strategies during naturalistic viewing Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 16–16, 2012. @article{Wang2012d, The deployment of eye movements to complex spatiotemporal stimuli likely involves a variety of cognitive factors. However, eye movements to movies are surprisingly reliable both within and across observers. We exploited and manipulated that reliability to characterize observers' temporal viewing strategies while they viewed naturalistic movies. Introducing cuts and scrambling the temporal order of the resulting clips systematically changed eye movement reliability. We developed a computational model that exhibited this behavior and provided an excellent fit to the measured eye movement reliability. The model assumed that observers searched for, found, and tracked a point of interest and that this process reset when there was a cut. The model did not require that eye movements depend on temporal context in any other way, and it managed to describe eye movements consistently across different observers and two movie sequences. Thus, we found no evidence for the integration of information over long time scales (greater than a second). The results are consistent with the idea that observers employ a simple tracking strategy even while viewing complex, engaging naturalistic stimuli. |
Hsueh-Cheng Wang; Marc Pomplun The attraction of visual attention to texts in real-world scenes Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 26–26, 2012. @article{Wang2012a, When we look at real-world scenes, attention seems disproportionately attracted by texts that are embedded in these scenes, for instance, on signs or billboards. The present study was aimed at verifying the existence of this bias and investigating its underlying factors. For this purpose, data from a previous experiment were reanalyzed and four new experiments measuring eye movements during the viewing of real-world scenes were conducted. By pairing text objects with matching control objects and regions, the following main results were obtained: (a) Greater fixation probability and shorter minimum fixation distance of texts confirmed the higher attractiveness of texts; (b) the locations where texts are typically placed contribute partially to this effect; (c) specific visual features of texts, rather than typically salient features (e.g., color, orientation, and contrast), are the main attractors of attention; (d) the meaningfulness of texts does not add to their attentional capture; and (e) the attraction of attention depends to some extent on the observer's familiarity with the writing system and language of a given text. |
Stefan Van der Stigchel; Roderick C. L. L. Reichenbach; Arie J. Wester; Tanja C. W. Nijboer Antisaccade performance in Korsakoff patients reveals deficits in oculomotor inhibition Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 34, no. 8, pp. 876–886, 2012. @article{VanderStigchel2012c, Oculomotor inhibition reflects the ability to suppress an unwanted eye movement. The goal of the present study was to assess oculomotor inhibition in patients with Korsakoff's syndrome (KS). To this end, an antisaccade task was employed in which an eye movement towards an onset stimulus has to be inhibited, and a voluntary saccade has to be executed in the opposite direction. Compared to the results of a matched control group, patients showed a higher percentage of intrusive saccades, made more antisaccade errors, and showed longer latencies on prosaccade trials. These results clearly show that oculomotor inhibition is impaired in KS. Part of these deficits in oculomotor inhibition may be explained by neuronal atrophy in the frontal areas, which is generally associated with KS. |
Amanda E. van Lamsweerde; Melissa R. Beck Attention shifts or volatile representations: What causes binding deficits in visual working memory? Journal Article In: Visual Cognition, vol. 20, no. 7, pp. 771–792, 2012. @article{Lamsweerde2012, The current study tested two hypotheses of feature binding memory: the attention hypothesis, which suggests that attention is needed to maintain feature bindings in visual working memory (VWM), and the volatile representation hypothesis, which suggests that feature bindings in memory are volatile and easily overwritten but do not require sustained attention. Experiment 1 tested the attention hypothesis by measuring shifts of overt attention during the study array of a change detection task; serial shifts of attention did not disrupt feature bindings. Experiments 2 and 3 encouraged encoding of more volatile (Experiment 2) or durable (Experiment 3) representations during the study array. Binding change detection performance was impaired in Experiment 2, but not in Experiment 3, suggesting that binding performance is impaired when encoding supports a less durable memory representation. Together, these results suggest that although feature bindings may be volatile and easily overwritten, attention is not required to maintain feature bindings in VWM. |
Hedderik van Rijn; Jelle R. Dalenberg; Jelmer P. Borst; Simone A. Sprenger Pupil dilation co-varies with memory strength of individual traces in a delayed response paired-associate task Journal Article In: PLoS ONE, vol. 7, no. 12, pp. e51134, 2012. @article{Rijn2012, Studies on cognitive effort have shown that pupil dilation is a reliable indicator of memory load. However, it is conceivable that there are other sources of effort involved in memory that also affect pupil dilation. One of these is the ease with which an item can be retrieved from memory. Here, we present the results of an experiment in which we studied the way in which pupil dilation acts as an online marker for memory processing during the retrieval of paired associates while reducing confounds associated with motor responses. Paired associates were categorized into sets containing either 4 or 7 items. After learning the paired associates once, pupil dilation was measured during the presentation of the retrieval cue during four repetitions of each set. Memory strength was operationalized as the number of repetitions (frequency) and set-size, since having more items per set results in a lower average recency. Dilation decreased with increased memory strength, supporting the hypothesis that the amplitude of the evoked pupillary response correlates positively with retrieval effort. Thus, while many studies have shown that "memory load" influences pupil dilation, our results indicate that the task-evoked pupillary response is also sensitive to the experimentally manipulated memory strength of individual items. As these effects were observed well before the response had been given, this study also suggests that pupil dilation can be used to assess an item's memory strength without requiring an overt response. |
Wieske van Zoest; Mieke Donk; Stefan Van der Stigchel Stimulus-salience and the time-course of saccade trajectory deviations Journal Article In: Journal of Vision, vol. 12, no. 8, pp. 1–16, 2012. @article{Zoest2012, The deviation of a saccade trajectory is a measure of the oculomotor competition evoked by a distractor. The aim of the present study was to investigate the impact of stimulus-salience on the time-course of saccade trajectory deviations to get a better insight into how stimulus-salience influences oculomotor competition over time. Two experiments were performed in which participants were required to make a vertical saccade to a target presented in an array of nontarget line elements and one additional distractor. The distractor varied in salience, where salience was defined by an orientation contrast relative to the surrounding nontargets. In Experiment 2, target-distractor similarity was additionally manipulated. In both Experiments 1 and 2, the results revealed that the eyes deviated towards the irrelevant distractor and did so more when the distractor was salient compared to when it was not salient. Critically, salience influenced performance only when people were fast to elicit an eye movement and had no effect when saccade latencies were long. Target-distractor similarity did not influence this pattern. These results show that the impact of salience in the visual system is transient. |
Preeti Verghese Active search for multiple targets is inefficient Journal Article In: Vision Research, vol. 74, pp. 61–71, 2012. @article{Verghese2012, This study examines saccade strategy in a novel task where observers actively search a display to find multiple targets in a limited time. Theory predicts that the relative merit of different saccade strategies depends on the prior probability of the target at a location: when the target prior is low and multiple-target trials are rare, making a saccade to the most likely target location is close to the optimal strategy, but when the target prior is high and multiple-target trials are frequent, selecting uncertain locations is more informative. The prior probability of the target was varied from 0.17 to 0.67 to determine whether observers adjusted their saccade strategies to maximize information. Observers actively searched a noisy display with six potential target locations. Each location had an independent probability of a target, so the number of targets in a trial ranged from 0 to 6. For all target priors ranging from low to high, a trial-by-trial analysis of saccade strategy indicated that observers made saccades to the most likely target location more often than the most uncertain location. Fixating likely locations is efficient only when multiple targets are rare, as in the case of a low target prior, or in the case of the more standard single-target search task. Yet it is the preferred saccade strategy in all our conditions, even when multiple targets are frequent. These findings indicate that humans are far from ideal searchers in multiple-target search. |
Dorine Vergilino-Perez; Christelle Lemoine; Eric Siéroff; Anne Marie Ergis; Redha Bouhired; Emilie Rigault; Karine Doré-Mazars The role of saccade preparation in lateralized word recognition: Evidence for the attentional bias theory Journal Article In: Neuropsychologia, vol. 50, no. 12, pp. 2796–2804, 2012. @article{VergilinoPerez2012a, Words presented to the right visual field (RVF) are recognized more readily than those presented to the left visual field (LVF). Whereas the attentional bias theory proposes an explanation in terms of attentional imbalance between visual fields, the attentional advantage theory assumes that words presented to the RVF are processed automatically while LVF words need attention. In this study, we exploited coupling between attention and saccadic eye movements to orient spatial attention to one or the other visual field. The first experiment compared conditions wherein participants had to remain fixated centrally or had to make a saccade to the visual field in which subsequent verbal stimuli were displayed. The orienting of attention by saccade preparation improved performance in a lexical decision task in both the LVF and the RVF. In the second experiment, participants had to make a saccade either to the visual field where verbal stimuli were presented subsequently or to the opposite side. For RVF as well as for LVF presentation, saccade preparation toward the opposite side decreased performance compared to the same side condition. These results are better explained by the attentional bias theory, and are discussed in the light of a new attentional theory dissociating two major components of attention, namely preparation and selection. |
Petra Vetter; Grace Edwards; Lars Muckli Transfer of predictive signals across saccades Journal Article In: Frontiers in Psychology, vol. 3, pp. 176, 2012. @article{Vetter2012, Predicting visual information facilitates efficient processing of visual signals. Higher visual areas can support the processing of incoming visual information by generating predictive models that are fed back to lower visual areas. Functional brain imaging has previously shown that predictions interact with visual input already at the level of the primary visual cortex (V1; Harrison et al., 2007; Alink et al., 2010). Given that fixation changes up to four times a second in natural viewing conditions, cortical predictions are effective in V1 only if they are fed back in time for the processing of the next stimulus and at the corresponding new retinotopic position. Here, we tested whether spatio-temporal predictions are updated before, during, or shortly after an inter-hemifield saccade is executed, and thus, whether the predictive signal is transferred swiftly across hemifields. Using an apparent motion illusion, we induced an internal motion model that is known to produce a spatio-temporal prediction signal along the apparent motion trace in V1 (Muckli et al., 2005; Alink et al., 2010). We presented participants with both visually predictable and unpredictable targets on the apparent motion trace. During the task, participants saccaded across the illusion whilst detecting the target. As found previously, predictable stimuli were detected more frequently than unpredictable stimuli. Furthermore, we found that the detection advantage of predictable targets is detectable as early as 50-100 ms after saccade offset. This result demonstrates the rapid nature of the transfer of a spatio-temporally precise predictive signal across hemifields, in a paradigm previously shown to modulate V1. |
Benjamin T. Vincent How do we use the past to predict the future in oculomotor search? Journal Article In: Vision Research, vol. 74, pp. 93–101, 2012. @article{Vincent2012, A variety of findings suggest that when conducting visual search, we can exploit cues that are statistically related to a target's location. But is this the result of heuristic mechanisms or an internal model that tracks the statistics of the environment? Here, connections are made between the two explanations, and four models are assessed to probe the mechanisms underlying prediction in search. Participants conducted a simple gaze-contingent search task with five conditions, each of which consists of different combinations of 1st and 2nd order statistics. People's exploration behaviour adapted to the statistical rules governing target behaviour. Behaviour was most consistent with a model that represents transitions from one location to another, and that makes the underlying assumption that the world is dynamic. This assumption that the world is changeable could not be overridden despite task instruction and nearly 1 h of exposure to unchanging statistics. This means that while people may be suboptimal in some experimental contexts, it may be because their internal mental model makes assumptions that are adaptive in a complex, changeable world. |
Zhou Yang; Todd Jackson; Xiao Gao; Hong Chen Identifying selective visual attention biases related to fear of pain by tracking eye movements within a dot-probe paradigm Journal Article In: Pain, vol. 153, no. 8, pp. 1742–1748, 2012. @article{Yang2012d, This research examined selective biases in visual attention related to fear of pain by tracking eye movements (EM) toward pain-related stimuli among the pain-fearful. EM of 21 young adults scoring high on a fear of pain measure (H-FOP) and 20 lower-scoring (L-FOP) control participants were measured during a dot-probe task that featured sensory pain-neutral, health catastrophe-neutral and neutral-neutral word pairs. Analyses indicated that the H-FOP group was more likely to direct immediate visual attention toward sensory pain and health catastrophe words than was the L-FOP group. The H-FOP group also had comparatively shorter first fixation latencies toward sensory pain and health catastrophe words. Conversely, groups did not differ on EM indices of attentional maintenance (i.e., first fixation duration, gaze duration, and average fixation duration) or reaction times to dot probes. Finally, both groups showed a cycle of disengagement followed by re-engagement toward sensory pain words relative to other word types. In sum, this research is the first to reveal biases toward pain stimuli during very early stages of visual information processing among the highly pain-fearful and highlights the utility of EM tracking as a means to evaluate visual attention as a dynamic process in the context of FOP. |
Kenji Yokoi; Katsumi Watanabe; Shinya Saida Rapid and implicit effects of color category on visual search Journal Article In: Optical Review, vol. 19, no. 4, pp. 276–281, 2012. @article{Yokoi2012, Many studies suggest that the color category influences visual perception. It is also well known that oculomotor control and visual attention are closely linked. In order to clarify temporal characteristics of color categorization, we investigated eye movements during color visual search. Eight color disks were presented briefly for 20–320 ms, and the subject was instructed to gaze at a target shown prior to the trial. We found that the color category of the target modulated eye movements significantly when the stimulus was displayed for more than 40 ms and that the categorization could be completed within 80 ms. With the 20 ms presentation, search performance was at a chance level; however, the first saccadic latency suggested that the color category had an effect on visual attention. These results suggest that color categorization affects the guidance of visual attention rapidly and implicitly. |
Gregory J. Zelinsky TAM: Explaining off-object fixations and central fixation tendencies as effects of population averaging during search Journal Article In: Visual Cognition, vol. 20, no. 4-5, pp. 515–545, 2012. @article{Zelinsky2012a, Understanding how patterns are selected for both recognition and action, in the form of an eye movement, is essential to understanding the mechanisms of visual search. It is argued that selecting a pattern for fixation is time consuming, requiring the pruning of a population of possible saccade vectors to isolate the specific movement to the potential target. To support this position, two experiments are reported showing evidence for off-object fixations, where fixations land between objects rather than directly on objects, and central fixations, where initial saccades land near the center of scenes. Both behaviors were modeled successfully using TAM (Target Acquisition Model; Zelinsky, 2008). TAM interprets these behaviors as expressions of population averaging occurring at different times during saccade target selection. A large population early during search results in the averaging of the entire scene and a central fixation; a smaller population later during search results in averaging between groups of objects and off-object fixations. |
Gregory J. Zelinsky; Yifan Peng; Alexander C. Berg; Dimitris Samaras Modeling guidance and recognition in categorical search: Bridging human and computer object detection Journal Article In: Journal of Vision, vol. 12, no. 9, pp. 957–957, 2012. @article{Zelinsky2012, Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery. |
Hang Zhang; Camille Morvan; Louis Alexandre Etezad-Heydari; Laurence T. Maloney Very slow search and reach: Failure to maximize expected gain in an eye-hand coordination task Journal Article In: PLoS Computational Biology, vol. 8, no. 10, pp. e1002718, 2012. @article{Zhang2012a, We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. |
Jun-Yun Zhang; Gong-Liang Zhang; Lei Liu; Cong Yu Whole report uncovers correctly identified but incorrectly placed target information under visual crowding Journal Article In: Journal of Vision, vol. 12, no. 7, pp. 1–11, 2012. @article{Zhang2012b, Multiletter identification studies often find correctly identified letters being reported in wrong positions. However, how position uncertainty impacts crowding in peripheral vision is not fully understood. The observation of a flanker being reported as the central target cannot be taken as unequivocal evidence for position misperception because the observers could be biased to report a more identifiable flanker when failing to identify the central target. In addition, it has never been reported whether a correctly identified central target can be perceived at a flanker position under crowding. Empirical investigation into this possibility holds the key to demonstrating letter-level position uncertainty in crowding, because the position errors of the least identifiable central target cannot be attributed to response bias. We asked normally-sighted observers to report either the central target of a trigram (partial report) or all three characters (whole report). The results showed that, for radially arranged trigrams, the rate of reporting the central target regardless of the reported position in the whole report was significantly higher than the partial report rate, and the extra target reports mostly ended up in flanker positions. Error analysis indicated that target-flanker position swapping and misalignment (lateral shift of the target and one flanker) underlay this target misplacement. Our results thus establish target misplacement as a source of crowding errors and ascertain the role of letter-level position uncertainty in crowding. |
Gu Zhao; Qiang Liu; Jun Jiao; Peiling Zhou; Hong Li; Hong-jin Sun Dual-state modulation of the contextual cueing effect: Evidence from eye movement recordings Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 11–11, 2012. @article{Zhao2012, Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the use of the RT × Set Size function. There are divergent views on whether the contextual cueing effect is driven by attentional guidance, facilitation of initial perceptual processing, or response selection. To explore this question, we used eye movement recording in this study, which offers information about the substages of the search task. The results suggest that the contextual cueing effect is contributed mainly by attentional guidance, and that facilitation of response selection also plays a role. |
Shuo Wang; Masaki Fukuchi; Christof Koch; Naotsugu Tsuchiya Spatial attention is attracted in a sustained fashion toward singular points in the optic flow Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e41040, 2012. @article{Wang2012g, While a single approaching object is known to attract spatial attention, it is unknown how attention is directed when the background looms towards the observer as s/he moves forward in a quasi-stationary environment. In Experiment 1, we used a cued speeded discrimination task to quantify where and how spatial attention is directed towards the target superimposed onto a cloud of moving dots. We found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The effects were less pronounced when the motion was contractive. The more ecologically valid the motion features became (e.g., temporal expansion of each dot, spatial depth structure implied by distribution of the size of the dots), the stronger the attentional effects. Further, the attentional effects were sustained over 1000 ms. Experiment 2 quantified these attentional effects using a change detection paradigm by zooming into or out of photographs of natural scenes. Spatial attention was attracted in a sustained manner such that change detection was facilitated or delayed depending on the location of the FOE only when the motion was expansive. Our results suggest that focal attention is strongly attracted towards singular points that signal the direction of forward ego-motion. |
Zhiguo Wang; Raymond M. Klein Focal spatial attention can eliminate inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 3, pp. 462–469, 2012. @article{Wang2012, Inhibition of return (IOR) is an orienting phenomenon characterized by slower responses to spatially cued than to uncued targets. In Experiment 1, a physically small digit that required identification was presented immediately following a peripheral cue. The digit could appear in the cued peripheral box or in the central box, thus guaranteeing a saccadic response to the cue in one condition and maintenance of fixation in the other. An IOR effect was observed when a saccadic response to the cue was required, but IOR was not generated by the peripheral cue when fixation was maintained in order to process the central digit. In Experiment 2, IOR effects were observed when participants were instructed to ignore the digits, whether those digits were presented in the periphery or at fixation. These findings suggest that behaviorally manifested, cue-induced IOR effects can be eliminated by focal spatial attentional control settings. |
Zhiguo Wang; Jason Satel; Matthew D. Hilchey; Raymond M. Klein Averaging saccades are repelled by prior uninformative cues at both short and long intervals Journal Article In: Visual Cognition, vol. 20, no. 7, pp. 825–847, 2012. @article{Wang2012h, When two spatially proximal stimuli are presented simultaneously, a first saccade is often directed to an intermediate location between the stimuli (averaging saccade). In an earlier study, Watanabe (2001) showed that, at a long cue–target onset asynchrony (CTOA; 600 ms), uninformative cues not only slowed saccadic response times (SRTs) to targets presented at the cued location in single target trials (inhibition of return, IOR), but also biased averaging saccades away from the cue in double target trials. The present study replicated Watanabe's experimental task with a short CTOA (50 ms), as well as with mixed short (50 ms) and long (600 ms) CTOAs. In all conditions on double target trials, uninformative cues robustly biased averaging saccades away from cued locations. Although SRTs on single target trials were delayed at previously cued locations at both CTOAs when they were mixed, this delay was not observed in the blocked, short CTOA condition. We suggest that top-down factors, such as expectation and attentional control settings, may have asymmetric effects on the temporal and spatial dynamics of oculomotor processing. |
Zhiguo Wang; Jan Theeuwes Dissociable Spatial and Temporal Effects of Inhibition of Return Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e44290, 2012. @article{Wang2012b, Inhibition of return (IOR) refers to the relative suppression of processing at locations that have recently been attended. It is frequently explored using a spatial cueing paradigm and is characterized by slower responses to cued than to uncued locations. The current study investigates the impact of IOR on overt visual orienting involving saccadic eye movements. Using a spatial cueing paradigm, our experiments have demonstrated that at a cue-target onset asynchrony (CTOA) of 400 ms saccades to the vicinity of cued locations are not only delayed (temporal cost) but also biased away (spatial effect). Both of these effects are basically no longer present at a CTOA of 1200 ms. At a shorter 200 ms CTOA, the spatial effect becomes stronger while the temporal cost is replaced by a temporal benefit. These findings suggest that IOR has a spatial effect that is dissociable from its temporal effect. Simulations using a neural field model of the superior colliculus (SC) revealed that a theory relying on short-term depression (STD) of the input pathway can explain most, but not all, temporal and spatial effects of IOR. |
Paul A. Warren; Simon K. Rushton; Andrew J. Foulkes Does optic flow parsing depend on prior estimation of heading? Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 8–8, 2012. @article{Warren2012, We have recently suggested that neural flow parsing mechanisms act to subtract global optic flow consistent with observer movement to aid in detecting and assessing scene-relative object movement. Here, we examine whether flow parsing can occur independently from heading estimation. To address this question we used stimuli comprising two superimposed optic flow fields comprising limited lifetime dots (one planar and one radial). This stimulus gives rise to the so-called optic flow illusion (OFI) in which perceived heading is biased in the direction of the planar flow field. Observers were asked to report the perceived direction of motion of a probe object placed in the OFI stimulus. If flow parsing depends upon a prior estimate of heading then the perceived trajectory should reflect global subtraction of a field consistent with the heading experienced under the OFI. In Experiment 1 we tested this prediction directly, finding instead that the perceived trajectory was biased markedly in the direction opposite to that predicted under the OFI. In Experiment 2 we demonstrate that the results of Experiment 1 are consistent with a positively weighted vector sum of the effects seen when viewing the probe together with individual radial and planar flow fields. These results suggest that flow parsing is not necessarily dependent on prior estimation of heading direction. We discuss the implications of this finding for our understanding of the mechanisms of flow parsing. |
Matthew David Weaver; Dane Aronsen; Johan Lauwereyns A short-lived face alert during inhibition of return Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 510–520, 2012. @article{Weaver2012, In the present study, we explored the role of faces in oculomotor inhibition of return (IOR) using a tightly controlled spatial cuing paradigm. We measured saccadic response latency to targets following peripheral cues that were either faces or objects of lesser sociobiological salience. A recurring influence from cue content was observed across numerous methodological variations. Faces versus other object cues briefly reduced saccade latencies toward subsequently presented targets, independently of attentional allocation and IOR. The results suggest a short-lived priming effect or social facilitation effect from the mere presence of a face. In the present study, we further showed that saccadic responses were unaffected by face versus nonface objects in double-cue presentations. Our findings indicate that peripheral face cues do not influence attentional orienting processes involved in IOR any differently from other objects in a tightly controlled oculomotor IOR paradigm. |
Yang Zhou; Yining Liu; Wangzikang Zhang; Mingsha Zhang Asymmetric influence of egocentric representation onto allocentric perception Journal Article In: Journal of Neuroscience, vol. 32, no. 24, pp. 8354–8360, 2012. @article{Zhou2012, Objects in the visual world can be represented in both egocentric and allocentric coordinates. Previous studies have found that allocentric representation can affect the accuracy of spatial judgment relative to an egocentric frame, but not vice versa. Here we asked whether egocentric representation influenced the processing speed of allocentric perception. We measured the manual reaction time of human subjects in a position discrimination task in which the behavioral response purely relied on the target's allocentric location, independent of its egocentric position. We used two conditions of stimulus location: the compatible condition-allocentric left and egocentric left or allocentric right and egocentric right; the incompatible condition-allocentric left and egocentric right or allocentric right and egocentric left. We found that egocentric representation markedly influenced allocentric perception in three ways. First, in a given egocentric location, allocentric perception was significantly faster in the compatible condition than in the incompatible condition. Second, as the target became more eccentric in the visual field, the speed of allocentric perception gradually slowed down in the incompatible condition but remained unchanged in the compatible condition. Third, egocentric-allocentric incompatibility slowed allocentric perception more in the left egocentric side than the right egocentric side. These results cannot be explained by interhemispheric visuomotor transformation and stimulus-response compatibility theory. Our findings indicate that each hemisphere preferentially processes and integrates the contralateral egocentric and allocentric spatial information, and the right hemisphere receives more ipsilateral egocentric inputs than the left hemisphere does. |
Ariel Zylberberg; Manuel Oliva; Mariano Sigman Pupil dilation: A fingerprint of temporal selection during the "Attentional Blink" Journal Article In: Frontiers in Psychology, vol. 3, pp. 316, 2012. @article{Zylberberg2012a, Pupil dilation indexes cognitive events of behavioral relevance, like the storage of information to memory and the deployment of attention. Yet, given the slow temporal response of the pupil dilation, it is not known from previous studies whether the pupil can index cognitive events in the short time scale of ∼100 ms. Here we measured the size of the pupil in the Attentional Blink (AB) experiment, a classic demonstration of attentional limitations in processing rapidly presented stimuli. In the AB, two targets embedded in a sequence have to be reported and the second stimulus is often missed if presented between 200 and 500 ms after the first. We show that pupil dilation can be used as a marker of cognitive processing in AB, revealing both the timing and amount of cognitive processing. Specifically, we found that in the time range where the AB is known to occur: (i) the pupil dilation was delayed, mimicking the pattern of response times in the Psychological Refractory Period (PRP) paradigm, (ii) the amplitude of the pupil was reduced relative to that of larger lags, even for correctly identified targets, and (iii) the amplitude of the pupil was smaller for missed than for correctly reported targets. These results support two-stage theories of the Attentional Blink where a second processing stage is delayed inside the interference regime, and indicate that the pupil dilation can be used as a marker of cognitive processing in the time scale of ∼100 ms. Furthermore, given the known relation between the pupil dilation and the activity of the locus coeruleus, our results also support theories that link the serial stage to the action of a specific neuromodulator, norepinephrine. |
Ariel Zylberberg; Pablo Barttfeld; Mariano Sigman The construction of confidence in a perceptual decision Journal Article In: Frontiers in Integrative Neuroscience, vol. 6, pp. 79, 2012. @article{Zylberberg2012, Decision-making involves the selection of one out of many possible courses of action. A decision may bear on other decisions, as when humans seek a second medical opinion before undergoing a risky surgical intervention. These "meta-decisions" are mediated by confidence judgments-the degree to which decision-makers consider that a choice is likely to be correct. We studied how subjective confidence is constructed from noisy sensory evidence. The psychophysical kernels used to convert sensory information into choice and confidence decisions were precisely reconstructed measuring the impact of small fluctuations in sensory input. This is shown in two independent experiments in which human participants made a decision about the direction of motion of a set of randomly moving dots, or compared the brightness of a group of fluctuating bars, followed by a confidence report. The results of both experiments converged to show that: (1) confidence was influenced by evidence during a short window of time at the initial moments of the decision, and (2) confidence was influenced by evidence for the selected choice but was virtually blind to evidence for the non-selected choice. Our findings challenge classical models of subjective confidence-which posit that the difference of evidence in favor of each choice is the seed of the confidence signal. |
Jan Zwickel; Mathias Hegele; Marc Grosjean Ocular tracking of biological and nonbiological motion: The effect of instructed agency Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 1, pp. 52–57, 2012. @article{Zwickel2012, Recent findings suggest that visuomotor performance is modulated by people's beliefs about the agency (e.g., animate vs. inanimate) behind the events they perceive. This study investigated the effect of instructed agency on ocular tracking of point-light motions with biological and nonbiological velocity profiles. The motions followed either a relatively simple (ellipse) or a more complex (scribble) trajectory, and agency was manipulated by informing the participants that the motions they saw were either human or computer generated. In line with previous findings, tracking performance was better for biological than for nonbiological motions, and this effect was particularly pronounced for the simpler (elliptical) motions. The biological advantage was also larger for the human than for the computer instruction condition, but only for a measure that captured the predictive component of smooth pursuit. These results suggest that ocular tracking is influenced by the internal forward model people choose to adopt. |
Wietske Zuiderbaan; Ben M. Harvey; Serge O. Dumoulin Modeling center – surround configurations in population receptive fields using fMRI Journal Article In: Journal of Vision, vol. 12, no. 3, pp. 1–15, 2012. @article{Zuiderbaan2012, Antagonistic center–surround configurations are a central organizational principle of our visual system. In visual cortex, stimulation outside the classical receptive field can decrease neural activity and also decrease functional Magnetic Resonance Imaging (fMRI) signal amplitudes. Decreased fMRI amplitudes below baseline—0% contrast—are often referred to as “negative” responses. Using neural model-based fMRI data analyses, we can estimate the region of visual space to which each cortical location responds, i.e., the population receptive field (pRF). Current models of the pRF do not account for a center–surround organization or negative fMRI responses. Here, we extend the pRF model by adding surround suppression. Where the conventional model uses a circular symmetric Gaussian function to describe the pRF, the new model uses a circular symmetric difference-of-Gaussians (DoG) function. The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the models, an increased variance explained is found for the DoG model. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. The improvement of the fits was particularly striking in the parts of the fMRI signal below baseline. Estimates for the surround size of the pRF show an increase with eccentricity and over visual areas V1/2/3. For the suppression index, which is based on the ratio between the volumes of both Gaussians, we show a decrease over visual areas V1 and V2. Using non-invasive fMRI techniques, this method gives the possibility to examine assumptions about center–surround receptive fields in human subjects. |
Heng Zou; Hermann J. Müller; Zhuanghua Shi Non-spatial sounds regulate eye movements and enhance visual search Journal Article In: Journal of Vision, vol. 12, no. 5, pp. 2–2, 2012. @article{Zou2012, Spatially uninformative sounds can enhance visual search when the sounds are synchronized with color changes of the visual target, a phenomenon referred to as "pip-and-pop" effect (van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008). The present study investigated the relationship of this effect to changes in oculomotor scanning behavior induced by the sounds. The results revealed sound events to increase fixation durations upon their occurrence and to decrease the mean number of saccades. More specifically, spatially uninformative sounds facilitated the orientation of ocular scanning away from already scanned display regions not containing a target (Experiment 1) and enhanced search performance even on target-absent trials (Experiment 2). Facilitation was also observed when the sounds were presented 100 ms prior to the target or at random (Experiment 3). These findings suggest that non-spatial sounds cause a general freezing effect on oculomotor scanning behavior, an effect which in turn benefits visual search performance by temporally and spatially extended information sampling. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Visual motion distorts visual and motor space Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 10–10, 2012. @article{Zimmermann2012, Much evidence suggests that visual motion can cause severe distortions in the perception of spatial position. In this study, we show that visual motion also distorts saccadic eye movements. Landing positions of saccades performed to objects presented in the vicinity of visual motion were biased in the direction of motion. The targeting errors for both saccades and perceptual reports were maximum during motion onset and were of very similar magnitude under the two conditions. These results suggest that visual motion affects a representation of spatial position, or spatial map, in a similar fashion for visuomotor action as for perception. |
Hans A. Trukenbrod; Ralf Engbert Eye movements in a sequential scanning task: Evidence for distributed processing Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–12, 2012. @article{Trukenbrod2012, Current models of eye movement control are derived from theories assuming serial processing of single items or from theories based on parallel processing of multiple items at a time. This issue has persisted because most investigated paradigms generated data compatible with both serial and parallel models. Here, we study eye movements in a sequential scanning task, where stimulus n indicates the position of the next stimulus n + 1. We investigate whether eye movements are controlled by sequential attention shifts when the task requires serial order of processing. Our measures of distributed processing in the form of parafoveal-on-foveal effects, long-range modulations of target selection, and skipping saccades provide evidence against models strictly based on serial attention shifts. We conclude that our results lend support to parallel processing as a strategy for eye movement control. |
Marco Turi; David C. Burr Spatiotopic perceptual maps in humans: Evidence from motion adaptation Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 279, no. 1740, pp. 3091–3097, 2012. @article{Turi2012, How our perceptual experience of the world remains stable and continuous despite the eye movements that frequently reposition gaze remains very much a mystery. One possibility is that our brain actively constructs a spatiotopic representation of the world, which is anchored in external–or at least head-centred–coordinates. In this study, we show that the positional motion aftereffect (the change in apparent position after adaptation to motion) is spatially selective in external rather than retinal coordinates, whereas the classic motion aftereffect (the illusion of motion after prolonged inspection of a moving source) is selective in retinotopic coordinates. The results provide clear evidence for a spatiotopic map in humans: one which can be influenced by image motion. |
Yusuke Uchida; Daisuke Kudoh; Akira Murakami; Masaaki Honda; Shigeru Kitazawa Origins of superior dynamic visual acuity in baseball players: Superior eye movements or superior image processing Journal Article In: PLoS ONE, vol. 7, no. 2, pp. e31530, 2012. @article{Uchida2012, Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in athletes than in non-athletes, and the better DVA of athletes has been attributed to a better ability to track moving objects. In the present study, we hypothesized that the better DVA of athletes is partly derived from better perception of moving images on the retina through some kind of perceptual learning. To test this hypothesis, we quantitatively measured DVA in baseball players and non-athletes using moving Landolt rings in two conditions. In the first experiment, the participants were allowed to move their eyes (free-eye-movement conditions), whereas in the second they were required to fixate on a fixation target (fixation conditions). The athletes displayed significantly better DVA than the non-athletes in the free-eye-movement conditions. However, there was no significant difference between the groups in the fixation conditions. These results suggest that the better DVA of athletes is primarily due to an improved ability to track moving targets with their eyes, rather than to improved perception of moving images on the retina. |
Yoshiyuki Ueda; Asuka Komiya Cultural adaptation of visual attention: Calibration of the oculomotor control system in accordance with cultural scenes Journal Article In: PLoS ONE, vol. 7, no. 11, pp. e50282, 2012. @article{Ueda2012a, Previous studies have found that Westerners are more likely than East Asians to attend to central objects (i.e., analytic attention), whereas East Asians are more likely than Westerners to focus on background objects or context (i.e., holistic attention). Recently, it has been proposed that the physical environment of a given culture influences the cultural form of scene cognition, although the underlying mechanism is yet unclear. This study examined whether the physical environment influences oculomotor control. Participants saw culturally neutral stimuli (e.g., a dog in a park) as a baseline, followed by Japanese or United States scenes, and finally culturally neutral stimuli again. The results showed that participants primed with Japanese scenes were more likely to move their eyes within a broader area and they were less likely to fixate on central objects compared with the baseline, whereas there were no significant differences in the eye movements of participants primed with American scenes. These results suggest that culturally specific patterns in eye movements are partly caused by the physical environment. |
Yoshiyuki Ueda; Jun Saiki Characteristics of eye movements in 3-D object learning: Comparison between within-modal and cross-modal object recognition Journal Article In: Perception, vol. 41, no. 11, pp. 1289–1298, 2012. @article{Ueda2012, Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning - visual recognition) and cross-modal learning (eg visual learning - haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances. |
Sabine Born; Ulrich Ansorge; Dirk Kerzel Feature-based effects in the coupling between attention and saccades Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 1–17, 2012. @article{Born2012, Previous research has demonstrated that prior to saccade execution visual attention is imperatively shifted towards the saccade target (e.g., Deubel & Schneider, 1996; Kowler, Anderson, Dosher, & Blaser, 1995). Typically, observers had to make a saccade according to an arrow cue and simultaneously perform a perceptual discrimination task either at the saccade endpoint or elsewhere on the screen. Discrimination performance was poor if the location of the saccade target (ST) and the discrimination target (DT) did not coincide. However, those experiments only investigated shifts of spatial attention. In the current experiments, we examined how feature-based attention is deployed before a saccade. In Experiment 1, we randomly varied the colors of the ST and DT. Results showed that discrimination performance was better when the DT was shown in the same color as the ST. This color congruency effect was slightly larger and more reliable when ST color was relevant and constant across trials (Experiment 2). We conclude that selection of a colored ST can induce display-wide facilitative processing of stimuli sharing this color. Results are discussed in terms of saccade programming and saccade selection, color priming in visual search, color cuing, and color-based top-down contingent attentional capture. We also discuss basic mechanisms of spatial- and feature-based attention and predictive remapping of visual information across saccades. |
Gianfranco Bosco; Sergio Delle Monache; Francesco Lacquaniti Catching what we can't see: Manual interception of occluded fly-ball trajectories Journal Article In: PLoS ONE, vol. 7, no. 11, pp. e49381, 2012. @article{Bosco2012, Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to elaborate a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g) at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different types of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a priori knowledge of gravity. Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories. |
Jeffrey D. Bower; Zheng Bian; George J. Andersen Effects of retinal eccentricity and acuity on global-motion processing Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 942–949, 2012. @article{Bower2012, The present study assessed direction discrimination with moving random-dot cinematograms at retinal eccentricities of 0, 8, 22, and 40 deg. In addition, Landolt-C acuity was assessed at these eccentricities to determine whether changes in motion discrimination performance covaried with acuity in the retinal periphery. The results of the experiment indicated that discrimination thresholds increased with retinal eccentricity and directional variance (noise), independent of acuity. Psychophysical modeling indicated that the results for eccentricity and noise could be explained by an increase in channel bandwidth and an increase in internal multiplicative noise. |
Alison C. Bowling; Emily A. Hindman; James F. Donnelly Prosaccade errors in the antisaccade task: Differences between corrected and uncorrected errors and links to neuropsychological tests Journal Article In: Experimental Brain Research, vol. 216, no. 2, pp. 169–179, 2012. @article{Bowling2012, The relations among spatial memory, Stroop-like colour-word subtests, and errors on antisaccade and memory-guided saccadic eye-movement trials for older and younger adults were tested. Two types of errors in the antisaccade task were identified: short latency prosaccade errors that were immediately corrected and longer latency uncorrected prosaccade errors. The age groups did not differ on percentages of either corrected or uncorrected errors, but the latency and time to correct prosaccade errors were shorter for younger than older adults. Uncorrected prosaccade errors correlated significantly with spatial memory accuracy and errors on the colour-word subtests, but neither of these neuropsychological indices correlated with corrected prosaccade errors. These findings suggest that uncorrected prosaccade errors may be a result of cognitive factors involving a failure to maintain the goal of the antisaccade task in working memory. In contrast, corrected errors may be a consequence of a fixation system involving an initial failure to inhibit a reflexive prosaccade but with active goal maintenance enabling correction to take place. |
Joseph L. Brooks; Sharon Gilaie-Dotan; Geraint Rees; Shlomo Bentin; Jon Driver Preserved local but disrupted contextual figure-ground influences in an individual with abnormal function of intermediate visual areas Journal Article In: Neuropsychologia, vol. 50, no. 7, pp. 1393–1407, 2012. @article{Brooks2012, Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. |
Pernille Bruhn; Claus Bundesen Anticipation of visual form independent of knowing where the form will occur Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 930–941, 2012. @article{Bruhn2012, We investigated how selective preparation for specific forms is affected by concurrent preknowledge of location when upcoming visual stimuli are anticipated. In three experiments, participants performed a two-choice response time (RT) task in which they discriminated between standard upright and rotated alphanumeric characters while fixating a central fixation cross. In different conditions, we gave the participants preknowledge of only form, only location, both location and form, or neither location nor form. We found main effects of both preknowledge of form and preknowledge of location, with significantly lower RTs when preknowledge was present than when it was absent. Our main finding was that the two factors had additive effects on RTs. A strong interaction between the two factors, such that preknowledge of form had little or no effect without preknowledge of location, would have supported the hypothesis that form anticipation relies on depictive, perception-like activations in topographically organized parts of the visual cortex. The results provided no support for this hypothesis. On the other hand, by additive-factors logic (Sternberg, Acta Psychologica 30:276–315, 1969), the additivity of our effects suggested that preknowledge of form and location, respectively, affected two functionally independent, serial stages of processing. We suggest that the two stages were, first, direction of attention to the stimulus location and, subsequently, discrimination between upright and rotated stimuli. Presumably, preknowledge of location advanced the point in time at which attention was directed at the stimulus location, whereas preknowledge of form reduced the time subsequently taken for stimulus discrimination. |
Aneta Brzezicka; Izabela Krejtz; Ulrich Hecker; Jochen Laubrock Eye movement evidence for defocused attention in dysphoria: A perceptual span analysis Journal Article In: International Journal of Psychophysiology, vol. 85, no. 1, pp. 129–133, 2012. @article{Brzezicka2012, The defocused attention hypothesis (von Hecker and Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews and Thompson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during reading of sentences. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more pronouncedly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of attentional focus. |
Julie N. Buchan; Kevin G. Munhall The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information Journal Article In: Seeing and Perceiving, vol. 25, no. 1, pp. 87–106, 2012. @article{Buchan2012, Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task. |
Simona Buetti; Elsa Juan; Mike Rinck; Dirk Kerzel Affective states leak into movement execution: Automatic avoidance of threatening stimuli in fear of spider is visible in reach trajectories Journal Article In: Cognition and Emotion, vol. 26, no. 7, pp. 1176–1188, 2012. @article{Buetti2012, Approach-like actions are initiated faster with stimuli of positive valence. Conversely, avoidance-like actions are initiated faster with threatening stimuli of negative valence. We went beyond reaction time measures and investigated whether threatening stimuli also affect the way in which an action is carried out. Participants moved their hand either away from the picture of a spider (avoidance) or they moved their hand toward the picture of a spider (approach). We compared spider-fearful participants to non-anxious participants. When reaching away from the threatening spider picture, spider-fearful participants moved more directly to the target than controls. When reaching toward the threatening spider, spider-fearful participants moved less directly to the target than controls. Some conditions that showed clear differences in movement trajectories between spider-fearful and control participants were devoid of differences in reaction time. The deviation away from threatening stimuli provides evidence for the claim that affective states like fear leak into movement programming and produce deviations away from threatening stimuli in movement execution. Avoidance of threatening stimuli is rapidly integrated into ongoing motor behaviour in order to increase the distance between the participant's body and the threatening stimulus. |
Melanie R. Burke; Richard J. Allen; Claudia C. Gonzalez Eye and hand movements during reconstruction of spatial memory Journal Article In: Perception, vol. 41, no. 7, pp. 803–818, 2012. @article{Burke2012, Recent behavioural and biological evidence indicates common mechanisms serving working memory and attention (eg Awh et al, 2006 Neuroscience 139 201–208). This study explored the role of spatial attention and visual search in an adapted Corsi spatial memory task. Eye movements and touch responses were recorded from participants who recalled locations (signalled by colour or shape change) from an array presented either simultaneously or sequentially. The time delay between target presentation and recall (0, 5, or 10 s) and the number of locations to be remembered (2–5) were also manipulated. Analysis of the response phase revealed subjects were less accurate (touch data) and fixated longer (eye data) when responding to sequentially presented targets, suggesting higher cognitive effort. Fixation duration on target at recall was also influenced by whether spatial location was initially signalled by colour or shape change. Finally, we found that the sequence tasks encouraged longer fixations on the signalled targets than simultaneous viewing during encoding, but no difference was observed during recall. We conclude that the attentional manipulations (colour/shape) mainly affected the eye movement parameters, whereas the memory manipulation (sequential versus simultaneous, number of items) mainly affected the performance of the hand during recall, and thus the latter is more important for ascertaining if an item is remembered or forgotten. In summary, the nature of the stimuli used and how they are presented play key roles in determining subject performance and behaviour during spatial memory tasks. |
Manuel G. Calvo; Aida Gutiérrez-García; Andrés Fernández-Martín Anxiety and deficient inhibition of threat distractors: Spatial attention span and time course Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 1, pp. 66–78, 2012. @article{Calvo2012, We investigated whether anxiety facilitates detection of threat stimuli outside the focus of overt attention, and the time course of the interference produced by threat distractors. Threat or neutral word distractors were presented in attended (foveal) and unattended (parafoveal) locations followed by an unrelated probe word at 300 ms (Experiments 1 and 2) or 1000 ms (Experiment 2) stimulus-onset asynchrony (SOA) in a lexical decision task. Results showed: (1) no effects of trait anxiety on selective saccades to the parafoveal threat distractors; (2) interference with probe processing (i.e., slowed lexical decision times) following a foveal threat distractor at 300 ms SOA for all participants, regardless of anxiety, but only for high-anxiety participants at 1000 ms SOA; and (3) no interference effects of parafoveal threat distractors. These findings suggest that anxiety does not enhance preattentive semantic processing of threat words. Rather, anxiety leads to delays in the inhibitory control of attended task-irrelevant threat stimuli. |
Céline Cavézian; Derick Valadao; Marc Hurwitz; Mohamed Saoud; James Danckert Finding centre: Ocular and fMRI investigations of bisection and landmark task performance Journal Article In: Brain Research, vol. 1437, pp. 89–103, 2012. @article{Cavezian2012, The line bisection task is used as a bedside test of spatial neglect patients who typically bisect lines to the right of true centre. To disambiguate the contribution of perceptual from motor biases in bisection, previous research has used the landmark task in which participants determine whether a transection mark is left or right of centre. One recent study, using stimuli that reliably lead to leftward perceptual biases in healthy individuals, found that ocular judgements of centre were biased to the right of centre, whereas manual bisections were biased leftwards. Here we used behavioural measures and functional MRI in healthy individuals to investigate ocular and perceptual judgements of centre. Ocular judgements were made by having participants fixate the centre of a horizontal bar that was dark at one end and light at the other (i.e., a 'greyscale' stimulus), whereas perceptual responses were made by having participants indicate whether a transection mark on the greyscales stimuli was to the left or right of centre. Behavioural data indicated a leftward bias in the first, second and longest fixations for bisection. Moreover, greyscale orientation (i.e., dark extremity to the right or to the left) and stimulus position modulated fixations. In contrast, for the landmark task, initial fixations were attracted towards the transection mark, whereas subsequent fixations were closer to veridical centre. Imaging data showed a large bilateral network, including superior parietal and lingual cortex, that was active for bisection. The landmark task activated a predominantly right hemisphere network including superior and inferior parietal cortices. Taken together, these results indicate that very different strategies and underlying neural networks are invoked by the bisection and landmark tasks. |
Jessica P. K. Chan; Jennifer D. Ryan Holistic representations of internal and external face features are used to support recognition Journal Article In: Frontiers in Psychology, vol. 3, pp. 87, 2012. @article{Chan2012, Face recognition is impaired when changes are made to external face features (e.g., hairstyle), even when all internal features (i.e., eyes, nose, mouth) remain the same. Eye movement monitoring was used to determine the extent to which altered hairstyles affect processing of face features, thereby shedding light on how internal and external features are stored in memory. Participants studied a series of faces, followed by a recognition test in which novel, repeated, and manipulated (altered hairstyle) faces were presented. Recognition was higher for repeated than manipulated faces. Although eye movement patterns distinguished repeated from novel faces, viewing of manipulated faces was similar to that of novel faces. Internal and external features may be stored together as one unit in memory; consequently, changing even a single feature alters processing of the other features and disrupts recognition. |
E. Charles Leek; Candy Patterson; Matthew A. Paul; Robert D. Rafal; Filipe Cristino Eye movements during object recognition in visual agnosia Journal Article In: Neuropsychologia, vol. 50, no. 9, pp. 2142–2153, 2012. @article{CharlesLeek2012, This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. |
Lijing Chen; Xingshan Li; Yufang Yang Focus, newness and their combination: Processing of information structure in discourse Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e42533, 2012. @article{Chen2012a, The relationship between focus and new information has been unclear despite being the subject of several information structure studies. Here, we report an eye-tracking experiment that explored the relationship between them in on-line discourse processing in Chinese reading. Focus was marked by the Chinese focus-particle "shi", which is equivalent to the cleft structure "it was... who..." in English. New information was defined as the target word that was not present in previous contexts. Our results show that, in the target region, focused information was processed more quickly than non-focused information, while new information was processed more slowly than given information. These results reveal differences in processing patterns between focus and newness, and suggest that they are different concepts that relate to different aspects of cognitive processing. In addition, the effect of new/given information occurred in the post-target region for the focus condition, but not for the non-focus condition, suggesting a complex relationship between focus and newness in the discourse integration stage. |
Q. Chen; Ralph Weidner; Peter H. Weiss; John C. Marshall; Gereon R. Fink Neural interaction between spatial domain and spatial reference frame in parietal-occipital junction Journal Article In: Journal of Cognitive Neuroscience, vol. 24, no. 11, pp. 2223–2236, 2012. @article{Chen2012, On the basis of double dissociations in clinical symptoms of patients with unilateral visuospatial neglect, neuropsychological research distinguishes between different spatial domains (near vs. far) and different spatial reference frames (egocentric vs. allocentric). In this fMRI study, we investigated the neural interaction between spatial domains and spatial reference frames by constructing a virtual three-dimensional world and asking participants to perform either allocentric or egocentric judgments on an object located in either near or far space. Our results suggest that the parietal-occipital junction (POJ) not only shows a preference for near-space processing but is also involved in the neural interaction between spatial domains and spatial reference frames. Two dissociable streams of visual processing exist in the human brain: a ventral perception-related stream and a dorsal action-related stream. Consistent with the perception-action model, both far-space processing and allocentric judgments draw upon the ventral stream whereas both near-space processing and egocentric judgments draw upon the dorsal stream. POJ showed higher neural activity during allocentric judgments (ventral) in near space (dorsal) and egocentric judgments (dorsal) in far space (ventral) as compared with egocentric judgments (dorsal) in near space (dorsal) and allocentric judgments (ventral) in far space (ventral). Because representations in the dorsal and ventral streams need to interact during allocentric judgments (ventral) in near space (dorsal) and egocentric judgments (dorsal) in far space (ventral), our results imply that POJ is involved in the neural interaction between the two streams. Further evidence for the suggested role of POJ as a neural interface between the dorsal and ventral streams is provided by functional connectivity analysis. |
Selmaan Chettih; Frank H. Durgin; Daniel J. Grodner Mixing metaphors in the cerebral hemispheres: What happens when careers collide? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 2, pp. 295–311, 2012. @article{Chettih2012, Are processes of figurative comparison and figurative categorization different? An experiment combining alternative-sense and matched-sense metaphor priming with a divided visual field assessment technique sought to isolate processes of comparison and categorization in the 2 cerebral hemispheres. For target metaphors presented in the right visual field/left cerebral hemisphere (RVF/LH), only matched-sense primes were facilitative. Literal primes and alternative-sense primes had no effect on comprehension time compared to the unprimed baseline. The effects of matched-sense primes were additive with the rated conventionality of the targets. For target metaphors presented to the left visual field/right cerebral hemisphere (LVF/RH), matched-sense primes were again additively facilitative. However, alternative-sense primes, though facilitative overall, seemed to eliminate the preexisting advantages of conventional target metaphor senses in the LVF/RH in favor of metaphoric senses similar to those of the primes. These findings are consistent with tightly controlled categorical coding in the LH and coarse, flexible, context-dependent coding in the RH. |
Joseph D. Chisholm; Alan Kingstone Improved top-down control reduces oculomotor capture: The case of action video game players Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 2, pp. 257–262, 2012. @article{Chisholm2012, Action video game players (AVGPs) have been demonstrated to outperform non-video-game players (NVGPs) on a range of cognitive tasks. Evidence to date suggests that AVGPs' enhanced performance in attention-based tasks can be accounted for by improved top-down control over the allocation of visuospatial attention. Thus, we propose that AVGPs provide a population that can be used to investigate the role of top-down factors in key models of attention. Previous work using AVGPs has indicated that they experience fewer interfering effects from a salient but task-irrelevant distractor in an attentional capture paradigm (Chisholm, Hickey, Theeuwes, & Kingstone, 2010). Two fundamentally different bottom-up and top-down models of attention can account for this result. In the present study, we compared AVGP and NVGP performance in an oculomotor capture paradigm to address when and how top-down control modulates capture. In tracking eye movements, we acquired an explicit measurement of attention allocation and replicated the covert attention effect that AVGPs are quicker than NVGPs to attend to a target in the presence of a task-irrelevant distractor. Critically, our study reveals that this top-down gain is the result of fewer shifts of attention to the salient distractor, rather than faster disengagement after bottom-up capture has occurred. This supports the theory that top-down control can modulate the involuntary capture of attention. |
Sarah C. Creel; Melanie A. Tumlin Online recognition of music is influenced by relative and absolute pitch information Journal Article In: Cognitive Science, vol. 36, no. 2, pp. 224–260, 2012. @article{Creel2012b, Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding. |
Bruno Dagnino; Joaquin Navajas; Mariano Sigman Eye fixations indicate men's preference for female breasts or buttocks Journal Article In: Archives of Sexual Behavior, vol. 41, no. 4, pp. 929–937, 2012. @article{Dagnino2012, Evolutionary psychologists have been interested in male preferences for particular female traits that are thought to signal health and reproductive potential. While the majority of studies have focused on what makes specific body traits attractive (such as the waist-to-hip ratio, the body mass index, and breast shape and size), there is little empirical research that has examined individual differences in male preferences for specific traits (e.g., favoring breasts over buttocks). The current study begins to fill this empirical gap. In the first experiment (Study 1), 184 male participants were asked to report their preference between breasts and buttocks on a continuous scale. We found that (1) the distribution of preference was bimodal, indicating that Argentinean males tended to define themselves as favoring breasts or buttocks but rarely thinking that these traits contributed equally to their choice and (2) the distribution was biased towards buttocks. In a second experiment (Study 2), 19 male participants were asked to rate pictures of female breasts and buttocks. This study was necessary to generate three categories of pictures with statistically different ratings (high, medium, and low). In a third experiment (Study 3), we recorded eye movements of 25 male participants while they chose the more attractive between two women, only seeing their breasts and buttocks. We found that the first and last fixations were systematically directed towards the self-reported preferred trait. |