Cognitive Eye-Tracking Publications
All EyeLink cognitive and perception eye-tracking research publications up until 2024 (with some early 2025s) are listed below by year. You can search the eye-tracking publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2013 |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Anita Jansen Desire lies in the eyes: Attention bias for chocolate is related to craving and self-endorsed eating permission Journal Article In: Appetite, vol. 70, pp. 81–89, 2013. @article{Werthmann2013, The present study tested the impact of experimentally manipulated perceived availability of chocolate on attention for chocolate stimuli, momentary (state) craving for chocolate, and consumption of chocolate in healthy weight female students. It was hypothesized that eating forbiddance would be related to attentional avoidance (thus diminished attentional focus on food cues in an attempt to prevent oneself from processing food cues) and that eating motivation would be related to attentional approach (thus maintained attentional focus on food cues). High chronic chocolate cravers (n = 40) and low cravers (n = 40) participated in one of four perceived availability contexts (required to eat, forbidden to eat, individual choice to eat, and 50% chance to eat) following a brief chocolate exposure. Attention for chocolate was measured using eye tracking; momentary craving by self-report; and consumption of chocolate by direct observation. The perceived availability of chocolate did not significantly influence attention allocation for chocolate stimuli, momentary craving, or chocolate intake. High chocolate cravers reported significantly higher momentary craving for chocolate (d = 1.29, p < .001), and showed longer initial duration of gaze on chocolate, than low cravers (d = 0.63, p < .01). In contrast, participants who indicated during the manipulation check that they would not have permitted themselves to eat chocolate, irrespective of the availability instruction they received, showed significantly less craving (d = 0.96, p < .01) and reduced total dwell time for chocolate stimuli than participants who permitted themselves to eat chocolate (d = 0.53, p < .05). Thus, this study provides evidence that attention biases for food stimuli reflect inter-individual differences in eating motivation, such as chronic chocolate craving and self-endorsed eating permission. |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Karin Mogg; Brendan P. Bradley; Anita Jansen Attention bias for food is independent of restraint in healthy weight individuals-An eye tracking study Journal Article In: Eating Behaviors, vol. 14, no. 3, pp. 397–400, 2013. @article{Werthmann2013a, Objective: Restrained eating style and weight status are highly correlated. Though both have been associated with an attentional bias for food cues, in prior research restraint and BMI were often confounded. The aim of the present study was to determine the existence and nature of an attention bias for food cues in healthy-weight female restrained and unrestrained eaters, when matching the two groups on BMI. Method: Attention biases for food cues were measured by recordings of eye movements during a visual probe task with pictorial food versus non-food stimuli. Healthy weight high restrained (n = 24) and low restrained eaters (n = 21) were matched on BMI in an attempt to unconfound the effects of restraint and weight on attention allocation patterns. Results: All participants showed elevated attention biases for food stimuli in comparison to neutral stimuli, independent of restraint status. Discussion: These findings suggest that attention biases for food-related cues are common for healthy weight women and show that restrained eating (per se) is not related to biased processing of food stimuli, at least not in healthy weight participants. |
Gregory L. West; Naseem Al-Aidroos; Jay Pratt Action video game experience affects oculomotor performance Journal Article In: Acta Psychologica, vol. 142, no. 1, pp. 38–42, 2013. @article{West2013, Action video games have been shown to affect a variety of visual and cognitive processes. There is, however, little evidence of whether playing video games can also affect motor action. To investigate the potential link between experience playing action video games and changes in oculomotor action, we tested habitual action video game players (VGPs) and non-video game players (NVGPs) in a saccadic trajectory deviation task. We demonstrate that spatial curvature of a saccadic trajectory towards or away from a distractor is profoundly different between VGPs and NVGPs. In addition, task performance accuracy improved over time only in VGPs. Results are discussed in the context of the competing interplay between stimulus-driven motor programming and top-down inhibition during oculomotor execution. |
Alex L. White; Martin Rolfs; Marisa Carrasco Adaptive deployment of spatial and feature-based attention before saccades Journal Article In: Vision Research, vol. 85, pp. 26–35, 2013. @article{White2013, What you see depends not only on where you are looking but also on where you will look next. The pre-saccadic attention shift is an automatic enhancement of visual sensitivity at the target of the next saccade. We investigated whether and how perceptual factors independent of the oculomotor plan modulate pre-saccadic attention within and across trials. Observers made saccades to one (the target) of six patches of moving dots and discriminated a brief luminance pulse (the probe) that appeared at an unpredictable location. Sensitivity to the probe was always higher at the target's location (spatial attention), and this attention effect was stronger if the previous probe appeared at the previous target's location. Furthermore, sensitivity was higher for probes moving in directions similar to the target's direction (feature-based attention), but only when the previous probe moved in the same direction as the previous target. Therefore, implicit cognitive processes permeate pre-saccadic attention, so that, contingent on recent experience, it flexibly distributes resources to potentially relevant locations and features. |
Melonie Williams; Pierre Pouget; Leanne Boucher; Geoffrey F. Woodman Visual-spatial attention aids the maintenance of object representations in visual working memory Journal Article In: Memory & Cognition, vol. 41, no. 5, pp. 698–715, 2013. @article{Williams2013, Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers' eye movements while they remembered a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval should impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy, even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. |
Tracey A. Williams; Melanie A. Porter; Robyn Langdon Viewing social scenes: A visual scan-path study comparing fragile X syndrome and Williams syndrome Journal Article In: Journal of Autism and Developmental Disorders, vol. 43, no. 8, pp. 1880–1894, 2013. @article{Williams2013a, Fragile X syndrome (FXS) and Williams syndrome (WS) are both genetic disorders which present with similar cognitive-behavioral problems, but distinct social phenotypes. Despite these social differences, both syndromes display poor social relations which may result from abnormal social processing. This study aimed to manipulate the location of socially salient information within scenes to investigate the visual attentional mechanisms of capture, disengagement, and/or general engagement. Findings revealed that individuals with FXS avoid social information presented centrally, at least initially. The WS findings, on the other hand, provided some evidence that difficulties with attentional disengagement, rather than attentional capture, may play a role in the WS social phenotype. These findings are discussed in relation to the distinct social phenotypes of these two disorders. |
Vickie M. Williamson; Mary Hegarty; Ghislain Deslongchamps; Kenneth C. Williamson; Mary Jane Shultz Identifying student use of ball-and-stick images versus electrostatic potential map images via eye tracking Journal Article In: Journal of Chemical Education, vol. 90, no. 2, pp. 159–164, 2013. @article{Williamson2013, This pilot study examined students' use of ball-and-stick images versus electrostatic potential maps when asked questions about electron density, positive charge, proton attack, and hydroxide attack with six different molecules (two alcohols, two carboxylic acids, and two hydroxycarboxylic acids). Students' viewing of these dual images was measured by monitoring eye fixations of the students while they read and answered questions. Results showed that students spent significantly more time with the ball-and-stick image when asked questions about proton or hydroxide attack, but equal time on the images when asked about electron density or positive charge. When comparing accuracy and time spent on the images, students who spent more time on the ball-and-stick when asked about positive charge were less likely to be correct, while those who spent more time with the potential map were more likely to be correct. The paper serves to introduce readers to eye-tracker data and calls for replication with a larger subject pool and for the inclusion of eye tracking as a chemical education research tool. |
Paula Winke; Susan M. Gass; Tetyana Sydorenko Factors influencing the use of captions by foreign language learners: An eye-tracking study Journal Article In: The Modern Language Journal, vol. 97, no. 1, pp. 254–275, 2013. @article{Winke2013, This study investigates caption-reading behavior by foreign language (L2) learners and, through eye-tracking methodology, explores the extent to which the relationship between the native and target language affects that behavior. Second-year (4th semester) English-speaking learners of Arabic, Chinese, Russian, and Spanish watched 2 videos differing in content familiarity, each dubbed and captioned in the target language. Results indicated that time spent on captions differed significantly by language: Arabic learners spent more time on captions than learners of Spanish and Russian. A significant interaction between language and content familiarity occurred: Chinese learners spent less time on captions in the unfamiliar content video than the familiar, while others spent comparable times on each. Based on dual-processing and cognitive load theories, we posit that the Chinese learners experienced a split-attention effect when verbal processing was difficult and that, overall, captioning benefits during the 4th semester of language learning are constrained by L2 differences, including differences in script, vocabulary knowledge, concomitant L2 proficiency, and instructional methods. Results are triangulated with qualitative findings from interviews. |
Stephanie C. Wissig; Carlyn A. Patterson; Adam Kohn Adaptation improves performance on a visual search task Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 15, 2013. @article{Wissig2013, Temporal context, or adaptation, profoundly affects visual perception. Despite the strength and prevalence of adaptation effects, their functional role in visual processing remains unclear. The effects of spatial context and their functional role are better understood: these effects highlight features that differ from their surroundings and determine stimulus salience. Similarities in the perceptual and physiological effects of spatial and temporal context raise the possibility that they serve similar functions. We therefore tested the possibility that adaptation can enhance stimulus salience. We measured the effects of prolonged (40 s) adaptation to a counterphase grating on performance in a search task in which targets were defined by an orientation offset relative to a background of distracters. We found that, for targets with small orientation offsets, adaptation reduced reaction times and decreased the number of saccades made to find targets. Our results provide evidence that adaptation may function to highlight features that differ from the temporal context in which they are embedded. |
Felicity D. A. Wolohan; Sarah J. V. Bennett; Trevor J. Crawford Females and attention to eye gaze: Effects of the menstrual cycle Journal Article In: Experimental Brain Research, vol. 227, no. 3, pp. 379–386, 2013. @article{Wolohan2013, It is well known that an observer will attend to the location cued by another's eye gaze and that in some circumstances, this effect is enhanced when the emotion expressed is threat-related. This study explored whether attention to the gaze of threat-related faces is potentiated in the luteal phase of the menstrual cycle when detection of threat is suggested to be enhanced, compared to the follicular phase. Female participants were tested on a gaze cueing task in their luteal (N = 13) or follicular phase (N = 15). Participants were presented with various emotional expressions with an averted eye gaze that was either spatially congruent or incongruent with a forthcoming target. Females in the luteal phase responded faster overall to targets on trials with a 200-ms stimulus onset asynchrony interval. The results suggest that during the luteal phase, females show a general and automatic hypersensitivity to respond to stimuli associated with socially and emotionally relevant cues. This may be a part of an adaptive biological mechanism to protect foetal development. |
Jason H. Wong; Matthew S. Peterson What we remember affects how we see: Spatial working memory steers saccade programming Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 2, pp. 308–321, 2013. @article{Wong2013, Relationships between visual attention, saccade programming, and visual working memory have been hypothesized for over a decade. Awh, Jonides, and Reuter-Lorenz (Journal of Experimental Psychology: Human Perception and Performance 24(3):780-90, 1998) and Awh et al. (Psychological Science 10(5):433-437, 1999) proposed that rehearsing a location in memory also leads to enhanced attentional processing at that location. In regard to eye movements, Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) found that holding a location in working memory affects saccade programming, albeit negatively. In three experiments, we attempted to replicate the findings of Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) and determine whether the spatial memory effect can occur in other saccade-cuing paradigms, including endogenous central arrow cues and exogenous irrelevant singletons. In the first experiment, our results were the opposite of those in Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009), in that we found facilitation (shorter saccade latencies) instead of inhibition when the saccade target matched the region in spatial working memory. In Experiment 2, we sought to determine whether the spatial working memory effect would generalize to other endogenous cuing tasks, such as a central arrow that pointed to one of six possible peripheral locations. As in Experiment 1, we found that saccade programming was facilitated when the cued location coincided with the saccade target. In Experiment 3, we explored how spatial memory interacts with other types of cues, such as a peripheral color singleton target or irrelevant onset. 
In both cases, the eyes were more likely to go to either singleton when it coincided with the location held in spatial working memory. On the basis of these results, we conclude that spatial working memory and saccade programming are likely to share common overlapping circuitry. |
Heather Cleland Woods; Christoph Scheepers; K. A. Ross; Colin A. Espie; Stephany M. Biello What are you looking at? Moving toward an attentional timeline in insomnia: A novel semantic eye tracking study Journal Article In: Sleep, vol. 36, no. 10, pp. 1491–1499, 2013. @article{Woods2013, STUDY OBJECTIVES: To date, cognitive probe paradigms have been used in different guises to obtain reaction time measurements suggestive of an attention bias towards sleep in insomnia. This study adopts a methodology which is novel to sleep research to obtain a continual record of where the eyes, and therefore attention, are being allocated with regard to sleep and neutral stimuli. DESIGN: A head-mounted eye tracker (EyeLink II, SR Research, Ontario, Canada) was used to monitor eye movements with respect to two words presented on a computer screen, with one word being a sleep positive, sleep negative, or neutral word above or below a second distracter pseudoword. Probability and reaction times were the outcome measures. PARTICIPANTS: Sleep group classification was determined by screening interview and PSQI score (> 8 = insomnia, < 3 = good sleeper). MEASUREMENTS AND RESULTS: Those individuals with insomnia took longer to fixate on the target word and remained fixated for less time than the good sleep controls. Word saliency had an effect, with longer first fixations on positive and negative sleep words in both sleep groups and the largest effect sizes seen in the insomnia group. CONCLUSIONS: This overall delay in those with insomnia with regard to vigilance and maintaining attention on the target words moves away from previous attention bias work showing a bias towards sleep, particularly negative, stimuli, but is suggestive of a neurocognitive deficit in line with recent research. |
Nicola M. Wöstmann; Désirée S. Aichert; Anna Costa; Katya Rubia; Hans-Jürgen Möller; Ulrich Ettinger Reliability and plasticity of response inhibition and interference control Journal Article In: Brain and Cognition, vol. 81, no. 1, pp. 82–94, 2013. @article{Woestmann2013, This study investigated the internal reliability, temporal stability, and plasticity of commonly used measures of inhibition-related functions. Stop-signal, go/no-go, antisaccade, Simon, Eriksen flanker, Stroop, and Continuous Performance tasks were administered twice to 23 healthy participants over a period of approximately 11 weeks in order to assess test-retest correlations, internal consistency (Cronbach's alpha), and systematic between- as well as within-session performance changes. Most of the inhibition-related measures showed good test-retest reliabilities and internal consistencies, with the exception of the stop-signal reaction time measure, which showed poor reliability. Generally, no systematic performance changes were observed across the two assessments, with the exception of four variables of the Eriksen flanker, Simon, and Stroop tasks, which showed reduced variability of reaction time and an improvement in the response time for incongruent trials at second assessment. Predominantly stable performance within one test session was shown for most measures. Overall, these results are informative for studies with designs requiring temporally stable parameters, e.g., genetic or longitudinal treatment studies. |
Zhou Yang; Todd Jackson; Hong Chen Effects of chronic pain and pain-related fear on orienting and maintenance of attention: An eye movement study Journal Article In: Journal of Pain, vol. 14, no. 10, pp. 1148–1157, 2013. @article{Yang2013b, In this study, effects of chronic pain and pain-related fear on orienting and maintenance of attention toward pain stimuli were evaluated by tracking eye movements within a dot-probe paradigm. The sample comprised matched chronic pain (n = 24) and pain-free (n = 24) groups, each of which included lower and higher fear of pain subgroups. Participants completed a dot-probe task wherein eye movements were assessed during the presentation of sensory pain-neutral, health catastrophe-neutral, and neutral-neutral word pairs. Higher fear of pain levels were associated with biases in 1) directing initial gaze toward health catastrophe words and, among participants with chronic pain, 2) subsequent avoidance of threat as reflected by shorter first fixation durations on health catastrophe words compared to pain-free cohorts. As stimulus word pairs persisted for 2,000 ms, no group differences were observed for overall gaze durations or reaction times to probes that followed. In sum, this research identified specific biases in visual attention related to fear of pain and chronic pain during early stages of information processing that were not evident on the basis of later behavior responses to probes. Perspective: Effects of chronic pain and fear of pain on attention were examined by tracking eye movements within a dot-probe paradigm. Heightened fear of pain corresponded to biases in initial gaze toward health catastrophe words and, among participants with chronic pain, subsequent gaze shifts away from these words. No reaction time differences emerged. |
Chun Po Yin; Feng-Yang Kuo A study of how information system professionals comprehend indirect and direct speech acts in project communication Journal Article In: IEEE Transactions on Professional Communication, vol. 56, no. 3, pp. 226–241, 2013. @article{Yin2013, Research problem: Indirect communication is prevalent in business communication practices. For information systems (IS) projects that require professionals from multiple disciplines to work together, the use of indirect communication may hinder successful design, implementation, and maintenance of these systems. Drawing on the Speech Act Theory (SAT), this study investigates how direct and indirect speech acts may influence language comprehension in the setting of communication problems inherent in IS projects. Research questions: (1) Do participating subjects, who are IS professionals, differ in their comprehension of indirect and direct speech acts? (2) Do participants display different attention processes in their comprehension of indirect and direct speech acts? (3) Do participants' attention processes influence their comprehension of indirect and direct speech acts? Literature review: We review two relevant areas of theory: polite speech acts in professional communication and SAT. First, a broad review that focuses on literature related to the use of polite speech acts in the workplace and in information system (IS) projects suggests the importance of investigating speech acts by professionals. In addition, the SAT provides the theoretical framework guiding this study and the development of hypotheses. Methodology: The current study uses a quantitative approach. A between-groups experiment design was employed to test how direct and indirect speech acts influence the language comprehension of participants. Forty-three IS professionals participated in the experiment. 
In addition, through the use of eye-tracking technology, this study captured the attention process and analyzed the relationship between attention and comprehension. Results and discussion: The results show that the directness of speech acts significantly influences participants' attention process, which, in turn, significantly affects their comprehension. In addition, the findings indicate that indirect speech acts, if employed by IS professionals to communicate with others, may easily be distorted or misunderstood. Professionals and managers of organizations should be aware that effective communication in interdisciplinary projects, such as IS development, is not easy, and that reliance on polite or indirect communication may inhibit the generation of valid information. |
Angela H. Young; Johan Hulleman Eye movements reveal how task difficulty moulds visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 1, pp. 168–190, 2013. @article{Young2013, In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size during visual search shrinks with increasing task difficulty. In Experiment 2, we used a gaze-contingent window and confirmed the validity of the size estimates. The experiment also revealed that breakdown in robustness against item motion is related to item-by-item search, rather than search difficulty per se. We argue that visual search is an eye-movement-based process that works on a continuum, from almost parallel (where many items can be processed within a fixation) to completely serial (where only one item can be processed within a fixation). |
Kiwon Yun; Yifan Peng; Dimitris Samaras; Gregory J. Zelinsky; Tamara L. Berg Exploring the role of gaze behavior and object detection in scene understanding Journal Article In: Frontiers in Psychology, vol. 4, pp. 917, 2013. @article{Yun2013, We posit that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content. Experiments are reported, using two popular image datasets from computer vision, that explore the relationship between the fixations that people make during scene viewing, how they describe the scene, and automatic detection predictions of object categories in the scene. From these exploratory analyses, we then combine human behavior with the outputs of current visual recognition methods to build prototype human-in-the-loop applications for gaze-enabled object detection and scene annotation. |
Michael Zehetleitner; Anja Isabel Koch; Harriet Goschy; Hermann J. Müller Salience-based selection: Attentional capture by distractors less salient than the target Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52595, 2013. @article{Zehetleitner2013, Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. |
Gregory J. Zelinsky; Hossein Adeli; Yifan Peng; Dimitris Samaras Modelling eye movements in a categorical search task Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–13, 2013. @article{Zelinsky2013, We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. |
En Zhang; Gong-Liang Zhang; Wu Li Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping Journal Article In: European Journal of Neuroscience, vol. 38, no. 12, pp. 3758–3767, 2013. @article{Zhang2013, Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. 
Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training. |
Clive R. Rosenthal; Tammy W. C. Ng; Christopher Kennard Generalisation of new sequence knowledge depends on response modality Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e53990, 2013. @article{Rosenthal2013, New visuomotor skills can guide behaviour in novel situations. Prior studies indicate that learning a visuospatial sequence via responses based on manual key presses leads to effector- and response-independent knowledge. Little is known, however, about the extent to which new sequence knowledge can generalise, and, thereby guide behaviour, outside of the manual response modality. Here, we examined whether learning a visuospatial sequence either via manual (key presses, without eye movements), oculomotor (obligatory eye movements), or perceptual (covert reorienting of visuospatial attention) responses supported generalisation to direct and indirect tests administered either in the same (baseline conditions) or a novel response modality (transfer conditions) with respect to initial study. Direct tests measured the use of conscious knowledge about the studied sequence, whereas the indirect tests did not ostensibly draw on the study phase and measured response priming. Oculomotor learning supported the use of conscious knowledge on the manual direct tests, whereas manual learning supported generalisation to the oculomotor direct tests but did not support the conscious use of knowledge. Sequence knowledge acquired via perceptual responses did not generalise onto any of the manual tests. Manual, oculomotor, and perceptual sequence learning all supported generalisation in the baseline conditions. Notably, the manual baseline condition and the manual to oculomotor transfer condition differed in the magnitude of general skill acquired during the study phase; however, general skill did not predict performance on the post-study tests. 
The results demonstrated that generalisation was only affected by the responses used to initially code the visuospatial sequence when new knowledge was applied to a novel response modality. We interpret these results in terms of response-effect distinctiveness, the availability of integrated effector- and motor-plan based information, and discuss their implications for neurocognitive accounts of sequence learning. |
Nicholas M. Ross; Eileen Kowler Eye movements while viewing narrated, captioned, and silent videos Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–19, 2013. @article{Ross2013, Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. |
Paul Roux; Christine Passerieux; Franck Ramus Kinematics matters: A new eye-tracking investigation of animated triangles Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 2, pp. 229–244, 2013. @article{Roux2013, Eye movements have been recently recorded in participants watching animated triangles in short movies that normally evoke mentalizing (Frith-Happé animations). Authors have found systematic differences in oculomotor behaviour according to the degree of mental state attribution to these triangles: Participants made longer fixations and looked longer at intentional triangles than at triangles moving randomly. However, no study has yet explored kinematic characteristics of Frith-Happé animations and their influence on eye movements. In a first experiment, we have run a quantitative kinematic analysis of Frith-Happé animations and found that the time triangles spent moving and the distance between them decreased with the mentalistic complexity of their movements. In a second experiment, we have recorded eye movements in 17 participants watching Frith-Happé animations and found that some differences in fixation durations and in the proportion of gaze allocated to triangles between the different kinds of animations were entirely explained by low-level kinematic confounds. We finally present a new eye-tracking measure of visual attention, triangle pursuit duration, which does differentiate the different types of animations even after taking into account kinematic confounds. However, some idiosyncratic kinematic properties of the Frith-Happé animations prevent an entirely satisfactory interpretation of these results. The different eye-tracking measures are interpreted as implicit and on-line measures of the processing of animate movements. |
Donghyun Ryu; Bruce Abernethy; David L. Mann; Jamie M. Poolton; Adam D. Gorman The role of central and peripheral vision in expert decision making Journal Article In: Perception, vol. 42, no. 6, pp. 591–607, 2013. @article{Ryu2013, The purpose of this study was to investigate the role of central and peripheral vision in expert decision making. A gaze-contingent display was used to selectively present information to the central and peripheral areas of the visual field while participants performed a decision-making task. Eleven skilled and eleven less-skilled male basketball players watched video clips of basketball scenarios in three different viewing conditions: full-image control, moving window (central vision only), and moving mask (peripheral vision only). At the conclusion of each clip participants were required to decide whether it was more appropriate for the ball-carrier to pass the ball or to drive to the basket. The skilled players showed significantly higher response accuracy and faster response times compared with their lesser-skilled counterparts in all three viewing conditions, demonstrating superiority in information extraction that held irrespective of whether they were using central or peripheral vision. The gaze behaviour of the skilled players was less influenced by the gaze-contingent manipulations, suggesting they were better able to use the remaining information to sustain their normal gaze behaviour. The superior capacity of experts to interpret dynamic visual information is evident regardless of whether the visual information is presented across the whole visual field or selectively to either central or peripheral vision alone. |
Nuria Sagarra; Nick C. Ellis From seeing adverbs to seeing verbal morphology Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 261–290, 2013. @article{Sagarra2013, Adult learners have persistent difficulty processing second language (L2) inflectional morphology. We investigate associative learning explanations that involve the blocking of later experienced cues by earlier learned ones in the first language (L1; i.e., transfer) and the L2 (i.e., proficiency). Sagarra (2008) and Ellis and Sagarra (2010b) found that, unlike Spanish monolinguals, intermediate English-Spanish learners rely more on salient adverbs than on less salient verb inflections, but it is not clear whether this preference is a result of a default or an L1-based strategy. To address this question, 120 English (poor morphology) and Romanian (rich morphology) learners of Spanish (rich morphology) and 98 English, Romanian, and Spanish monolinguals read sentences in L2 Spanish (or their L1 in the case of the monolinguals) containing adverb-verb and verb-adverb congruencies or incongruencies and chose one of four pictures after each sentence (i.e., two that competed for meaning and two for form). Eye-tracking data revealed significant effects for (a) sensitivity (all participants were sensitive to tense incongruencies), (b) cue location in the sentence (participants spent more time at their preferred cue, regardless of its position), (c) L1 experience (morphologically rich L1 learners and monolinguals looked longer at verbs than morphologically poor L1 learners and monolinguals), and (d) L2 experience (low-proficiency learners read more slowly and regressed longer than high-proficiency learners). We conclude that intermediate and advanced learners are sensitive to tense incongruencies and—like native speakers—tend to rely more heavily on verbs if their L1 is morphologically rich. 
These findings reinforce theories that support transfer effects such as the unified competition model and the associative learning model but do not contradict Clahsen and Felser's (2006a) shallow structure hypothesis because the target structure was morphological agreement rather than syntactic agreement. |
Clare M. Press; James M. Kilner The time course of eye movements during action observation reflects sequence learning Journal Article In: NeuroReport, vol. 24, no. 14, pp. 822–826, 2013. @article{Press2013, When we observe object-directed actions such as grasping, we make predictive eye movements. However, eye movements are reactive when observing similar actions without objects. This reactivity may reflect a lack of attribution of intention to observed actors when they perform actions without 'goals'. Alternatively, it may simply signal that there is no cue present that has been predictive of the subsequent trajectory in the observer's experience. To test this hypothesis, the present study investigated how the time course of eye movements changes as a function of visual experience of predictable, but arbitrary, actions without objects. Participants observed a point-light display of a model performing sequential finger actions in a serial reaction time task. Eye movements became less reactive across blocks. In addition, participants who exhibited more predictive eye movements subsequently demonstrated greater learning when required either to execute, or to recognize, the sequence. No measures were influenced by whether participants had been instructed that the observed movements were human or lever generated. The present data indicate that eye movements when observing actions without objects reflect the extent to which the trajectory can be predicted through experience. The findings are discussed with reference to the implications for the mechanisms supporting perception of actions both with and without objects as well as those mediating inanimate object processing. |
Tim J. Preston; Fei Guo; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Neural representations of contextual guidance in visual search of real-world scenes Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7846–7855, 2013. @article{Preston2013, Exploiting scene context and object–object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. |
Steven L. Prime; Jonathan J. Marotta Gaze strategies during visually-guided versus memory-guided grasping Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 291–305, 2013. @article{Prime2013, Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action-e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream. |
Alec Scharff; John Palmer; Cathleen M. Moore Divided attention limits perception of 3-D object shapes Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 1–24, 2013. @article{Scharff2013, Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes. |
Anne Schmechtig; Jane Lees; Lois Grayson; Kevin J. Craig; Rukiya Dadhiwala; Gerard R. Dawson; J. F. William Deakin; Colin T. Dourish; Ivan Koychev; Katrina McMullen; Ellen M. Migo; Charlotte Perry; Lawrence Wilkinson; Robin Morris; Steve C. R. Williams; Ulrich Ettinger Effects of risperidone, amisulpride and nicotine on eye movement control and their modulation by schizotypy Journal Article In: Psychopharmacology, vol. 227, no. 2, pp. 331–345, 2013. @article{Schmechtig2013a, RATIONALE: The increasing demand to develop more efficient compounds to treat cognitive impairments in schizophrenia has led to the development of experimental model systems. One such model system combines the study of surrogate populations expressing high levels of schizotypy with oculomotor biomarkers. OBJECTIVES: We aimed (1) to replicate oculomotor deficits in a psychometric schizotypy sample and (2) to investigate whether the expected deficits can be remedied by compounds shown to ameliorate impairments in schizophrenia. METHODS: In this randomized double-blind, placebo-controlled study 233 healthy participants performed prosaccade (PS), antisaccade (AS) and smooth pursuit eye movement (SPEM) tasks after being randomly assigned to one of four drug groups (nicotine, risperidone, amisulpride, placebo). Participants were classified into medium- and high-schizotypy groups based on their scores on the Schizotypal Personality Questionnaire (SPQ, Raine (Schizophr Bull 17:555-564, 1991)). RESULTS: AS error rate showed a main effect of Drug (p < 0.01), with nicotine improving performance, and a Drug by Schizotypy interaction (p = 0.04), indicating higher error rates in medium schizotypes (p = 0.01) but not high schizotypes under risperidone compared to placebo. High schizotypes had higher error rates than medium schizotypes under placebo (p = 0.03). There was a main effect of Drug for saccadic peak velocity and SPEM velocity gain (both p ≤ 0.01) indicating impaired performance with risperidone. 
CONCLUSIONS: We replicate the observation of AS impairments in high schizotypy under placebo and show that nicotine enhances performance irrespective of group status. Caution should be exercised in applying this model as no beneficial effects of antipsychotics were seen in high schizotypes. |
Anne Schmechtig; Jane Lees; Adam M. Perkins; A. Altavilla; Kevin J. Craig; G. R. Dawson; J. F. William Deakin; Colin T. Dourish; L. H. Evans; Ivan Koychev; K. Weaver; R. Smallman; J. Walters; L. S. Wilkinson; R. Morris; Steve C. R. Williams; Ulrich Ettinger The effects of ketamine and risperidone on eye movement control in healthy volunteers Journal Article In: Translational Psychiatry, vol. 3, pp. e334, 2013. @article{Schmechtig2013, The non-competitive N-methyl-D-aspartate receptor antagonist ketamine leads to transient psychosis-like symptoms and impairments in oculomotor performance in healthy volunteers. This study examined whether the adverse effects of ketamine on oculomotor performance can be reversed by the atypical antipsychotic risperidone. In this randomized double-blind, placebo-controlled study, 72 healthy participants performed smooth pursuit eye movements (SPEM), prosaccades (PS) and antisaccades (AS) while being randomly assigned to one of four drug groups (intravenous 100 ng ml⁻¹ ketamine, 2 mg oral risperidone, 100 ng ml⁻¹ ketamine plus 2 mg oral risperidone, placebo). Drug administration did not lead to harmful adverse events. Ketamine increased saccadic frequency and decreased velocity gain of SPEM (all P < 0.01) but had no significant effects on PS or AS (all P ≥ 0.07). An effect of risperidone was observed for amplitude gain and peak velocity of PS and AS, indicating hypometric gain and slower velocities compared with placebo (both P ≤ 0.04). No ketamine by risperidone interactions were found (all P ≥ 0.26). The results confirm that the administration of ketamine produces oculomotor performance deficits similar in part to those seen in schizophrenia. The atypical antipsychotic risperidone did not reverse ketamine-induced deteriorations. 
These findings do not support the cognitive enhancing potential of risperidone on oculomotor biomarkers in this model system of schizophrenia and point towards the importance of developing alternative performance-enhancing compounds to optimise pharmacological treatment of schizophrenia. |
Casey A. Schofield; Albrecht W. Inhoff; Meredith E. Coles Time-course of attention biases in social phobia Journal Article In: Journal of Anxiety Disorders, vol. 27, no. 7, pp. 661–669, 2013. @article{Schofield2013, Theoretical models of social phobia implicate preferential attention to social threat in the maintenance of anxiety symptoms, though there has been limited work characterizing the nature of these biases over time. The current study utilized eye-movement data to examine the time-course of visual attention over 1500 ms trials of a probe detection task. Nineteen participants with a primary diagnosis of social phobia based on DSM-IV criteria and 20 non-clinical controls completed this task with angry, fearful, and happy face trials. Overt visual attention to the emotional and neutral faces was measured in 50 ms segments across the trial. Over time, participants with social phobia attend less to emotional faces and specifically less to happy faces compared to controls. Further, attention to emotional relative to neutral expressions did not vary notably by emotion for participants with social phobia, but control participants showed a pattern after 1000 ms in which over time they preferentially attended to happy expressions and avoided negative expressions. Findings highlight the importance of considering attention biases to positive stimuli as well as the pattern of attention between groups. These results suggest that attention "bias" in social phobia may be driven by a relative lack of the biases seen in non-anxious participants. |
Marc Pomplun; Tyler W. Garaas; Marisa Carrasco The effects of task difficulty on visual search strategy in virtual 3D displays Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–22, 2013. @article{Pomplun2013, Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. |
Florian Perdreau; Patrick Cavanagh The artist's advantage: Better integration of object information across eye movements Journal Article In: i-Perception, vol. 4, no. 6, pp. 380–395, 2013. @article{Perdreau2013, Over their careers, figurative artists spend thousands of hours analyzing objects and scene layout. We examined what impact this extensive training has on the ability to encode complex scenes, comparing participants with a wide range of training and drawing skills on a possible versus impossible objects task. We used a gaze-contingent display to control the amount of information the participants could sample on each fixation either from central or peripheral visual field. Test objects were displayed and participants reported, as quickly as possible, whether the object was structurally possible or not. Our results show that when viewing the image through a small central window, performance improved with the years of training, and to a lesser extent with the level of skill. This suggests that the extensive training itself confers an advantage for integrating object structure into more robust object descriptions. |
Melanie Perron; Annie Roy-Charland Analysis of eye movements in the judgment of enjoyment and non-enjoyment smiles Journal Article In: Frontiers in Psychology, vol. 4, pp. 659, 2013. @article{Perron2013, Enjoyment smiles are more often associated with the simultaneous presence of the Cheek raiser and Lip corner puller action units, and these units' activation is more often symmetric. Research on the judgment of smiles indicated that individuals are sensitive to these types of indices, but it also suggested that their ability to perceive these specific indices might be limited. The goal of the current study was to examine perceptual-attentional processing of smiles by using eye movement recording in a smile judgment task. Participants were presented with three types of smiles: a symmetric Duchenne, a non-Duchenne, and an asymmetric smile. Results revealed that the Duchenne smiles were judged happier than those with characteristics of non-enjoyment. Asymmetric smiles were also judged happier than the non-Duchenne smiles. Participants were as effective in judging the latter smiles as not really happy as they were in judging the symmetric Duchenne smiles as happy. Furthermore, they did not spend more time looking at the eyes or mouth regardless of types of smiles. While participants made more saccades between each side of the face for the asymmetric smiles than the symmetric ones, they judged the asymmetric smiles more often as really happy than not really happy. Thus, processing of these indices does not seem limited to perceptual-attentional difficulties as reflected in viewing behavior. |
Yoni Pertzov; Paul M. Bays; Sabine Joseph; Masud Husain Rapid forgetting prevented by retrospective attention cues Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1224–1231, 2013. @article{Pertzov2013, Recent studies have demonstrated that memory performance can be enhanced by a cue which indicates the item most likely to be subsequently probed, even when that cue is delivered seconds after a stimulus array is extinguished. Although such retro-cuing has attracted considerable interest, the mechanisms underlying it remain unclear. Here, we tested the hypothesis that retro-cues might protect an item from degradation over time. We employed two techniques that previously have not been deployed in retro-cuing tasks. First, we used a sensitive, continuous scale for reporting the orientation of a memorized item, rather than binary measures (change or no change) typically used in previous studies. Second, to investigate the stability of memory across time, we also systematically varied the duration between the retro-cue and report. Although accuracy of reporting uncued objects rapidly declined over short intervals, retro-cued items were significantly more stable, showing negligible decline in accuracy across time and protection from forgetting. Retro-cuing an object's color was just as advantageous as spatial retro-cues. These findings demonstrate that during maintenance, even when items are no longer visible, attention resources can be selectively redeployed to protect the accuracy with which a cued item can be recalled over time, but with a corresponding cost in recall for uncued items. |
Claudia Peschke; Claus C. Hilgetag; Bettina Olk Influence of stimulus type on effects of flanker, flanker position, and trial sequence in a saccadic eye movement task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2253–2267, 2013. @article{Peschke2013, Using the flanker paradigm in a task requiring eye movement responses, we examined how stimulus type (arrows vs. letters) modulated effects of flanker and flanker position. Further, we examined trial sequence effects and the impact of stimulus type on these effects. Participants responded to a central target with a left- or rightward saccade. We reasoned that arrows, being overlearned symbols of direction, are processed with less effort and are therefore linked more easily to a direction and a required response than are letters. The main findings demonstrate that (a) flanker effects were stronger for arrows than for letters, (b) flanker position more strongly modulated the flanker effect for letters than for arrows, and (c) trial sequence effects partly differed between the two stimulus types. We discuss these findings in the context of a more automatic and effortless processing of arrow relative to letter stimuli. |
Anders Petersen; Søren Kyllingsbæk Eye movements and practice effects in the attentional dwell time paradigm Journal Article In: Experimental Psychology, vol. 60, no. 1, pp. 22–33, 2013. @article{Petersen2013a, In the attentional dwell time paradigm by Duncan, Ward, and Shapiro (1994), two backward masked targets are presented at different spatial locations and separated by a varying time interval. Results show that report of the second target is severely impaired when the time interval is less than 500 ms which has been taken as a direct measure of attentional dwell time in human vision. However, we show that eye movements may have confounded the estimate of the dwell time and that the measure may not be as robust as previously suggested. The latter is supported by evidence suggesting that intensive training strongly attenuates the dwell time because of habituation to the masks. Thus, this article points to eye movements and masking as two potential methodological pitfalls that should be considered when using the attentional dwell time paradigm to investigate the temporal dynamics of attention. |
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen Attentional dwell times for targets and masks Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–12, 2013. @article{Petersen2013, Studies on the temporal dynamics of attention have shown that the report of a masked target (T2) is severely impaired when the target is presented with a delay (stimulus onset asynchrony) of less than 500 ms after a spatially separate masked target (T1). This is known as the attentional dwell time. Recently, we have proposed a computational model of this effect building on the idea that a stimulus retained in visual short-term memory (VSTM) takes up visual processing resources that otherwise could have been used to encode subsequent stimuli into VSTM. The resources are locked until the stimulus in VSTM has been recoded, which explains the long dwell time. Challenges for this model and others are findings by Moore, Egeth, Berglan, and Luck (1996) suggesting that the dwell time is substantially reduced when the mask of T1 is removed. Here we suggest that the mask of T1 modulates performance not by noticeably affecting the dwell time but instead by acting as a distractor drawing processing resources away from T2. This is consistent with our proposed model in which targets and masks compete for attentional resources and attention dwells on both. We tested the model by replicating the study by Moore et al., including a new condition in which T1 is omitted but the mask of T1 is retained. Results from this and the original study by Moore et al. are modeled with great precision. |
Matthew F. Peterson; Miguel P. Eckstein Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation Journal Article In: Psychological Science, vol. 24, no. 7, pp. 1216–1225, 2013. @article{Peterson2013, In general, humans tend to first look just below the eyes when identifying another person. Does everybody look at the same place on a face during identification, and, if not, does this variability in fixation behavior lead to functional consequences? In two conditions, observers had their free eye movements recorded while they performed a face-identification task. In another condition, the same observers identified faces while their gaze was restricted to specific locations on each face. We found substantial differences, which persisted over time, in where individuals chose to first move their eyes. Observers' systematic departure from a canonical, theoretically optimal fixation point did not correlate with performance degradation. Instead, each individual's looking preference corresponded to an idiosyncratic performance-maximizing point of fixation: Those who looked lower on the face performed better when forced to fixate the lower part of the face. The results suggest an observer-specific synergy between the face-recognition and eye movement systems that optimizes face-identification performance. |
Judith Peth; Johann S. C. Kim; Matthias Gamer Fixations and eye-blinks allow for detecting concealed crime related memories Journal Article In: International Journal of Psychophysiology, vol. 88, no. 1, pp. 96–103, 2013. @article{Peth2013, The Concealed Information Test (CIT) is a method of forensic psychophysiology that allows for revealing concealed crime related knowledge. Such detection is usually based on autonomic responses but there is a huge interest in other measures that can be acquired unobtrusively. Eye movements and blinks might be such measures but their validity is unclear. Using a mock crime procedure with a manipulation of the arousal during the crime as well as the delay between crime and CIT, we tested whether eye tracking measures allow for detecting concealed knowledge. Guilty participants showed fewer but longer fixations on central crime details and this effect was even present after stimulus offset and accompanied by a reduced blink rate. These ocular measures were partly sensitive for induction of emotional arousal and time of testing. Validity estimates were moderate but indicate that a significant differentiation between guilty and innocent subjects is possible. Future research should further investigate validity differences between gaze measures during a CIT and explore the underlying mechanisms. |
Bettina Olk Measuring the allocation of attention in the Stroop task: Evidence from eye movement patterns Journal Article In: Psychological Research, vol. 77, no. 2, pp. 106–115, 2013. @article{Olk2013, Attention plays a crucial role in the Stroop task, which requires attending to less automatically processed task-relevant attributes of stimuli and the suppression of involuntary processing of task-irrelevant attributes. The experiment assessed the allocation of attention by monitoring eye movements throughout congruent and incongruent trials. Participants viewed two stimulus arrays that differed regarding the number of items and their numerical value and judged by manual response which of the arrays contained more items, while disregarding their value. Different viewing patterns were observed between congruent (e.g., larger array of numbers with higher value) and incongruent (e.g., larger array of numbers with lower value) trials. The direction of first saccades was guided by task-relevant information but in the incongruent condition directed more frequently towards task-irrelevant information. The data further suggest that the difference in the deployment of attention between conditions changes throughout a trial, likely reflecting the impact and resolution of the conflict. For instance, stimulus arrays in line with the correct response were attended for longer and fixations were longer for incongruent trials, with the second fixation and considering all fixations. By the time of the correct response, this latter difference between conditions was absent. Possible mechanisms underlying eye movement patterns are discussed. |
Hans P. Op de Beeck; Ben Vermaercke; Daniel G. Woolley; Nicole Wenderoth Combinatorial brain decoding of people's whereabouts during visuospatial navigation Journal Article In: Frontiers in Neuroscience, vol. 7, pp. 78, 2013. @article{OpdeBeeck2013, Complex behavior typically relies upon many different processes which are related to activity in multiple brain regions. In contrast, neuroimaging analyses typically focus upon isolated processes. Here we present a new approach, combinatorial brain decoding, in which we decode complex behavior by combining the information which we can retrieve from the neural signals about the many different sub-processes. The case in point is visuospatial navigation. We explore the extent to which the route travelled by human subjects (N = 3) in a complex virtual maze can be decoded from activity patterns as measured with functional magnetic resonance imaging. Preliminary analyses suggest that it is difficult to directly decode spatial position from regions known to contain an explicit cognitive map of the environment, such as the hippocampus. Instead, we were able to indirectly derive spatial position from the pattern of activity in visual and motor cortex. The non-spatial representations in these regions reflect processes which are inherent to navigation, such as which stimuli are perceived at which point in time and which motor movement is executed when (e.g., turning left at a crossroad). Highly successful decoding of routes followed through the maze was possible by combining information about multiple aspects of navigation events across time and across multiple cortical regions. This "proof of principle" study highlights how visuospatial navigation is related to the combined activity of multiple brain regions, and establishes combinatorial brain decoding as a means to study complex mental events that involve a dynamic interplay of many cognitive processes. |
Jorge Otero-Millan; Stephen L. Macknik; Rachel E. Langston; Susana Martinez-Conde An oculomotor continuum from exploration to fixation Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 15, pp. 6175–6180, 2013. @article{OteroMillan2013, During visual exploration, saccadic eye movements scan the scene for objects of interest. During attempted fixation, the eyes are relatively still but often produce microsaccades. Saccadic rates during exploration are higher than those of microsaccades during fixation, reinforcing the classic view that exploration and fixation are two distinct oculomotor behaviors. An alternative model is that fixation and exploration are not dichotomous, but are instead two extremes of a functional continuum. Here, we measured the eye movements of human observers as they either fixed their gaze on a small spot or scanned natural scenes of varying sizes. As scene size diminished, so did saccade rates, until they were continuous with microsaccadic rates during fixation. Other saccadic properties varied as a function of image size as well, forming a continuum with microsaccadic parameters during fixation. This saccadic continuum extended to nonrestrictive, ecological viewing conditions that allowed all types of saccades and fixation positions. Eye movement simulations moreover showed that a single model of oculomotor behavior can explain the saccadic continuum from exploration to fixation, for images of all sizes. These findings challenge the view that exploration and fixation are dichotomous, suggesting instead that visual fixation is functionally equivalent to visual exploration on a spatially focused scale. |
Weston Pack; Thom Carney; Stanley A. Klein Involuntary attention enhances identification accuracy for unmasked low contrast letters using non-predictive peripheral cues Journal Article In: Vision Research, vol. 89, pp. 79–89, 2013. @article{Pack2013, There is controversy regarding whether or not involuntary attention improves response accuracy at a cued location when the cue is non-predictive and if these cueing effects are dependent on backward masking. Various perceptual and decisional mechanisms of performance enhancement have been proposed, such as signal enhancement, noise reduction, spatial uncertainty reduction, and decisional processes. Herein we review a recent report of mask-dependent accuracy improvements with low contrast stimuli and demonstrate that the experiments contained stimulus artifacts whereby the cue impaired perception of low contrast stimuli, leading to an absence of improved response accuracy with unmasked stimuli. Our experiments corrected these artifacts by implementing an isoluminant cue and increasing its distance relative to the targets. The results demonstrate that cueing effects are robust for unmasked stimuli presented in the periphery, resolving some of the controversy concerning cueing enhancement effects from involuntary attention and mask dependency. Unmasked low contrast and/or short duration stimuli as implemented in these experiments may have a short enough iconic decay that the visual system functions similarly as if a mask were present, leading to improved accuracy with a valid cue. |
Simon Palmer; Uwe Mattler Masked stimuli modulate endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 486–503, 2013. @article{Palmer2013, Unconscious stimuli can influence participants' motor behavior but also more complex mental processes. Recent research has gradually extended the limits of effects of unconscious stimuli. One field of research where such limits have been proposed is spatial cueing, where exogenous automatic shifts of attention have been distinguished from endogenous controlled processes which govern voluntary shifts of attention. Previous evidence suggests unconscious effects on mechanisms of exogenous shifts of attention. Here, we applied a cue-priming paradigm to a spatial cueing task with arbitrary cues by centrally presenting a masked symmetrical prime before every cue stimulus. We found priming effects on response times in target discrimination tasks with the typical dynamic of cue-priming effects (Experiments 1 and 2) indicating that central symmetrical stimuli which have been associated with endogenous orienting can modulate shifts of spatial attention even when they are masked. Prime-Cue Congruency effects of perceptually dissimilar prime and cue stimuli (Experiment 3) suggest that these effects cannot be entirely reduced to perceptual repetition priming of cue processing. In addition, priming effects did not differ between participants with good and poor prime recognition performance, consistent with the view that unconscious stimulus features have access to processes of endogenous shifts of attention. |
Simon Palmer; Uwe Mattler On the source and scope of priming effects of masked stimuli on endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 528–544, 2013. @article{Palmer2013a, Unconscious stimuli can influence participants' motor behavior as well as more complex mental processes. Previous cue-priming experiments demonstrated that masked cues can modulate endogenous shifts of spatial attention as measured by choice reaction time tasks. Here, we applied a signal detection task with masked luminance targets to determine the source and the scope of effects of masked stimuli. Target-detection performance was modulated by prime-cue congruency, indicating that prime-cue congruency modulates signal enhancement at early levels of target processing. These effects, however, were only found when the prime was perceptually similar to the cue, indicating that primes influence early target processing in an indirect way by facilitating cue processing. Together with previous research we conclude that masked stimuli can modulate perceptual and post-central levels of processing. Findings mark a new limit of the effects of unconscious stimuli, which seem to have a smaller scope than conscious stimuli. |
Sarah J. Rappaport; Glyn W. Humphreys; M. Jane Riddoch The attraction of yellow corn: Reduced attentional constraints on coding learned conjunctive relations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 1016–1031, 2013. @article{Rappaport2013, Physiological evidence indicates that different visual features are computed quasi-independently. The subsequent step of binding features, to generate coherent perception, is typically considered a major rate-limiting process, confined to one location at a time and taking 25 ms per item or longer (A. Treisman & S. Gormican, 1988, Feature analysis in early vision: Evidence from search asymmetries, Psychological Review, Vol. 95, pp. 15-48). We examined whether these processing limitations remain once bindings are learned for familiar objects. Participants searched for objects that could appear either in familiar or unfamiliar colors. Objects in familiar colors were detected efficiently at rates consistent with simultaneous binding across multiple stimuli. Processing limitations were evident for objects in unfamiliar colors. The advantage for the learned color for known targets was eliminated when participants searched for geometric shapes carrying the object colors and when the colors fell in local background areas around the shapes. The effect occurred irrespective of whether the nontargets had familiar colors, but was largest when nontargets had incorrect colors. The efficient search for targets in familiar colors held, even when the search was biased to favor objects in unfamiliar colors. The data indicate that learned bindings can be computed with minimal attentional limitations, consistent with the direct activation of learned conjunctive representations in vision. |
Benjamin Reichelt; Sina Kühnel; Dennis E. Dal Mas The influence of explicit and implicit memory processes on experience-dependent eye movements Journal Article In: Procedia Social and Behavioral Sciences, vol. 82, pp. 455–460, 2013. @article{Reichelt2013, In some studies experience-dependent eye movements have been reported with as well as without conscious awareness. Thus, our study aims to clarify if experience-dependent eye movements are influenced by mainly implicit or explicit memory processes. In Experiment 1, participants saw photographed scenes that were novel, repeated, or repeated with a manipulation (object added/removed). In Experiment 2, participants viewed novel and repeated scenes distributed over three days. Participants subsequently had to recognize whether the scenes were novel, repeated or manipulated. In both experiments, experience-dependent eye movements were observed when participants were aware of the manipulation or repetition as well as when they were unaware. In contrast to previous studies, our results suggest that explicit as well as implicit memory processes have an influence on experience-dependent eye movements. |
Fabio Richlan; Benjamin Gagl; Sarah Schuster; Stefan Hawelka; Josef Humenberger; Florian Hutzler A new high-speed visual stimulation method for gaze-contingent eye movement and brain activity studies Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 24, 2013. @article{Richlan2013, Approaches using eye movements as markers of ongoing brain activity to investigate perceptual and cognitive processes were able to implement highly sophisticated paradigms driven by eye movement recordings. Crucially, these paradigms involve display changes that have to occur during the time of saccadic blindness, when the subject is unaware of the change. Therefore, a combination of high-speed eye tracking and high-speed visual stimulation is required in these paradigms. For combined eye movement and brain activity studies (e.g., fMRI, EEG, MEG), fast and exact timing of display changes is especially important, because of the high susceptibility of the brain to visual stimulation. Eye tracking systems already achieve sampling rates up to 2000 Hz, but recent LCD technologies for computer screens reduced the temporal resolution to mostly 60 Hz, which is too slow for gaze-contingent display changes. We developed a high-speed video projection system, which is capable of reliably delivering display changes within the time frame of < 5 ms. This could not be achieved even with the fastest cathode ray tube (CRT) monitors available (< 16 ms). The present video projection system facilitates the realization of cutting-edge eye movement research requiring reliable high-speed visual stimulation (e.g., gaze-contingent display changes, short-time presentation, masked priming). Moreover, this system can be used for fast visual presentation in order to assess brain activity using various methods, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). 
The latter technique was previously excluded from high-speed visual stimulation, because it is not possible to operate conventional CRT monitors in the strong magnetic field of an MRI scanner. Therefore, the present video projection system offers new possibilities for studying eye movement-related brain activity using a combination of eye tracking and fMRI. |
Gerulf Rieger; Allen M. Rosenthal; Brian M. Cash; Joan A. W. Linsenmeier; J. Michael Bailey; Ritch C. Savin-Williams Male bisexual arousal: A matter of curiosity? Journal Article In: Biological Psychology, vol. 94, no. 3, pp. 479–489, 2013. @article{Rieger2013, Conflicting evidence exists regarding whether bisexual-identified men are sexually aroused to both men and women. We hypothesized that a distinct characteristic, level of curiosity about sexually diverse acts, distinguishes bisexual-identified men with and without bisexual arousal. Study 1 assessed men's (n = 277) sexual arousal via pupil dilation to male and female sexual stimuli. Bisexual men were, on average, higher in their sexual curiosity than other men. Despite this general difference, only bisexual-identified men with elevated sexual curiosity showed bisexual arousal. Those lower in curiosity had responses resembling those of homosexual men. Study 2 assessed men's (n = 72) sexual arousal via genital responses and replicated findings of Study 1. Study 3 provided information on the validity of our measure of sexual curiosity by relating it to general curiosity and sexual sensation seeking (n = 83). Based on their sexual arousal and personality, at least two groups of men identify as bisexual. |
Hector Rieiro; Susana Martinez-Conde; Stephen L. Macknik Perceptual elements in Penn & Teller's “Cups and Balls” magic trick Journal Article In: PeerJ, vol. 1, pp. 1–12, 2013. @article{Rieiro2013, Magic illusions provide the perceptual and cognitive scientist with a toolbox of experimental manipulations and testable hypotheses about the building blocks of conscious experience. Here we studied several sleight-of-hand manipulations in the performance of the classic "Cups and Balls" magic trick (where balls appear and disappear inside upside-down opaque cups). We examined a version inspired by the entertainment duo Penn & Teller, conducted with three opaque and subsequently with three transparent cups. Magician Teller used his right hand to load (i.e. introduce surreptitiously) a small ball inside each of two upside-down cups, one at a time, while using his left hand to remove a different ball from the upside-down bottom of the cup. The sleight at the third cup involved one of six manipulations: (a) standard maneuver, (b) standard maneuver without a third ball, (c) ball placed on the table, (d) ball lifted, (e) ball dropped to the floor, and (f) ball stuck to the cup. Seven subjects watched the videos of the performances while reporting, via button press, whenever balls were removed from the cups/table (button "1") or placed inside the cups/on the table (button "2"). Subjects' perception was more accurate with transparent than with opaque cups. Perceptual performance was worse for the conditions where the ball was placed on the table, or stuck to the cup, than for the standard maneuver. The condition in which the ball was lifted displaced the subjects' gaze position the most, whereas the condition in which there was no ball caused the smallest gaze displacement. Training improved the subjects' perceptual performance. 
Occlusion of the magician's face did not affect the subjects' perception, suggesting that gaze misdirection does not play a strong role in the Cups and Balls illusion. Our results have implications for how to optimize the performance of this classic magic trick, and for the types of hand and object motion that maximize magic misdirection. |
James A. Roberts; Guy Wallis; Michael Breakspear Fixational eye movements during viewing of dynamic natural scenes Journal Article In: Frontiers in Psychology, vol. 4, pp. 797, 2013. @article{Roberts2013a, Even during periods of fixation our eyes undergo small amplitude movements. These movements are thought to be essential to the visual system because neural responses rapidly fade when images are stabilized on the retina. The considerable recent interest in fixational eye movements (FEMs) has thus far concentrated on idealized experimental conditions with artificial stimuli and restrained head movements, which are not necessarily a suitable model for natural vision. Natural dynamic stimuli, such as movies, offer the potential to move beyond restrictive experimental settings to probe the visual system with greater ecological validity. Here, we study FEMs recorded in humans during the unconstrained viewing of a dynamic and realistic visual environment, revealing that drift trajectories exhibit the properties of a random walk with memory. Drifts are correlated at short time scales such that the gaze position diverges from the initial fixation more quickly than would be expected for an uncorrelated random walk. We propose a simple model based on the premise that the eye tends to avoid retracing its recent steps to prevent photoreceptor adaptation. The model reproduces key features of the observed dynamics and enables estimation of parameters from data. Our findings show that FEM correlations thought to prevent perceptual fading exist even in highly dynamic real-world conditions. |
Jörg Schorer; Rebecca Rienhoff; Lennart Fischer; Joseph Baker Foveal and peripheral fields of vision influences perceptual skill in anticipating opponents' attacking position in volleyball Journal Article In: Applied Psychophysiology Biofeedback, vol. 38, no. 3, pp. 185–192, 2013. @article{Schorer2013, The importance of perceptual-cognitive expertise in sport has been repeatedly demonstrated. In this study we examined the role of different sources of visual information (i.e., foveal versus peripheral) in anticipating volleyball attack positions. Expert (n = 11), advanced (n = 13) and novice (n = 16) players completed an anticipation task that involved predicting the location of volleyball attacks. Video clips of volleyball attacks (n = 72) were spatially and temporally occluded to provide varying amounts of information to the participant. In addition, participants viewed the attacks under three visual conditions: full vision, foveal vision only, and peripheral vision only. Analysis of variance revealed significant between group differences in prediction accuracy with higher skilled players performing better than lower skilled players. Additionally, we found significant differences between temporal and spatial occlusion conditions. Both of those factors interacted separately, but not combined with expertise. Importantly, for experts the sum of both fields of vision was superior to either source in isolation. Our results suggest different sources of visual information work collectively to facilitate expert anticipation in time-constrained sports and reinforce the complexity of expert perception. |
Elizabeth R. Schotter; Victor S. Ferreira; Keith Rayner Parallel object activation and attentional gating of information: Evidence from eye movements in the multiple object naming paradigm Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 365–374, 2013. @article{Schotter2013, Do we access information from any object we can see, or do we access information only from objects that we intend to name? In 3 experiments using a modified multiple object naming paradigm, subjects were required to name several objects in succession when previews appeared briefly and simultaneously in the same location as the target as well as at another location. In Experiment 1, preview benefit-faster processing of the target when the preview was related (a mirror image of the target) compared to unrelated (semantically and phonologically)-was found for the preview in the target location but not a location that was never to be named. In Experiment 2, preview benefit was found if a related preview appeared in either the target location or the third-to-be-named location. Experiment 3 showed the difference between results from the first 2 experiments was not due to the number of objects on the screen. These data suggest that attention serves to gate visual input about objects based on the intention to name them and that information from one intended-to-be-named object can facilitate processing of an object in another location. |
Alexander C. Schutz; Felix Lossin; Dirk Kerzel Temporal stimulus properties that attract gaze to the periphery and repel gaze from fixation Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–17, 2013. @article{Schutz2013, Humans use saccadic eye movements to fixate different parts of their visual environment. While stimulus features that determine the location of the next fixation in static images have been extensively studied, temporal stimulus features that determine the timing of gaze shifts have received less attention. It is also unclear if stimulus features at the present gaze location can trigger gaze shifts to another location. To investigate these questions, we asked observers to switch their gaze between two blobs. In three different conditions, either the fixated blob, the peripheral blob, or both blobs were flickering. A time-frequency analysis of the flickering noise values, time locked to the gaze shifts, revealed significant phase locking in a time window 300 to 100 ms before the gaze shift at temporal frequencies below 20 Hz. The average phase angles at these time-frequency points indicated that observers' gaze was repelled by decreasing contrast of the fixated blob and attracted by increasing contrast of the peripheral blob. These results show that temporal properties of both fixated and peripheral stimuli are capable of triggering gaze shifts. |
Immo Schütz; Denise Y. P. Henriques; K. Fiehler Gaze-centered spatial updating in delayed reaching even in the presence of landmarks Journal Article In: Vision Research, vol. 87, pp. 46–52, 2013. @article{Schuetz2013, Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and more precise than when reaching without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain seems to still use gaze-dependent representations but combine them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets. |
Alessandra Sciutti; Ambra Bisio; Francesco Nori; Giorgio Metta; Luciano Fadiga; Giulio Sandini Robots can be perceived as goal-oriented agents Journal Article In: Interaction Studies, vol. 14, no. 3, pp. 329–350, 2013. @article{Sciutti2013, Understanding the goals of others is fundamental for any kind of interpersonal interaction and collaboration. From a neurocognitive perspective, intention understanding has been proposed to depend on an involvement of the observer's motor system in the prediction of the observed actions (Nyström et al. 2011; Rizzolatti & Sinigaglia 2010; Southgate et al. 2009). An open question is if a similar understanding of the goal mediated by motor resonance can occur not only between humans, but also for humanoid robots. In this study we investigated whether goal-oriented robotic actions can induce motor resonance by measuring the appearance of anticipatory gaze shifts to the goal during action observation. Our results indicate a similar implicit processing of humans' and robots' actions and propose to use anticipatory gaze behaviour as a tool for the evaluation of human-robot interactions. |
Diego E. Shalom; Maximiliano G. Sousa Serro; Maximiliano Giaconia; Luis M. Martinez; Andres Rieznik; Mariano Sigman Choosing in freedom or forced to choose? Introspective blindness to psychological forcing in stage-magic Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e58254, 2013. @article{Shalom2013, We investigated an individual ability to identify whether choices were made freely or forced by external parameters. We capitalized on magical setups where the notion of psychological forcing constitutes a well-trodden path. In live stage magic, a magician guessed cards from spectators while inquiring how freely they thought they had made the choice. Our data showed a marked blindness in the introspection of free choice. Spectators assigned comparable ratings when choosing the card that the magician deliberately forced on them compared to any other card, even in classical forcing, where the magician literally hands a card to the participant. This observation was paralleled by a laboratory experiment where we observed modest changes in subjective reports by factors with drastic effect in choice. Pupil dilatation, which is known to tag slow cognitive events related to memory and attention, constitutes an efficient fingerprint to index subjective and objective aspects of choice. |
Diego E. Shalom; Mariano Sigman Freedom and rules in human sequential performance: A refractory period in eye-hand coordination Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–13, 2013. @article{Shalom2013a, In action sequences, the eyes and hands ought to be coordinated in precise ways. The mechanisms governing the architecture of encoding and action of several effectors remain unknown. Here we study hand and eye movements in a sequential task in which letters have to be typed while they move down through the screen. We observe a strict refractory period of about 200 ms between the initiation of manual and eye movements. Subjects do not initiate a saccade just after typing and do not type just after making the saccade. This refractory period is observed ubiquitously in every subject and in each step of the sequential task, even when keystrokes and saccades correspond to different items of the sequence-for instance when a subject types a letter that has been gazed at in a preceding fixation. These results extend classic findings of dual-task paradigms, of a bottleneck tightly locked to the response selection process, to unbounded serial routines. Interestingly, while the bottleneck is seemingly inevitable, better performing subjects can adopt a strategy to minimize the cost of the bottleneck, overlapping the refractory period with the encoding of the next item in the sequence. |
Tracey A. Shaw; Melanie A. Porter Emotion recognition and visual-scan paths in fragile X syndrome Journal Article In: Journal of Autism and Developmental Disorders, vol. 43, no. 5, pp. 1119–1139, 2013. @article{Shaw2013, This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored. Results indicated that, compared to both control groups, the FXS group displayed specific emotion recognition deficits for angry and neutral (but not happy or fearful) facial expressions. Despite these evident emotion recognition deficits, the visual scanning of emotional faces was found to be at developmentally appropriate levels in the FXS group. Significant relationships were also observed between visual scan-paths, emotion recognition performance and symptomology in the FXS group. |
Stefan Van der Stigchel; Richard A. I. Bethlehem; Barrie P. Klein; Tos T. J. M. Berendschot; Tanja C. W. Nijboer; Serge O. Dumoulin Macular degeneration affects eye movement behavior during visual search Journal Article In: Frontiers in Psychology, vol. 4, pp. 579, 2013. @article{VanderStigchel2013, Patients with a scotoma in their central vision (e.g., due to macular degeneration, MD) commonly adopt a strategy to direct the eyes such that the image falls onto a peripheral location on the retina. This location is referred to as the preferred retinal locus (PRL). Although previous research has investigated the characteristics of this PRL, it is unclear whether eye movement metrics are modulated by peripheral viewing with a PRL as measured during a visual search paradigm. To this end, we tested four MD patients in a visual search paradigm and contrasted their performance with a healthy control group and a healthy control group performing the same experiment with a simulated scotoma. The experiment contained two conditions. In the first condition the target was an unfilled circle hidden among c-shaped distractors (serial condition) and in the second condition the target was a filled circle (pop-out condition). Saccadic search latencies for the MD group were significantly longer in both conditions compared to both control groups. Results of a subsequent experiment indicated that this difference between the MD and the control groups could not be explained by a difference in target selection sensitivity. Furthermore, search behavior of MD patients was associated with saccades with smaller amplitudes toward the scotoma, an increased intersaccadic interval and an increased number of eye movements necessary to locate the target. Some of these characteristics, such as the increased intersaccadic interval, were also observed in the simulation group, which indicates that these characteristics are related to the peripheral viewing itself. 
We suggest that the combination of the central scotoma and peripheral viewing can explain the altered search behavior and no behavioral evidence was found for a possible reorganization of the visual system associated with the use of a PRL. Thus the switch from a fovea-based to a PRL-based reference frame impairs search efficiency. |
Nathalie Van Humbeeck; Nadine Schmitt; Frouke Hermens; Johan Wagemans; Udo A. Ernst The role of eye movements in a contour detection task Journal Article In: Journal of Vision, vol. 13, no. 14, pp. 5, 2013. @article{VanHumbeeck2013, Vision combines local feature integration with active viewing processes, such as eye movements, to perceive complex visual scenes. However, it is still unclear how these processes interact and support each other. Here, we investigated how the dynamics of saccadic eye movements interact with contour integration, focusing on situations in which contours are difficult to find or even absent. We recorded observers' eye movements while they searched for a contour embedded in a background of randomly oriented elements. Task difficulty was manipulated by varying the contour's path angle. An association field model of contour integration was employed to predict potential saccade targets by identifying stimulus locations with high contour salience. We found that the number and duration of fixations increased with the increasing path angle of the contour. In addition, fixation duration increased over the course of a trial, and the time course of saccade amplitude depended on the percept of observers. Model fitting revealed that saccades fully compensate for the reduced saliency of peripheral contour targets. Importantly, our model predicted fixation locations to a considerable degree, indicating that observers fixated collinear elements. These results show that contour integration actively guides eye movements and determines their spatial and temporal parameters. |
Anouk Mariette Loon; Tomas Knapen; H. Steven Scholte; Elexa St. John-Saaltink; Tobias H. Donner; Victor A. F. Lamme GABA shapes the dynamics of bistable perception Journal Article In: Current Biology, vol. 23, no. 9, pp. 823–827, 2013. @article{Loon2013, Sometimes, perception fluctuates spontaneously between two distinct interpretations of a constant sensory input. These bistable perceptual phenomena provide a unique window into the neural mechanisms that create the contents of conscious perception [1]. Models of bistable perception posit that mutual inhibition between stimulus-selective neural populations in visual cortex plays a key role in these spontaneous perceptual fluctuations [2, 3]. However, a direct link between neural inhibition and bistable perception has not yet been established experimentally. Here, we link perceptual dynamics in three distinct bistable visual illusions (binocular rivalry, motion-induced blindness, and structure from motion) to measurements of gamma-aminobutyric acid (GABA) concentrations in human visual cortex (as measured with magnetic resonance spectroscopy) and to pharmacological stimulation of the GABAA receptor by means of lorazepam. As predicted by a model of neural interactions underlying bistability, both higher GABA concentrations in visual cortex and lorazepam administration induced slower perceptual dynamics, as reflected in a reduced number of perceptual switches and a lengthening of percept durations. Thus, we show that GABA, the main inhibitory neurotransmitter, shapes the dynamics of bistable perception. These results pave the way for future studies into the competitive neural interactions across the visual cortical hierarchy that elicit conscious perception. |
Kathleen Vancleef; Johan Wagemans; Glyn W. Humphreys Impaired texture segregation but spared contour integration following damage to right posterior parietal cortex Journal Article In: Experimental Brain Research, vol. 230, no. 1, pp. 41–57, 2013. @article{Vancleef2013, We examined the relations between texture segregation and contour integration in patients with deficits in spatial attention leading to left or right hemisphere extinction. Patients and control participants were presented with texture and contour stimuli consisting of oriented elements. We induced regularity in the stimuli by manipulating the element orientations resulting in an implicit texture border or explicit contour. Participants had to discriminate curved from straight shapes without making eye movements, while the stimulus presentation time was varied using a QUEST procedure. The results showed that only patients with right hemisphere extinction had a spatial bias, needing a longer presentation time to determine the shape of the border or contour on the contralesional side, especially for borders defined by texture. These results indicate that texture segregation is modulated by attention-related brain areas in the right posterior parietal cortex. |
Signe Vangkilde; Anders Petersen; Claus Bundesen Temporal expectancy in the context of a theory of visual attention Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–11, 2013. @article{Vangkilde2013, Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue-stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0, in ms) and the perceptual processing speed (v, in letters s⁻¹) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations. |
Ronaldo Vigo; Derek E. Zeigler; Phillip A. Halsey Gaze and informativeness during category learning: Evidence for an inverse relation Journal Article In: Visual Cognition, vol. 21, no. 4, pp. 446–476, 2013. @article{Vigo2013, In what follows, we explore the general relationship between eye gaze during a category learning task and the information conveyed by each member of the learned category. To understand the nature of this relationship empirically, we used eye tracking during a novel object classification paradigm. Results suggest that the average fixation time per object during learning is inversely proportional to the amount of information that object conveys about its category. This inverse relationship may seem counterintuitive; however, objects that have a high-information value are inherently more representative of their category. Therefore, their generality captures the essence of the category structure relative to less representative objects. As such, it takes relatively less time to process these objects than their less informative companions. We use a general information measure referred to as representational information theory (Vigo, 2011a, 2013a) to articulate and interpret the results from our experiment and compare its predictions to those of three models of prototypicality. |
Heather Sheridan; Eyal M. Reingold The mechanisms and boundary conditions of the Einstellung Effect in chess: Evidence from eye movements Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75796, 2013. @article{Sheridan2013, In a wide range of problem-solving settings, the presence of a familiar solution can block the discovery of better solutions (i.e., the Einstellung effect). To investigate this effect, we monitored the eye movements of expert and novice chess players while they solved chess problems that contained a familiar move (i.e., the Einstellung move), as well as an optimal move that was located in a different region of the board. When the Einstellung move was an advantageous (but suboptimal) move, both the expert and novice chess players who chose the Einstellung move continued to look at this move throughout the trial, whereas the subset of expert players who chose the optimal move were able to gradually disengage their attention from the Einstellung move. However, when the Einstellung move was a blunder, all of the experts and the majority of the novices were able to avoid selecting the Einstellung move, and both the experts and novices gradually disengaged their attention from the Einstellung move. These findings shed light on the boundary conditions of the Einstellung effect, and provide convergent evidence for the conclusion of Bilalić, McLeod, and Gobet (2008) that the Einstellung effect operates by biasing attention towards problem features that are associated with the familiar solution rather than the optimal solution. |
Veronica Shi; Jie Cui; Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde Effect of stimulus width on simultaneous contrast Journal Article In: PeerJ, vol. 1, pp. 1–13, 2013. @article{Shi2013, Perceived brightness of a stimulus depends on the background against which the stimulus is set, a phenomenon known as simultaneous contrast. For instance, the same gray stimulus can look light against a black background or dark against a white background. Here we quantified the perceptual strength of simultaneous contrast as a function of stimulus width. Previous studies have reported that wider stimuli result in weaker simultaneous contrast, whereas narrower stimuli result in stronger simultaneous contrast. However, no previous research has quantified this relationship. Our results show a logarithmic relationship between stimulus width and perceived brightness. This relationship is well matched by the normalized output of a Difference-of-Gaussians (DOG) filter applied to stimuli of varied widths. |
Masanori Shimono; Kazuhisa Niki Global mapping of the whole-brain network underlining binocular rivalry Journal Article In: Brain Connectivity, vol. 3, no. 2, pp. 212–221, 2013. @article{Shimono2013, We investigated how the structure of the brain network relates to the stability of perceptual alternation in binocular rivalry. Historically, binocular rivalry has provided important new insights in neuroscience. Although various relationships between local regions of the human brain structure and perceptual switching phenomena have been shown in previous research, the global organization of the human brain structural network relating to this phenomenon has not yet been addressed. To approach this issue, we reconstructed fiber-tract bundles using diffusion tensor imaging and then evaluated the correlations between the speeds of perceptual alternation and fractional anisotropy (FA) values in each fiber-tract bundle integrating among 84 brain regions. The resulting comparison revealed that the distribution of the global organization of the structural brain network showed positive or negative correlations between the speeds of perceptual alternation and the FA values. First, the connections between the subcortical regions were consistently negatively correlated. Second, the connections between the cortical regions mainly showed positive correlations. Third, almost all other cortical connections that showed negative correlations were located in one central cluster of the subcortical connections. This contrast between the contribution of the cortical regions to destabilization and the contribution of the subcortical regions to stabilization of perceptual alternation provides important information as to how the global architecture of the brain structural network supports the phenomenon of binocular rivalry. |
Alisha Siebold; Wieske Zoest; Martijn Meeter; Mieke Donk In defense of the salience map: Salience rather than visibility determines selection Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 6, pp. 1516–1524, 2013. @article{Siebold2013, The aim of the present study was to investigate whether time-dependent biases of oculomotor selection as typically observed during visual search are better accounted for by an absolute-processing-speed account (J. P. de Vries, I. T. C. Hooge, M. A. Wiering, & F. A. J. Verstraten, 2011, How longer saccade latencies lead to a competition for salience. Psychological Science, 22, 916-923) or a relative-salience account (e.g., M. Donk, & W. van Zoest, 2008, Effects of salience are short-lived. Psychological Science, 19, 733-739; M. Donk & W. van Zoest, 2011, No control in orientation search: The effects of instruction on oculomotor selection in visual search. Vision Research, 51, 2156-2166). In order to test these two models, we performed an experiment in which participants were instructed to make a speeded eye movement to either of two orientation singletons presented among a homogeneous set of vertically oriented background lines. One singleton, the fixed singleton, remained identical across conditions, whereas the other singleton, the variable singleton, varied such that its orientation contrast relative to the background lines was either smaller or larger than that of the fixed singleton. The results showed that the proportion of eye movements directed toward the fixed singleton varied substantially depending on the orientation contrast of the variable singleton. A model assuming selection behavior to be determined by relative salience provided a better fit to the individual data than the absolute processing speed model. 
These findings suggest that relative salience rather than the visibility of an element is crucial in determining temporal variations in oculomotor selection behavior and that an explanation of visual selection behavior is insufficient without the concept of a salience map. |
Massimo Silvetti; Ruth Seurinck; Marlies E. Bochove; Tom Verguts The influence of the noradrenergic system on optimal control of neural plasticity Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 160, 2013. @article{Silvetti2013, Decision making under uncertainty is challenging for any autonomous agent. The challenge increases when the environment's stochastic properties change over time, i.e., when the environment is volatile. In order to efficiently adapt to volatile environments, agents must primarily rely on recent outcomes to quickly change their decision strategies; in other words, they need to increase their knowledge plasticity. On the contrary, in stable environments, knowledge stability must be preferred to preserve useful information against noise. Here we propose that in the mammalian brain, the locus coeruleus (LC) is one of the nuclei involved in volatility estimation and in the subsequent control of neural plasticity. During a reinforcement learning task, LC activation, measured by means of pupil diameter, coded both for environmental volatility and learning rate. We hypothesize that LC could be responsible, through norepinephrinic modulation, for adaptations to optimize decision making in volatile environments. We also suggest a computational model on the interaction between the anterior cingulate cortex (ACC) and LC for volatility estimation. |
Yin Su; Li-Lin Rao; Hong-Yue Sun; Xue-Lei Du; Xingshan Li; Shu Li Is making a risky choice based on a weighting and adding process? An eye-tracking investigation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1765–1780, 2013. @article{Su2013, The debate about whether making a risky choice is based on a weighting and adding process has a long history and is still unresolved. To address this long-standing controversy, we developed a comparative paradigm. Participants' eye movements in 2 risky choice tasks that required participants to choose between risky options in single-play and multiple-play conditions were separately compared with those in a baseline task in which participants naturally performed a deliberate calculation following a weighting and adding process. The results showed that, when participants performed the multiple-play risky choice task, their eye movements were similar to those in the baseline task, suggesting that participants may use a weighting and adding process to make risky choices in multiple-play conditions. In contrast, participants' eye movements were different in the single-play risky choice task versus the baseline task, suggesting that participants were not likely to use a weighting and adding process to make risky choices in single-play conditions and were more likely to use a heuristic process. We concluded that an expectation-based index for predicting risk preferences is applicable in multiple-play conditions but not in single-play conditions, implying the need to improve current theories that postulate the use of a heuristic process. |
Pei Sun; Justin L. Gardner; Mauro Costagli; Kenichi Ueno; R. Allen Waggoner; Keiji Tanaka; Kang Cheng In: Cerebral Cortex, vol. 23, no. 7, pp. 1618–1629, 2013. @article{Sun2013, Cells in the animal early visual cortex are sensitive to contour orientations and form repeated structures known as orientation columns. At the behavioral level, there exist 2 well-known global biases in orientation perception (oblique effect and radial bias) in both animals and humans. However, their neural bases are still under debate. To unveil how these behavioral biases are achieved in the early visual cortex, we conducted high-resolution functional magnetic resonance imaging experiments with a novel continuous and periodic stimulation paradigm. By inserting resting recovery periods between successive stimulation periods and introducing a pair of orthogonal stimulation conditions that differed by 90 degrees continuously, we focused on analyzing a blood oxygenation level-dependent response modulated by the change in stimulus orientation and reliably extracted orientation preferences of single voxels. We found that there are more voxels preferring horizontal and vertical orientations, a physiological substrate underlying the oblique effect, and that these over-representations of horizontal and vertical orientations are prevalent in the cortical regions near the horizontal- and vertical-meridian representations, a phenomenon related to the radial bias. Behaviorally, we also confirmed that there exists perceptual superiority for horizontal and vertical orientations around horizontal and vertical meridians, respectively. Our results, thus, refined the neural mechanisms of these 2 global biases in orientation perception. |
Megumi Suzuki; Jeremy M. Wolfe; Todd S. Horowitz; Yasuki Noguchi Apparent color-orientation bindings in the periphery can be influenced by feature binding in central vision Journal Article In: Vision Research, vol. 82, pp. 58–65, 2013. @article{Suzuki2013, A previous study reported the misbinding illusion in which visual features belonging to overlapping sets of items were erroneously integrated (Wu, Kanai, & Shimojo, 2004, Nature, 429, 262). In this illusion, central and peripheral portions of a transparent motion field combined color and motion in opposite fashions. When observers saw such stimuli, their perceptual color-motion bindings in the periphery were re-arranged in such a way as to accord with the bindings in the central region, resulting in erroneous color-motion pairings (misbinding) in peripheral vision. Here we show that this misbinding illusion is also seen in the binding of color and orientation. When the central field of a stimulus array was composed of objects that had coherent (regular) color-orientation pairings, subjective color-orientation bindings in the peripheral stimuli were automatically altered to match the coherent pairings of the central stimuli. Interestingly, the illusion was induced only when all items in the central field combined color and orientation in an orthogonal fashion (e.g. all red bars were horizontal and all green bars were vertical). If this orthogonality was disrupted (e.g. all red and green bars were horizontal), the central field lost its power to induce the misbinding illusion in the peripheral stimuli. The original misbinding illusion study proposed that the illusion stemmed from a perceptual extrapolation that resolved peripheral ambiguity with clear central vision. 
However, our present results indicate that visual analyses of the correlational structure between two features (color and orientation) are critical for the illusion to occur, suggesting a rapid integration of multiple featural cues in the human visual system. |
Bernard Marius Hart; Hannah Claudia Elfriede; Fanny Schmidt; Ingo Klein-Harmeyer; Wolfgang Einhäuser Attention in natural scenes: Contrast affects rapid visual processing and fixations alike Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–10, 2013. @article{tHart2013a, For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. Both paradigms operate on different time scales, and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences, and others to freely view the same target images for 3 s, while their gaze is tracked. In some stimuli, the target's contrast is modified (increased/decreased) and its background modified either in the same or in the opposite way. We find that increasing target contrast relative to the background increases fixations and detection alike, whereas decreasing target contrast and simultaneously increasing background contrast has little effect. Contrast increase for the whole image (target + background) improves detection, decrease worsens detection, whereas fixation probability remains unaffected by whole-image modifications. Object-unrelated local increase or decrease of contrast attracts gaze, but less than actual objects, supporting a precedence of objects over low-level features. Detection and fixation probability are correlated: the more likely a target is detected in one paradigm, the more likely it is fixated in the other. Hence, the link between overt and covert attention, which has been established in simple stimuli, transfers to more naturalistic scenarios. |
Bernard Marius Hart; Hannah C. E. F. Schmidt; Christine Roth; Wolfgang Einhäuser Fixations on objects in natural scenes: Dissociating importance from salience Journal Article In: Frontiers in Psychology, vol. 4, pp. 455, 2013. @article{tHart2013, The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's "importance" for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named ("common"/"important") or a rarely named ("rare"/"unimportant") object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. 
Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. |
Yusuke Uchida; Daisuke Kudoh; Takatoshi Higuchi; Masaaki Honda; Kazuyuki Kanosue Dynamic visual acuity in baseball players is due to superior tracking abilities Journal Article In: Medicine and Science in Sports and Exercise, vol. 45, no. 2, pp. 319–325, 2013. @article{Uchida2013, PURPOSE: Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in baseball players than in nonplayers. Although the better DVA of baseball players has been attributed to a better ability to track moving objects, it might be derived from the ability to perceive an object even in the presence of a great distance between the image on the retina and the fovea (retinal error). However, the ability to perceive moving visual stimuli has not been compared between baseball players and nonplayers. METHODS: To clarify this, we quantitatively measured abilities of eye movement and visual perception using moving Landolt C rings in baseball players and nonplayers. RESULTS: Baseball players could achieve high DVA with significantly faster eye movement at shorter latencies than nonplayers. There was no difference between baseball players and nonplayers in the ability to perceive the images of moving objects projected onto the retina. CONCLUSIONS: These results suggest that the better DVA of baseball players is primarily due to a better ability to track moving objects with their eyes rather than to improved perception of moving images on the retina. This skill is probably obtained through baseball training. |
Matteo Valsecchi; Matteo Toscani; Karl R. Gegenfurtner Perceived numerosity is reduced in peripheral vision Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 7–7, 2013. @article{Valsecchi2013, In four experiments we investigated the perception of numerosity in the peripheral visual field. We found that the perceived numerosity of a peripheral cloud of dots was judged to be lower than that of a central cloud of dots, particularly when the dots were highly clustered. Blurring the stimuli according to peripheral spatial frequency sensitivity did not abolish the effect and had little impact on numerosity judgments. In a dedicated control experiment we ruled out that the reduction in peripheral perceived numerosity is secondary to a reduction of perceived stimulus size. We suggest that visual crowding might be at the origin of the observed reduction in peripheral perceived numerosity, implying that numerosity could be partly estimated through the individuation of the elements populating the array. |
Marlies E. Bochove; Lise Van Der Haegen; Wim Notebaert; Tom Verguts Blinking predicts enhanced cognitive control Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 13, no. 2, pp. 346–354, 2013. @article{Bochove2013, Recent models have suggested an important role for neuromodulation in explaining trial-to-trial adaptations in cognitive control. The adaptation-by-binding model (Verguts & Notebaert, Psychological Review, 115(2), 518-525, 2008), for instance, suggests that increased cognitive control in response to conflict (e.g., incongruent flanker stimulus) is the result of stronger binding of stimulus, action, and context representations, mediated by neuromodulators like dopamine (DA) and/or norepinephrine (NE). We presented a flanker task and used the Gratton effect (smaller congruency effect following incongruent trials) as an index of cognitive control. We investigated the Gratton effect in relation to eye blinks (DA related) and pupil dilation (NE related). The results for pupil dilation were not unequivocal, but eye blinks clearly modulated the Gratton effect: The Gratton effect was enhanced after a blink trial, relative to after a no-blink trial, even when controlling for correlated variables. The latter suggests an important role for DA in cognitive control on a trial-to-trial basis. |
Benjamin W. Tatler; Yoriko Hirose; Sarah K. Finnegan; Riina Pievilainen; Clare Kirtley; Alan Kennedy Priorities for selection and representation in natural tasks Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–10, 2013. @article{Tatler2013, Selecting and remembering visual information is an active and competitive process. In natural environments, representations are tightly coupled to task. Objects that are task-relevant are remembered better due to a combination of increased selection for fixation and strategic control of encoding and/or retaining viewed information. However, it is not understood how physically manipulating objects when performing a natural task influences priorities for selection and memory. In this study, we compare priorities for selection and memory when actively engaged in a natural task with first-person observation of the same object manipulations. Results suggest that active manipulation of a task-relevant object results in a specific prioritization for object position information compared with other properties and compared with action observation of the same manipulations. Experiment 2 confirms that this spatial prioritization is likely to arise from manipulation rather than differences in spatial representation in real environments and the movies used for action observation. Thus, our findings imply that physical manipulation of task relevant objects results in a specific prioritization of spatial information about task-relevant objects, possibly coupled with strategic de-prioritization of colour memory for irrelevant objects. |
Laura E. Thomas Spatial working memory is necessary for actions to guide thought Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1974–1981, 2013. @article{Thomas2013a, Directed actions can play a causal role in cognition, shaping thought processes. What drives this cross-talk between action and thought? I investigated the hypothesis that representations in spatial working memory mediate interactions between directed actions and problem solving. Participants attempted to solve an insight problem while occasionally either moving their eyes in a pattern embodying the problem's solution or maintaining fixation. They simultaneously held either a spatial or verbal stimulus in working memory. Participants who moved their eyes in a pattern that embodied the solution were more likely to solve the problem, but only while also performing a verbal working memory task. Embodied guidance of insight was eliminated when participants were instead engaged in a spatial working memory task while moving their eyes, implying that loading spatial working memory prevented movement representations from influencing problem solving. These results point to spatial working memory as a mechanism driving embodied guidance of insight, suggesting that actions do not automatically influence problem solving. Instead, cross-talk between action and higher order cognition requires representations in spatial working memory. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Optimal sampling of visual information for lightness judgments Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 27, pp. 11163–11168, 2013. @article{Toscani2013a, The variable resolution and limited processing capacity of the human visual system require us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Selection of visual information for lightness judgements by eye movements Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–8, 2013. @article{Toscani2013, When judging the lightness of objects, the visual system has to take into account many factors such as shading, scene geometry, occlusions or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated it using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance. |
R. Blythe Towal; Milica Mormann; Christof Koch Simultaneous modeling of visual saliency and value computation improves predictions of economic choice Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 40, pp. E3858–E3867, 2013. @article{Towal2013, Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. |
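The additive variant among the four models compared in the entry above lends itself to a simple illustration. The sketch below is not the authors' implementation: the drift weights follow the reported one-third/two-thirds saliency-to-value ratio, but the threshold, noise level, time step, and function name are hypothetical placeholders. It simulates a race of drift-diffusion accumulators, one per alternative, where each drift is an additive mix of saliency and value and the first accumulator to reach threshold determines the choice:

```python
import random

def simulate_ddm_race(saliency, value, w_s=1/3, w_v=2/3,
                      threshold=1.0, noise_sd=0.1, dt=0.01, max_t=10.0,
                      rng=None):
    """Race of drift-diffusion accumulators, one per alternative.

    Each accumulator's drift is an additive mix of saliency and value
    (the one-third/two-thirds weighting reported in the paper); all
    other parameters are illustrative placeholder values. Returns the
    index of the chosen alternative and the decision time in seconds.
    """
    rng = rng or random.Random()
    n = len(saliency)
    x = [0.0] * n          # accumulated evidence per alternative
    t = 0.0
    while t < max_t:
        for i in range(n):
            drift = w_s * saliency[i] + w_v * value[i]
            # Euler step of the diffusion: drift plus scaled Gaussian noise
            x[i] += drift * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
            if x[i] >= threshold:
                return i, t  # first accumulator to cross wins
        t += dt
    # Timeout: fall back to the accumulator with the most evidence
    return max(range(n), key=lambda i: x[i]), t
```

For example, `simulate_ddm_race([0.2, 0.8], [0.9, 0.3])` pits a low-saliency/high-value item against a high-saliency/low-value one; under the two-thirds value weighting, the first item tends to win despite being less salient.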
Guanghan Song; Denis Pellerin; Lionel Granjon Different types of sounds influence gaze differently in videos Journal Article In: Journal of Eye Movement Research, vol. 6, no. 4, pp. 1–13, 2013. @article{Song2013, This paper presents an analysis of the effect of different types of sounds on visual gaze when a person is looking freely at videos, which would be helpful to predict eye position. In order to test the effect of sound, an audio-visual experiment was designed with two groups of participants, with audio-visual (AV) and visual (V) conditions. Using statistical tools, we analyzed the difference between eye position of participants in the AV and V conditions. We observed that the effect of sound is different depending on the kind of sound, and that the classes with human voice (i.e. speech, singer, human noise and singers) have the greatest effect. Furthermore, the results of the distance between sound source and eye position of the group with AV condition suggested that only particular types of sound attract human eye position to the sound source. Finally, an analysis of the fixation duration between AV and V conditions showed that participants in the AV condition moved their eyes more frequently than those in the V condition. |
Joo-Hyun Song; Patrick Bédard Allocation of attention for dissociated visual and motor goals Journal Article In: Experimental Brain Research, vol. 226, no. 2, pp. 209–219, 2013. @article{Song2013a, In daily life, selecting an object visually is closely intertwined with processing that object as a potential goal for action. Since visual and motor goals are typically identical, it remains unknown whether attention is primarily allocated to a visual target, a motor goal, or both. Here, we dissociated visual and motor goals using a visuomotor adaptation paradigm, in which participants reached toward a visual target using a computer mouse or a stylus pen, while the direction of the cursor was rotated 45° counter-clockwise from the direction of the hand movement. Thus, as visuomotor adaptation was accomplished, the visual target was dissociated from the movement goal. Then, we measured the locus of attention using an attention-demanding rapid serial visual presentation (RSVP) task, in which participants detected a pre-defined visual stimulus among the successive visual stimuli presented on either the visual target, the motor goal, or a neutral control location. We demonstrated that before visuomotor adaptation, participants performed better when the RSVP stream was presented at the visual target than at other locations. However, once visual and motor goals were dissociated following visuomotor adaptation, performance at the visual and motor goals was equated and better than performance at the control location. Therefore, we concluded that attentional resources are allocated both to visual target and motor goals during goal-directed reaching movements. |
Mingli Song; Dapeng Tao; Chun Chen; Jiajun Bu; Yezhou Yang Color-to-gray based on chance of happening preservation Journal Article In: Neurocomputing, vol. 119, pp. 222–231, 2013. @article{Song2013b, It is important to convert color images into grayscale ones for both commercial and scientific applications, such as reducing publication costs and enabling color-blind people to capture the visual content and semantics of color images. Recently, a dozen algorithms have been developed for color-to-gray conversion. However, none of them considers the visual attention consistency between the color image and the converted grayscale one. Therefore, these methods may fail to convey important visual information from the original color image to the converted grayscale image. Inspired by the Helmholtz principle (Desolneux et al. 2008 [16]) that "we immediately perceive whatever could not happen by chance", we propose a new algorithm for color-to-gray to solve this problem. In particular, we first define the Chance of Happening (CoH) to measure the attentional level of each pixel in a color image. Afterward, natural image statistics are introduced to estimate the CoH of each pixel. In order to preserve the CoH of the color image in the converted grayscale image, we finally cast color-to-gray conversion as a supervised dimension-reduction problem and present locally sliced inverse regression that can be efficiently solved by singular value decomposition. Experiments on both natural images and artificial pictures suggest (1) that the proposed approach makes the CoH of the color image and that of the converted grayscale image consistent and (2) that it is effective and efficient compared with representative baseline algorithms. In addition, it requires no human-computer interactions. |
Matthew J. Stainer; Kenneth C. Scott-Brown; Benjamin W. Tatler Behavioral biases when viewing multiplexed scenes: Scene structure and frames of reference for inspection Journal Article In: Frontiers in Psychology, vol. 4, pp. 624, 2013. @article{Stainer2013, Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate "sub-scenes." 
Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers. |
Denise Nadine Stephan; Iring Koch; Jessica Hendler; Lynn Huestegge Task switching, modality compatibility, and the supra-modal function of eye movements Journal Article In: Experimental Psychology, vol. 60, no. 2, pp. 90–99, 2013. @article{Stephan2013, Previous research suggested that specific pairings of stimulus and response modalities (visual-manual and auditory-vocal tasks) lead to better dual-task performance than other pairings (visual-vocal and auditory-manual tasks). In the present task-switching study, we further examined this modality compatibility effect and investigated the role of response modality by additionally studying oculomotor responses as an alternative to manual responses. Interestingly, the switch cost pattern revealed a much stronger modality compatibility effect for groups in which vocal and manual responses were combined as compared to a group involving vocal and oculomotor responses, where the modality compatibility effect was largely abolished. We suggest that in the vocal-manual response groups the modality compatibility effect is based on cross-talk of central processing codes due to preferred stimulus-response modality processing pathways, whereas the oculomotor response modality may be shielded against cross-talk due to the supra-modal functional importance of visual orientation. |
Jonas Knöll; M. Concetta Morrone; Frank Bremmer Spatio-temporal topography of saccadic overestimation of time Journal Article In: Vision Research, vol. 83, pp. 56–65, 2013. @article{Knoell2013, Rapid eye movements (saccades) induce visual misperceptions. A number of studies in recent years have investigated the spatio-temporal profiles of effects like saccadic suppression or perisaccadic mislocalization and revealed substantial functional similarities. Saccade-induced chronostasis describes the subjective overestimation of stimulus duration when the stimulus onset falls within a saccade. In this study we aimed to functionally characterize saccade-induced chronostasis in greater detail. Specifically we tested if chronostasis is influenced by or functionally related to saccadic suppression. In a first set of experiments, we measured the perceived duration of visual stimuli presented at different spatial positions as a function of presentation time relative to the saccade. We further compared perceived duration during saccades for isoluminant and luminant stimuli. Finally, we investigated whether or not saccade-induced chronostasis is dependent on the execution of a saccade itself. We show that chronostasis occurs across the visual field with a clear spatio-temporal tuning. Furthermore, we report chronostasis during simulated saccades, indicating that spurious retinal motion induced by the saccade is a prime origin of the phenomenon. |
Junpeng Lao; Luca Vizioli; Roberto Caldara Culture modulates the temporal dynamics of global/local processing Journal Article In: Culture and Brain, vol. 1, no. 2-4, pp. 158–174, 2013. @article{Lao2013, Cultural differences in the way individuals from Western Caucasian (WC) and East Asian (EA) societies perceive and attend to visual information have been consistently reported in recent years. WC observers favor and perceive most efficiently the salient, local visual information by directing attention to focal objects. In contrast, EA observers show a bias towards global information, by preferentially attending to elements in the background. However, the underlying neural mechanisms and the temporal dynamics of this striking cultural contrast have yet to be clarified. The combination of Navon figures, which contain both global and local features, and the measurement of neural adaptation constitute an ideal way to probe this issue. We recorded the electrophysiological signals of WC and EA observers while they actively matched culturally neutral geometric Navon shapes. In each trial, participants sequentially viewed and categorized an adapter shape followed by a target shape, as being identical, global congruent, local congruent, or different. We quantified the repetition suppression, a reduction in neural activity in stimulus sensitive regions following stimulus repetition, using a single-trial approach. A robust data-driven spatio-temporal analysis revealed at 80 ms a significant interaction between the culture of the observers and shape adaptation. EA observers showed sensitivity to global congruency on the attentional P1 component, whereas WC observers showed discrimination for global shapes at later stages. Our data revealed an early sensitivity to global and local shape categorization, which is modulated by culture. This neural tuning could underlie more complex behavioral differences observed across human populations. |
Jochen Laubrock; Anke Cajar; Ralf Engbert Control of fixation duration during scene viewing by interaction of foveal and peripheral processing Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 1–20, 2013. @article{Laubrock2013, Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by variation of a few parameters that were affected by specific filtering conditions. |
Ada Le; Matthias Niemeier A right hemisphere dominance for bimanual grasps Journal Article In: Experimental Brain Research, vol. 224, no. 2, pp. 263–273, 2013. @article{Le2013, To find points on the surface of an object that ensure a stable grasp, it would be most effective to employ one area in one cortical hemisphere. But grasping the object with both hands requires control through both hemispheres. To better understand the control mechanisms underlying this "bimanual grasping", here we examined how the two hemispheres coordinate their control processes for bimanual grasping depending on visual field. We asked if bimanual grasping involves both visual fields equally or one more than the other. To test this, participants fixated either to the left or right of an object and then grasped or pushed it off a pedestal. We found that when participants grasped the object in the right visual field, maximum grip aperture (MGA) was larger and more variable, and participants were slower to react and to show MGA compared to when they grasped the object in the left visual field. In contrast, when participants pushed the object we observed no comparable visual field effects. These results suggest that grasping with both hands, specifically the computation of grasp points on the object, predominantly involves the right hemisphere. Our study provides new insights into the interactions of the two hemispheres for grasping. |
Ada Le; Matthias Niemeier Left visual field preference for a bimanual grasping task with ecologically valid object sizes Journal Article In: Experimental Brain Research, vol. 230, pp. 187–196, 2013. @article{Le2013a, Grasping using two forelimbs in opposition to one another is evolutionary older than the hand with an opposable thumb (Whishaw and Coles in Behav Brain Res 77:135–148, 1996); yet, the mechanisms for bimanual grasps remain unclear. Similar to unimanual grasping, the localization of matching stable grasp points on an object is computationally expensive and so it makes sense for the signals to converge in a single cortical hemisphere. Indeed, bimanual grasps are faster and more accurate in the left visual field, and are disrupted if there is transcranial stimulation of the right hemisphere (Le and Niemeier in Exp Brain Res 224:263–273, 2013; Le et al. in Cereb Cortex. doi:10.1093/cercor/bht115, 2013). However, research so far has tested the right hemisphere dominance based on small objects only, which are usually grasped with one hand, whereas bimanual grasping is more commonly used for objects that are too big for a single hand. Because grasping large objects might involve different neural circuits than grasping small objects (Grol et al. in J Neurosci 27:11877–11887, 2007), here we tested whether a left visual field/right hemisphere dominance for bimanual grasping exists with large and thus more ecologically valid objects or whether the right hemisphere dominance is a function of object size. We asked participants to fixate to the left or right of an object and to grasp the object with the index and middle fingers of both hands. Consistent with previous observations, we found that for objects in the left visual field, the maximum grip apertures were scaled closer to the object width and were smaller and less variable, than for objects in the right visual field. 
Our results demonstrate that bimanual grasping is predominantly controlled by the right hemisphere, even in the context of grasping larger objects. |
Alwine Lenzner; Wolfgang Schnotz; Andreas Müller The role of decorative pictures in learning Journal Article In: Instructional Science, vol. 41, no. 5, pp. 811–831, 2013. @article{Lenzner2013, Three experiments with students from 7th and 8th grade were performed to investigate the effects of decorative pictures in learning as compared to instructional pictures. Pictures were considered as instructional, when they were primarily informative, and as decorative, when they were primarily aesthetically appealing. The experiments investigated, whether and to what extent decorative pictures affect the learner's distribution of attention, whether they have an effect on the affective and motivational state and whether they affect the learning outcomes. The first experiment indicated with eye-tracking methodology that decorative pictures receive only a little initial attention as part of the learner's initial orientation and are largely ignored afterwards, which suggests that they have only a minor distracting effect, if any. The second experiment showed that despite the small amount of attention they receive, decorative pictures seem to induce better mood, alertness and calmness in learners. The third experiment indicated that decorative pictures did not intensify students' situational interest, but reduced perceived difficulty of the learning material. Regarding outcomes of learning, decorative pictures were altogether neither harmful nor beneficial for learning. However, they moderated the beneficial effect of instructional pictures, in essence the multimedia effect. The moderating effect was especially pronounced when learners had lower prior knowledge. The findings are discussed from the perspective of cognitive, affective and motivational psychology. Perspectives of further research are pointed out. |
Carly J. Leonard; Benjamin M. Robinson; Samuel T. Kaiser; Britta Hahn; Clara McClenon; Alexander N. Harvey; Steven J. Luck; James M. Gold Testing sensory and cognitive explanations of the antisaccade deficit in schizophrenia Journal Article In: Journal of Abnormal Psychology, vol. 122, no. 4, pp. 1111–1120, 2013. @article{Leonard2013, Recent research has suggested that people with schizophrenia (PSZ) have sensory deficits, especially in the magnocellular pathway, and this has led to the proposal that dysfunctional sensory processing may underlie higher-order cognitive deficits. Here we test the hypothesis that the antisaccade deficit in PSZ reflects dysfunctional magnocellular processing rather than impaired cognitive processing, as indexed by working memory capacity. This is a plausible hypothesis because oculomotor regions have direct magnocellular inputs, and the stimuli used in most antisaccade tasks strongly activate the magnocellular visual pathway. In the current study, we examined both prosaccade and antisaccade performance in PSZ (N = 22) and matched healthy control subjects (HCS; N = 22) with Gabor stimuli designed to preferentially activate the magnocellular pathway, the parvocellular pathway, or both pathways. We also measured working memory capacity. PSZ exhibited impaired antisaccade performance relative to HCS across stimulus types, with impairment even for stimuli that minimized magnocellular activation. Although both sensory thresholds and working memory capacity were impaired in PSZ, only working memory capacity was correlated with antisaccade accuracy, consistent with a cognitive rather than sensory origin for the antisaccade deficit. |
Ute Leonards; Samantha Stone; Christine Mohr Line bisection by eye and by hand reveal opposite biases Journal Article In: Experimental Brain Research, vol. 228, no. 4, pp. 513–525, 2013. @article{Leonards2013, The vision-for-action literature favours the idea that the motor output of an action, whether manual or oculomotor, leads to similar results regarding object handling. Findings on line bisection performance challenge this idea: healthy individuals bisect lines manually to the left of centre and to the right of centre when using eye fixation. If these opposite biases for manual and oculomotor action reflect more universal compensatory mechanisms that cancel each other out to enhance overall accuracy, one would expect comparable opposite biases for other material. In the present study, we report on three independent experiments in which we tested line bisection (by hand, by eye fixation) not only for solid lines, but also for letter lines; the latter, when bisected manually, are known to result in a rightward bias. Accordingly, we expected a leftward bias for letter lines when bisected via eye fixation. Analysis of bisection biases provided evidence for this idea: manual bisection was more rightward for letter as compared to solid lines, while bisection by eye fixation was more leftward for letter as compared to solid lines. Support for the eye fixation observation was particularly obvious in two of the three studies, for which comparability between eye and hand action was increasingly adjusted (paper-pencil versus touch screen for manual action). These findings question the assumption that ocular motor and manual output are always inter-changeable, but rather suggest that at least for some situations ocular motor and manual output biases are orthogonal to each other, possibly balancing each other out. |
Benjamin D. Lester; Paul Dassonville Shifts of visuospatial attention do not cause the spatial distortions of the Roelofs effect Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 4–4, 2013. @article{Lester2013, When a visible frame is offset left or right of an observer's objective midline, subjective midline is pulled toward the frame's center, resulting in an illusion of perceived space known as the Roelofs effect. However, a large frame is not necessary to generate the effect; even a small peripheral stimulus is sufficient, raising the possibility that the effect could be brought about by any stimulus that draws attention away from the midline. To assess the relationship between attention and distortions of perceived space, we adopted a paradigm that included a spatial cue that attracted the participant's attention, and an occasional probe whose location was to be reported. If shifts of attention cause the Roelofs effect, the probe's perceived location should vary with the locus of attention. Exogenous attentional cues caused a Roelofs-like effect, but these cues created an asymmetry in the visual display that may have driven the effect directly. In contrast, there was no mislocation after endogenous cues that contained no asymmetry in the visual display. A final experiment used color-contingent attentional cues to eliminate the confound between cue location and asymmetry in the visual display, and provided a clear demonstration that the Roelofs effect is caused by an asymmetric visual display, independent of any shift of attention. |
Joshua Levy; Tom Foulsham; Alan Kingstone Monsters are people too Journal Article In: Biology Letters, vol. 9, pp. 1–4, 2013. @article{Levy2013, Animals, including dogs, dolphins, monkeys and man, follow gaze. What mediates this bias towards the eyes? One hypothesis is that primates possess a distinct neural module that is uniquely tuned for the eyes of others. An alternative explanation is that configural face processing drives fixations to the middle of people's faces, which is where the eyes happen to be located. We distinguish between these two accounts. Observers were presented with images of people, non-human creatures with eyes in the middle of their faces ('humanoids') or creatures with eyes positioned elsewhere ('monsters'). There was a profound and significant bias towards looking early and often at the eyes of humans and humanoids and also, critically, at the eyes of monsters. These findings demonstrate that the eyes, and not the middle of the head, are being targeted by the oculomotor system. |