All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2012 |
Silke Paulmann; Debra Titone; Marc D. Pell How emotional prosody guides your way: Evidence from eye movements Journal Article In: Speech Communication, vol. 54, no. 1, pp. 92–107, 2012. @article{Paulmann2012, This study investigated cross-modal effects of emotional voice tone (prosody) on face processing during instructed visual search. Specifically, we evaluated whether emotional prosodic cues in speech have a rapid, mandatory influence on eye movements to an emotionally-related face, and whether these effects persist as semantic information unfolds. Participants viewed an array of six emotional faces while listening to instructions spoken in an emotionally congruent or incongruent prosody (e.g., "Click on the happy face" spoken in a happy or angry voice). The duration and frequency of eye fixations were analyzed when only prosodic cues were emotionally meaningful (pre-emotional label window: "Click on the/..."), and after emotional semantic information was available (post-emotional label window: ".../happy face"). In the pre-emotional label window, results showed that participants made immediate use of emotional prosody, as reflected in significantly longer and more frequent fixations to emotionally congruent versus incongruent faces. However, when explicit semantic information in the instructions became available (post-emotional label window), the influence of prosody on measures of eye gaze was relatively minimal. Our data show that emotional prosody has a rapid impact on gaze behavior during social information processing, but that prosodic meanings can be overridden by semantic cues when linguistic information is task relevant. |
Brennan R. Payne; Elizabeth A. L. Stine-Morrow Aging, parafoveal preview, and semantic integration in sentence processing: Testing the cognitive workload of wrap-up Journal Article In: Psychology and Aging, vol. 27, no. 3, pp. 638–649, 2012. @article{Payne2012, The current study investigated the degree to which semantic-integration processes (“wrap-up”) during sentence understanding demand attentional resources by examining the effects of clause and sentence wrap-up on the parafoveal preview benefit (PPB) in younger and older adults. The PPB is defined as facilitation in processing word N + 1, based on information extracted while the eyes are fixated on word N, and is known to be reduced by processing difficulty at word N. Participants read passages in which word N occurred in a sentence-internal, clause-final, or sentence-final position, and a gaze-contingent boundary-change paradigm was used to manipulate the information available in parafoveal vision for word N + 1. Wrap-up effects were found on word N for both younger and older adults. Early-pass measures (first-fixation duration and single-fixation duration) of the PPB on word N + 1 were reduced by clause wrap-up and sentence wrap-up on word N, with similar effects for younger and older adults. However, for intermediate (gaze duration) and later-pass measures (regression-path duration, and selective regression-path duration), sentence wrap-up (but not clause wrap-up) on word N differentially reduced the PPB of word N + 1 for older adults. These findings suggest that wrap-up is demanding and may be less efficient with advancing age, resulting in a greater cognitive processing load for older readers. |
M. Mackay; Moran Cerf; Christof Koch Evidence for two distinct mechanisms directing gaze in natural scenes Journal Article In: Journal of Vision, vol. 12, no. 4, pp. 1–12, 2012. @article{Mackay2012, Various models have been proposed to explain the interplay between bottom-up and top-down mechanisms in driving saccades rapidly to one or a few isolated targets. We investigate this relationship using eye-tracking data from subjects viewing natural scenes to test attentional allocation to high-level objects within a mathematical decision-making framework. We show the existence of two distinct types of bottom-up saliency to objects within a visual scene, which disappear within a few fixations, and modification of this saliency by top-down influences. Our analysis reveals a subpopulation of early saccades, which are capable of accurately fixating salient targets after prior fixation within the same image. These data can be described quantitatively in terms of bottom-up saliency, including an explicit face channel, weighted by top-down influences, determining the mean rate of rise of a decision-making model to a threshold that triggers a saccade. These results are compatible with a rapid subcortical pathway generating accurate saccades to salient targets after analysis by cortical mechanisms. |
Kevin J. MacKenzie Vergence and accommodation to multiple-image-plane stereoscopic displays: “Real world” responses with practical image-plane separations? Journal Article In: Journal of Electronic Imaging, vol. 21, no. 1, pp. 1–8, 2012. @article{MacKenzie2012, Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One solution is to distribute image intensity across a number of widely spaced image planes—a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters (D, the reciprocal of distance in meters), suggesting that a small number of image planes could eliminate vergence-accommodation conflicts over a large range of simulated distances. Evidence exists, however, of systematic differences between accommodation responses to binocular and monocular stimuli when the stimulus to accommodation is degraded, or at an incorrect distance. We examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to changes in depth specified by depth filtering, using image-plane separations of 0.6 to 1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ∼0.6 to 0.9 D, but differed thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display. |
Annmarie MacNamara; Joseph Schmidt; Gregory J. Zelinsky; Greg Hajcak Electrocortical and ocular indices of attention to fearful and neutral faces presented under high and low working memory load Journal Article In: Biological Psychology, vol. 91, no. 3, pp. 349–356, 2012. @article{MacNamara2012, Working memory load reduces the late positive potential (LPP), consistent with the notion that functional activation of the DLPFC attenuates neural indices of sustained attention. Visual attention also modulates the LPP. In the present study, we sought to determine whether working memory load might exert its influence on ERPs by reducing fixations to arousing picture regions. We simultaneously recorded eye-tracking and EEG while participants performed a working memory task interspersed with the presentation of task-irrelevant fearful and neutral faces. As expected, fearful compared to neutral faces elicited larger N170 and LPP amplitudes; in addition, working memory load reduced the N170 and the LPP. Participants made more fixations to arousing regions of neutral faces and faces presented under high working memory load. Therefore, working memory load did not induce avoidance of arousing picture regions and visual attention cannot explain load effects on the N170 and LPP. |
Adrian M. Madsen; Adam M. Larson; Lester C. Loschky; N. Sanjay Rebello Differences in visual attention between those who correctly and incorrectly answer physics problems Journal Article In: Physical Review Special Topics - Physics Education Research, vol. 8, pp. 010122, 2012. @article{Madsen2012, This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems where the necessary information to solve the problem was contained in a diagram. The problems also contained areas consistent with a novicelike response and areas of high perceptual salience. Participants ranged from those who had only taken one high school physics course to those who had completed a Physics Ph.D. We found that participants who answered correctly spent a higher percentage of time looking at the relevant areas of the diagram, and those who answered incorrectly spent a higher percentage of time looking in areas of the diagram consistent with a novicelike answer. Thus, when solving physics problems, top-down processing plays a key role in guiding visual selective attention either to thematically relevant areas or novicelike areas depending on the accuracy of a student's physics knowledge. This result has implications for the use of visual cues to redirect individuals' attention to relevant portions of the diagrams and may potentially influence the way they reason about these problems. |
Femke Maij; Maria Matziridi; Jeroen B. J. Smeets; Eli Brenner Luminance contrast in the background makes flashes harder to detect during saccades Journal Article In: Vision Research, vol. 60, pp. 22–27, 2012. @article{Maij2012, To explore a visual scene we make many fast eye movements (saccades) every second. During those saccades the image of the world shifts rapidly across our retina. These shifts are normally not detected, because perception is suppressed during saccades. In this paper we study the origin of this saccadic suppression by examining the influence of luminance borders in the background on the perception of flashes presented near the time of saccades in a normally illuminated room. We used different types of backgrounds: either with isoluminant red and green areas or with black and white areas. We found that the ability to perceive flashes that were presented during saccades was suppressed when there were luminance borders in the background, but not when there were isoluminant color borders in the background. Thus, masking by moving luminance borders plays an important role in saccadic suppression. The perceived positions of detected flashes were only influenced by the borders between the areas in the background when the flashes were presented before or after the saccades. Moreover, the influence did not depend on the kind of contrast forming the border. Thus, the masking effect of moving luminance borders does not appear to play an important role in the mislocalization of flashes that are presented near the time of saccades. |
Tal Seidel Malkinson; Ayelet McKyton; Ehud Zohary Motion adaptation reveals that the motion vector is represented in multiple coordinate frames Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 1–11, 2012. @article{Malkinson2012, Accurately perceiving the velocity of an object during smooth pursuit is a complex challenge: although the object is moving in the world, it is almost still on the retina. Yet we can perceive the veridical motion of a visual stimulus in such conditions, suggesting a nonretinal representation of the motion vector. To explore this issue, we studied the frames of representation of the motion vector by evoking the well known motion aftereffect during smooth-pursuit eye movements (SPEM). In the retinotopic configuration, due to an accompanying smooth pursuit, a stationary adapting random-dot stimulus was actually moving on the retina. Motion adaptation could therefore only result from motion in retinal coordinates. In contrast, in the spatiotopic configuration, the adapting stimulus moved on the screen but was practically stationary on the retina due to a matched SPEM. Hence, adaptation here would suggest a representation of the motion vector in spatiotopic coordinates. We found that exposure to spatiotopic motion led to significant adaptation. Moreover, the degree of adaptation in that condition was greater than the adaptation induced by viewing a random-dot stimulus that moved only on the retina. Finally, pursuit of the same target, without a random-dot array background, yielded no adaptation. Thus, in our experimental conditions, adaptation is not induced by the SPEM per se. Our results suggest that motion computation is likely to occur in parallel in two distinct representations: a low-level, retinal-motion dependent mechanism and a high-level representation, in which the veridical motion is computed through integration of information from other sources. |
Felix J. Mercer Moss; Roland J. Baddeley; Nishan Canagarajah Eye movements to natural images as a function of sex and personality Journal Article In: PLoS ONE, vol. 7, no. 11, pp. e47870, 2012. @article{Moss2012, Women and men are different. As humans are highly visual animals, these differences should be reflected in the pattern of eye movements they make when interacting with the world. We examined fixation distributions of 52 women and men while viewing 80 natural images and found systematic differences in their spatial and temporal characteristics. The most striking of these was that women looked away from, and usually below, many objects of interest, particularly when rating images in terms of their potency. We also found reliable differences correlated with the images' semantic content, the observers' personality, and how the images were semantically evaluated. Information theoretic techniques showed that many of these differences increased with viewing time. These effects were not small: the fixations to a single action or romance film image allow the classification of the sex of an observer with 64% accuracy. While men and women may live in the same environment, what they see in this environment is reliably different. Our findings have important implications for both past and future eye movement research while confirming the significant role individual differences play in visual attention. |
Albert Moukheiber; Gilles Rautureau; Fernando Perez-Diaz; Roland Jouvent; Antoine Pelissolo Gaze behaviour in social blushers Journal Article In: Psychiatry Research, vol. 200, no. 2-3, pp. 614–619, 2012. @article{Moukheiber2012, Gaze aversion could be a central component of social phobia. Fear of blushing is a symptom of social anxiety disorder (SAD) but is not yet described as a specific diagnosis in psychiatric classifications. Our research consists of comparing gaze aversion in SAD participants with or without fear of blushing in front of pictures of different emotional faces using an eye tracker. Twenty-six participants with DSM-IV SAD and expressed fear of blushing (SAD+FB) were recruited in addition to twenty-five participants with social phobia and no fear of blushing (SAD-FB). Twenty-four age- and sex-matched healthy participants constituted the control group. We studied the number of fixations and the dwell time in the eyes area on the pictures. The results showed gaze avoidance in the SAD-FB group when compared to controls and when compared to the SAD+FB group. However, we found no significant difference between SAD+FB and controls. We also observed a correlation between the severity of the phobia and the degree of gaze avoidance across groups. These findings seem to support the claim that social phobia is a heterogeneous disorder. Further research is advised to decide whether fear of blushing can constitute a subtype with specific behavioral characteristics. |
Ryan E. B. Mruczek; David L. Sheinberg Stimulus selectivity and response latency in putative inhibitory and excitatory neurons of the primate inferior temporal cortex Journal Article In: Journal of Neurophysiology, vol. 108, no. 10, pp. 2725–2736, 2012. @article{Mruczek2012, The cerebral cortex is composed of many distinct classes of neurons. Numerous studies have demonstrated corresponding differences in neuronal properties across cell types, but these comparisons have largely been limited to conditions outside of awake, behaving animals. Thus the functional role of the various cell types is not well understood. Here, we investigate differences in the functional properties of two widespread and broad classes of cells in inferior temporal cortex of macaque monkeys: inhibitory interneurons and excitatory projection cells. Cells were classified as putative inhibitory or putative excitatory neurons on the basis of their extracellular waveform characteristics (e.g., spike duration). Consistent with previous intracellular recordings in cortical slices, putative inhibitory neurons had higher spontaneous firing rates and higher stimulus-evoked firing rates than putative excitatory neurons. Additionally, putative excitatory neurons were more susceptible to spike waveform adaptation following very short interspike intervals. Finally, we compared two functional properties of each neuron's stimulus-evoked response: stimulus selectivity and response latency. First, putative excitatory neurons showed stronger stimulus selectivity compared with putative inhibitory neurons. Second, putative inhibitory neurons had shorter response latencies compared with putative excitatory neurons. Selectivity differences were maintained and latency differences were enhanced during a visual search task emulating more natural viewing conditions. Our results suggest that short-latency inhibitory responses are likely to sculpt visual processing in excitatory neurons, yielding a sparser visual representation. |
Marion G. Müller; Arvid Kappas; Bettina Olk Perceiving press photography: A new integrative model, combining iconology with psychophysiological and eye-tracking methods Journal Article In: Visual Communication, vol. 11, no. 3, pp. 307–328, 2012. @article{Mueller2012, Any analysis of how mass-mediated visuals are perceived and interpreted in multimodal contexts should be informed by a scientific understanding of the biological constraints on visual processing, as well as a solid culturally aware visual communication approach. This article focuses on the interdisciplinary combination of three methods – iconology, a qualitative method of visual analysis targeted at the meanings of visuals and based in the humanities, and eye-tracking and psychophysiological reaction measurement, both based in experimental psychology. The authors propose a Visual Communication Process Model as an integrative means for connecting different facets of the communication processes involved in visual mass communication. The goal of this new model is to widen and sharpen the focus on explaining (a) meaning-attribution processes, (b) visual perception and attention processes, and (c) psychophysiological reactions to mass-mediated visuals, illustrated in this article with examples of press photography. |
Peter R. Murphy; Ian H. Robertson; Darren Allen; Robert Hester; Redmond G. O'Connell An electrophysiological signal that precisely tracks the emergence of error awareness Journal Article In: Frontiers in Human Neuroscience, vol. 6, pp. 65, 2012. @article{Murphy2012, Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focussed on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. |
Andriy Myachykov; Simon Garrod; Christoph Scheepers Determinants of structural choice in visually situated sentence production Journal Article In: Acta Psychologica, vol. 141, no. 3, pp. 304–315, 2012. @article{Myachykov2012, Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. |
Andriy Myachykov; Dominic Thompson; Simon Garrod; Christoph Scheepers Referential and visual cues to structural choice in visually situated sentence production Journal Article In: Frontiers in Psychology, vol. 2, pp. 396, 2012. @article{Myachykov2012a, We investigated how conceptually informative (referent preview) and conceptually uninformative (pointer to referent's location) visual cues affect structural choice during production of English transitive sentences. Cueing the Agent or the Patient prior to presenting the target-event reliably predicted the likelihood of selecting this referent as the sentential Subject, triggering, correspondingly, the choice between active and passive voice. Importantly, there was no difference in the magnitude of the general Cueing effect between the informative and uninformative cueing conditions, suggesting that attentionally driven structural selection relies on a direct automatic mapping mechanism from attentional focus to the Subject's position in a sentence. This mechanism is, therefore, independent of accessing conceptual, and possibly lexical, information about the cued referent provided by referent preview. |
Marnix Naber; Maximilian Hilger; Wolfgang Einhäuser Animal detection and identification in natural scenes: Image statistics and emotional valence Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–24, 2012. @article{Naber2012, Humans process natural scenes rapidly and accurately. Low-level image features and emotional valence affect such processing but have mostly been studied in isolation. At which processing stage these factors operate and how they interact has remained largely unaddressed. Here, we briefly presented natural images and asked observers to report the presence or absence of an animal (detection), species of the detected animal (identification), and their confidence. In a second experiment, the same observers rated images with respect to their emotional affect and estimated their anxiety when imagining a real-life encounter with the depicted animal. We found that detection and identification improved with increasing image luminance, background contrast, animal saturation, and luminance plus color contrast between target and background. Surprisingly, animals associated with lower anxiety were detected faster and identified with higher confidence, and emotional affect was a better predictor of performance than anxiety. Pupil size correlated with detection, identification, and emotional valence judgments at different time points after image presentation. Remarkably, images of threatening animals induced smaller pupil sizes, and observers with higher mean anxiety ratings had smaller pupils on average. In sum, rapid visual processing depends on contrasts between target and background features rather than overall visual context, is negatively affected by anxiety, and finds its processing stages differentially reflected in the pupillary response. |
Kazuyo Nakabayashi; Toby J. Lloyd-Jones; Natalie Butcher; Chang Hong Liu Independent influences of verbalization and race on the configural and featural processing of faces: A behavioral and eye movement study Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 1, pp. 61–77, 2012. @article{Nakabayashi2012, Describing a face in words can either hinder or help subsequent face recognition. Here, the authors examined the relationship between the benefit from verbally describing a series of faces and the same-race advantage (SRA) whereby people are better at recognizing unfamiliar faces from their own race as compared with those from other races. Verbalization and the SRA influenced face recognition independently, as evident on both behavioral (Experiment 1) and eye movement measures (Experiment 2). The findings indicate that verbalization and the SRA each recruit different types of configural processing, with verbalization modulating face learning and the SRA modulating both face learning and recognition. Eye movement patterns demonstrated greater feature sampling for describing as compared with not describing faces and for other-race as compared with same-race faces. In both cases, sampling of the eyes, nose, and mouth played a major role in performance. The findings support a single process account whereby verbalization can influence perceptual processing in a flexible and yet fundamental way through shifting one's processing orientation. |
Yaqing Niu; Rebecca M. Todd; Adam K. Anderson Affective salience can reverse the effects of stimulus-driven salience on eye movements in complex scenes Journal Article In: Frontiers in Psychology, vol. 3, pp. 336, 2012. @article{Niu2012a, In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven ("bottom-up") and cognitive/affective ("top-down") factors remains controversial: Can affective salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Is there any difference between negative and positive scenes in terms of their influence on attention deployment? Here we examined the impact of affective factors on eye movement behavior, to understand the competition between visual stimulus-driven salience and affective salience and how they affect gaze allocation in complex scene viewing. Building on our previous research, we compared predictions generated by a visual salience model with measures indexing participant-identified emotionally meaningful regions of each image. To examine how eye movement behavior differs for negative, positive, and neutral scenes, we examined the influence of affective salience in capturing attention according to emotional valence. Taken together, our results show that affective salience can override stimulus-driven salience and overall emotional valence can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control. |
Yaqing Niu; Rebecca M. Todd; Matthew Kyan; Adam K. Anderson Visual and emotional salience influence eye movements Journal Article In: ACM Transactions on Applied Perception, vol. 9, no. 3, pp. 1–18, 2012. @article{Niu2012, In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven (bottom-up) and cognitive/affective (top-down) factors remains controversial: How well does the classic visual salience model account for gaze locations? Can emotional salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Here we compared Itti and Koch's [2000] and the Spectral Residual (SR) visual salience models and explored the impact of visual salience and emotional salience on eye movement behavior, to understand the competition between visual salience and emotional salience and how they affect gaze allocation in complex scene viewing. Our results show the insufficiency of visual salience models in predicting fixation. Emotional salience can override visual salience and can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control. |
Laura R. Novick; Andrew T. Stull; Kefyn M. Catley Reading phylogenetic trees: The effects of tree orientation and text processing on comprehension Journal Article In: BioScience, vol. 62, no. 8, pp. 757–764, 2012. @article{Novick2012, Although differently formatted cladograms (hierarchical diagrams depicting evolutionary relationships among taxa) depict the same information, they may not be equally easy to comprehend. Undergraduate biology students attempted to translate cladograms from the diagonal to the rectangular format. The "backbone" line of each diagonal cladogram was slanted either up or down to the right. Eye movement analyses indicated that the students had a general bias to scan from left to right. Their scanning direction also depended on the orientation of the "backbone" line, resulting in upward or downward scanning, following the directional slant of the line. Because scanning down facilitates correct interpretation of the nested relationships, translation accuracy was higher for the down than for the up cladograms. Unfortunately, most diagonal cladograms in textbooks are in the upward orientation. This probably impairs students' success at tree thinking (i.e., interpreting and reasoning about evolutionary relationships depicted in cladograms), an important twenty-first century skill. |
Lauri Nummenmaa; Jari K. Hietanen; Pekka Santtila; Jukka Hyönä Gender and visibility of sexual cues influence eye movements while viewing faces and bodies Journal Article In: Archives of Sexual Behavior, vol. 41, no. 6, pp. 1439–1451, 2012. @article{Nummenmaa2012, Faces and bodies convey important information for the identification of potential sexual partners, yet clothing typically covers many of the bodily cues relevant for mating and reproduction. In this eye tracking study, we assessed how men and women viewed nude and clothed, same and opposite gender human figures. We found that participants inspected the nude bodies more thoroughly. First fixations landed almost always on the face, but were subsequently followed by viewing of the chest and pelvic regions. When viewing nude images, fixations were biased away from the face towards the chest and pelvic regions. Fixating these regions was also associated with elevated physiological arousal. Overall, men spent more time looking at female than male stimuli, whereas women looked equally long at male and female stimuli. In comparison to women, men spent relatively more time looking at the chests of nude female stimuli whereas women spent more time looking at the pelvic/genital region of male stimuli. We propose that the augmented and gender-contingent visual scanning of nude bodies reflects selective engagement of the visual attention circuits upon perception of signals relevant to choosing a sexual partner, which supports mating and reproduction. |
Antje Nuthmann; John M. Henderson Using CRISP to model global characteristics of fixation durations in scene viewing and reading with a common mechanism Journal Article In: Visual Cognition, vol. 20, no. 4-5, pp. 457–494, 2012. @article{Nuthmann2012, Fixation durations vary when we read text or inspect a natural scene. Past studies suggest that this variability is controlled by the visual input available within the current fixation. The present study directly compared the control of fixation durations in reading and scene viewing in a common experimental paradigm, and attempted to account for the control of these durations within a common modelling framework using the CRISP architecture (Nuthmann, Smith, Engbert, & Henderson, 2010). In the experimental paradigm, a stimulus onset delay paradigm was used. A visual mask was presented at the beginning of critical fixations, which delayed the onset of the text or scene, and the length of the delay was varied. Irrespective of task, two populations of fixation durations were observed. One population of fixations was under the direct control of the current stimulus, increasing in duration as delay increased. A second population of fixation durations was relatively constant across delay. Additional task-specific quantitative differences in the adjustment of fixation durations were found. The pattern of mixed control of fixation durations obtained for scene viewing has been previously simulated with the CRISP model of fixation durations. In the present work, the model's generality was tested by applying its architecture to the text reading data, with task-specific influences realized by different parameter settings. The results of the numerical simulations suggest that global characteristics of fixation durations in scene viewing and reading can be explained by a common mechanism. |
Amanda F. Moates; Elena I. Ivleva; Hugh B. O'Neill; Nithin Krishna; C. Munro Cullum; Gunvant K. Thaker; Carol A. Tamminga Predictive pursuit association with deficits in working memory in psychosis Journal Article In: Biological Psychiatry, vol. 72, no. 9, pp. 752–757, 2012. @article{Moates2012, Background: Deficits in smooth pursuit eye movements are an established phenotype for schizophrenia (SZ) and are being investigated as a potential liability marker for bipolar disorder. Although the molecular determinants of this deficit are still unclear, research has verified deficits in predictive pursuit mechanisms in SZ. Because predictive pursuit might depend on the working memory system, we have hypothesized a relationship between the two in healthy control subjects (HC) and SZ and here examine whether it extends to psychotic bipolar disorder (BDP). Methods: Volunteers with SZ (n = 38), BDP (n = 31), and HC (n = 32) performed a novel eye movement task to assess predictive pursuit as well as a standard visuospatial measure of working memory. Results: Individuals with SZ and BDP both showed reduced predictive pursuit gain compared with HC (p <.05). Moreover, each patient group showed worse performance in visuospatial working memory compared with control subjects (p <.05). A strong correlation (r =.53 |
Jim M. Monti; Charles H. Hillman; Neal J. Cohen Aerobic fitness enhances relational memory in preadolescent children: The FITKids randomized control trial Journal Article In: Hippocampus, vol. 22, no. 9, pp. 1876–1882, 2012. @article{Monti2012, It is widely accepted that aerobic exercise enhances hippocampal plasticity. Often, this plasticity co-occurs with gains in hippocampal-dependent memory. Cross-sectional work investigating this relationship in preadolescent children has found behavioral differences in higher versus lower aerobically fit participants for tasks measuring relational memory, which is known to be critically tied to hippocampal structure and function. The present study tested whether similar differences would arise in a clinical intervention setting where a group of preadolescent children were randomly assigned to a 9-month after school aerobic exercise intervention versus a wait-list control group. Performance measures included eye-movements as a measure of memory, based on recent work linking eye-movement indices of relational memory to the hippocampus. Results indicated that only children in the intervention increased their aerobic fitness. Compared to the control group, those who entered the aerobic exercise program displayed eye-movement patterns indicative of superior memory for face-scene relations, with no differences observed in memory for individual faces. The results of this intervention study provide clear support for the proposed linkage among the hippocampus, relational memory, and aerobic fitness, as well as illustrating the sensitivity of eye-movement measures as a means of assessing memory. |
Dan Morrow; Laura D'Andrea; Elizabeth A. L. Stine-Morrow; Matthew Shake; Sven Bertel; Jessie Chin; Katie Kopren; Xuefei Gao; Thembi Conner-Garcia; James Graumlich; Michael Murray Comprehension of multimedia health information among older adults with chronic illness Journal Article In: Visual Communication, vol. 11, no. 3, pp. 347–362, 2012. @article{Morrow2012, The authors explored knowledge effects on comprehension of multimedia health information by older adults (age 60 or older). Participants viewed passages about hypertension, with text accompanied by relevant and irrelevant pictures, and then answered questions about the passage. Fixations on text and pictures were measured by eye-tracking. Participants with more knowledge of hypertension understood the passages better. This advantage was related to how they processed the passages: while knowledge differences were unrelated to overall time viewing displays, relationships between allocation and knowledge emerged when the data were partitioned into phases (during and after first reading the text). More knowledgeable participants spent relatively more time fixating text than pictures during the first pass. After this pass, they spent more time viewing the relevant picture rather than re-reading, with some evidence that this strategy was associated with comprehension. The findings have implications for designing multimedia education materials and analyzing eye-tracking measures during multimedia learning. |
Camille Morvan; Laurence T. Maloney Human visual search does not maximize the post-saccadic probability of identifying targets Journal Article In: PLoS Computational Biology, vol. 8, no. 2, pp. e1002342, 2012. @article{Morvan2012, Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. |
Chie Nakamura; Manabu Arai; Reiko Mazuka Immediate use of prosody and context in predicting a syntactic structure Journal Article In: Cognition, vol. 125, no. 2, pp. 317–323, 2012. @article{Nakamura2012, Numerous studies have reported an effect of prosodic information on parsing but whether prosody can impact even the initial parsing decision is still not evident. In a visual world eye-tracking experiment, we investigated the influence of contrastive intonation and visual context on processing temporarily ambiguous relative clause sentences in Japanese. Our results showed that listeners used the prosodic cue to make a structural prediction before hearing disambiguating information. Importantly, the effect was limited to cases where the visual scene provided an appropriate context for the prosodic cue, thus eliminating the explanation that listeners have simply associated marked prosodic information with a less frequent structure. Furthermore, the influence of the prosodic information was also evident following disambiguating information, in a way that reflected the initial analysis. The current study demonstrates that prosody, when provided with an appropriate context, influences the initial syntactic analysis and also the subsequent cost at disambiguating information. The results also provide first evidence for pre-head structural prediction driven by prosodic and contextual information with a head-final construction. |
Shahin Nasr; Roger B. H. Tootell A cardinal orientation bias in scene-selective visual cortex Journal Article In: Journal of Neuroscience, vol. 32, no. 43, pp. 14921–14926, 2012. @article{Nasr2012, It has long been known that human vision is more sensitive to contours at cardinal (horizontal and vertical) orientations, compared with oblique orientations; this is the "oblique effect." However, the real-world relevance of the oblique effect is not well understood. Experiments here suggest that this effect is linked to scene perception, via a common bias in the image statistics of scenes. This statistical bias for cardinal orientations is found in many "carpentered environments" such as buildings and indoor scenes, and some natural scenes. In Experiment 1, we confirmed the presence of a perceptual oblique effect in a specific set of scene stimuli. Using those scenes, we found that a well known "scene-selective" visual cortical area (the parahippocampal place area; PPA) showed distinctively higher functional magnetic resonance imaging (fMRI) activity to cardinal versus oblique orientations. This fMRI-based oblique effect was not observed in other cortical areas (including scene-selective areas transverse occipital sulcus and retrosplenial cortex), although all three scene-selective areas showed the expected inversion effect to scenes. Experiments 2 and 3 tested for an analogous selectivity for cardinal orientations using computer-generated arrays of simple squares and line segments, respectively. The results confirmed the preference for cardinal orientations in PPA, thus demonstrating that the oblique effect can also be produced in PPA by simple geometrical images, with statistics similar to those in scenes. Thus, PPA shows distinctive fMRI selectivity for cardinal orientations across a broad range of stimuli, which may reflect a perceptual oblique effect. |
Amy Nau; Richard W. Hertle; Dongsheng Yang Effect of tongue stimulation on nystagmus eye movements in blind patients Journal Article In: Brain Structure and Function, vol. 217, no. 3, pp. 761–765, 2012. @article{Nau2012, We have observed dramatic effects of tactile tongue stimulation on nystagmus eye movements in patients with acquired blindness, and we report these results. Six adult subjects (3 subjects with light perception or worse vision and 3 normal subjects) were included in this study. Causes of blindness included traumatic explosion, anterior ischemic optic neuropathy, and central retinal artery occlusion. Duration of blindness was 15, 3 and 1.5 years, respectively. A video eye tracking system (EyeLink 1000) was used to record eye movements. The eye movement recording (EMR) was repeated four times in a span of 20 min. Two of the EMRs were performed without tongue stimulation and two with tongue stimulation in randomized order. A tongue stimulus was applied to the surface of the tongue using a Brainport device that produces an electrical tactile stimulus. The nystagmus waveform characteristics and frequency were analyzed. We found that all blind subjects showed continuous jerk nystagmus with slow and quick phases, mainly in the horizontal plane in their primary eye positions. The recorded nystagmus waveforms were jerk with linear velocity slow phases. When the tongue stimulus was applied, the frequency of nystagmus was significantly reduced by 47, 40, and 11%, and relative amplitude was reduced by 43, 45, and 6% for the three blind subjects, respectively. In conclusion, we think our results, showing that tongue stimulation influences nystagmus eye movements, support a link between non-visual sensory input and ocular motor activity. |
Jos J. Adam; Simona Buetti; Dirk Kerzel Coordinated flexibility: How initial gaze position modulates eye-hand coordination and reaching Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 4, pp. 891–901, 2012. @article{Adam2012, Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags between eye and hand onsets were small and initiation times were substantially correlated, suggesting simultaneous control and tight eye-hand coupling. With eyes and hand departing from different start locations (gaze aligned with the center of the range of possible target positions), time lags between eye and hand onsets were large and initiation times were largely uncorrelated, suggesting independent control and decoupling of eye and hand movements. Furthermore, initial gaze position strongly mediated manual reaching performance indexed by increments in movement time as a function of target distance. Experiment 2 confirmed the impact of target foveation in modulating the effect of target distance on movement time. Our findings reveal the operation of an overarching, flexible neural control system that tunes the operation and cooperation of saccadic and manual control systems depending on where the eyes look at target onset. |
Robert Adam; Paul M. Bays; Masud Husain Rapid decision-making under risk Journal Article In: Cognitive Neuroscience, vol. 3, no. 1, pp. 52–61, 2012. @article{Adam2012a, Impulsivity is often characterized by rapid decisions under risk, but most current tests of decision-making do not impose time pressures on participants' choices. Here we introduce a new Traffic Lights test which requires people to choose whether to programme a risky, early eye movement before a traffic light turns green (earning them high rewards or a penalty) or wait for the green light before responding to obtain a small reward instead. Young participants demonstrated bimodal responses: an early, high-risk and a later, low-risk set of choices. By contrast, elderly people invariably waited for the green light and showed little risk-taking. Performance could be modelled as a race between two rise-to-threshold decision processes, one triggered by the green light and the other initiated before it. The test provides a useful measure of rapid decision-making under risk, with the potential to reveal how this process alters with aging or in patient groups. |
Stephanie Ahken; Gilles Comeau; Sylvie Hébert; Ramesh Balasubramaniam Eye movement patterns during the processing of musical and linguistic syntactic incongruities. Journal Article In: Psychomusicology: Music, Mind, and Brain, vol. 22, no. 1, pp. 18–25, 2012. @article{Ahken2012, It has been suggested that music and language share syntax-supporting brain mechanisms. Consequently, violations of syntax in either domain may have similar effects. The present study examined the effects of syntactic incongruities on eye movements and reading time in both music and language domains. In the music notation condition, the syntactic incongruities violated the prevailing musical tonality (i.e., the last bar of the incongruent sequence was a nontonic chord or nontonic note in the given key). In the linguistic condition, syntactic incongruities violated the expected grammatical structure (i.e., sentences with anomalies carrying the progressive –ing affix or the past tense inflection). Eighteen pianists were asked to sight-read and play musical phrases (music condition) and read sentences aloud (linguistic condition). Syntactic incongruities in both domains were associated with an increase in the mean proportion and duration of fixations in the target region of interest, as well as longer reading duration. The results are consistent with the growing evidence of a shared network of neural structures for syntactic processing, while not ruling out the possibility of independent networks for each domain. |
Désirée S. Aichert; Nicola M. Wöstmann; Anna Costa; Christine Macare; Johanna R. Wenig; Hans-Jürgen Möller; Katya Rubia; Ulrich Ettinger Associations between trait impulsivity and prepotent response inhibition Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 34, no. 10, pp. 37–41, 2012. @article{Aichert2012, This study addresses the relationship between trait impulsivity and inhibitory control, two features known to be impaired in a number of psychiatric conditions. While impulsivity is often measured using psychometric self-report questionnaires, the inhibition of inappropriate, impulsive motor responses is typically measured using experimental laboratory tasks. It remains unclear, however, whether psychometrically assessed impulsivity and experimentally operationalized inhibitory performance are related to each other. Therefore, we investigated the relationship between these two traits in a large sample using correlative and latent variable analysis. A total of 504 healthy individuals completed the Barratt Impulsiveness Scale (BIS-11) and a battery of four prepotent response inhibition paradigms: the antisaccade, Stroop, stop-signal, and go/no-go tasks. We found significant associations of BIS impulsivity with commission errors on the go/no-go task and directional errors on the antisaccade task, over and above effects of age, gender, and intelligence. Latent variable analysis (a) supported the idea that all four inhibitory measures load on the same underlying construct termed “prepotent response inhibition” and (b) revealed that 12% of variance of the prepotent response inhibition construct could be explained by BIS impulsivity. Overall, the magnitude of associations observed was small, indicating that while a portion of variance in prepotent response inhibition can be explained by psychometric trait impulsivity, the majority of variance remains unexplained. Thus, these findings suggest that prepotent response inhibition paradigms can account for psychometric trait impulsivity only to a limited extent. Implications for studies of patient populations with symptoms of impulsivity are discussed. |
Natalie Mestry; Tamaryn Menneer; Michael J. Wenger; Nick Donnelly Identifying sources of configurality in three face processing tasks Journal Article In: Frontiers in Psychology, vol. 3, pp. 456, 2012. @article{Mestry2012, Participants performed three feature-complete face processing tasks involving detection of changes in: (1) feature size and (2) feature identity in successive matching tasks, and (3) feature orientation. In each experiment, information in the top (eyes) and bottom (mouths) parts of faces was manipulated. All tasks were performed with upright and inverted faces. Data were analyzed first using group-based analysis of signal detection measures (sensitivity and bias), and second using analysis of multidimensional measures of sensitivity and bias along with probit regression models in order to draw inferences about independence and separability as defined within general recognition theory (Ashby and Townsend, 1986). The results highlighted different patterns of perceptual and decisional influences across tasks and orientations. There was evidence of orientation specific configural effects (violations of perceptual independence, perceptual separability and decisional separability) in the Feature Orientation Task. For the Feature Identity Task there were orientation specific performance effects and there was evidence of configural effects (violations of decisional separability) in both orientations. Decisional effects are consistent with previous research (Wenger and Ingvalson, 2002, 2003; Richler et al., 2008; Cornes et al., 2011). Crucially, the probit analysis revealed violations of perceptual independence that remain undetected by marginal analysis. |
Antje S. Meyer; Linda Wheeldon; Femke Meulen; Agnieszka E. Konopka Effects of speech rate and practice on the allocation of visual attention in multiple object naming Journal Article In: Frontiers in Psychology, vol. 3, pp. 39, 2012. @article{Meyer2012, Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye-speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast. |
Hauke S. Meyerhoff; Korbinian Moeller; Kolja Debus; Hans-Christoph Nuerk Multi-digit number processing beyond the two-digit number range: A combination of sequential and parallel processes Journal Article In: Acta Psychologica, vol. 140, no. 1, pp. 81–90, 2012. @article{Meyerhoff2012, Investigations of multi-digit number processing typically focus on two-digit numbers. Here, we aim to investigate the generality of results from two-digit numbers for four- and six-digit numbers. Previous studies on two-digit numbers mostly suggested a parallel processing of tens and units. In contrast, the few studies examining the processing of larger numbers suggest sequential processing of the individual constituting digits. In this study, we combined the methodological approaches of studies implying either parallel or sequential processing. Participants completed a number magnitude comparison task on two-, four-, and six-digit numbers including unit-decade compatible and incompatible differing digit pairs (e.g., 32_47, 3 < 4 and 2 < 7, vs. 37_52, 3 < 5 but 7 > 2, respectively) at all possible digit positions. Response latencies and fixation behavior indicated that sequential and parallel decomposition is not exclusive in multi-digit number processing. Instead, our results clearly suggested that sequential and parallel processing strategies seem to be combined when processing multi-digit numbers beyond the two-digit number range. To account for the results, we propose a chunking hypothesis claiming that multi-digit numbers are separated into chunks of shorter digit strings. While the different chunks are processed sequentially, digits within these chunks are processed in parallel. |
Sébastien Miellet; Liingang He; Xin Zhou; Ju Lao; Roberto Caldara When East meets West: Gaze-contingent Blindspots abolish cultural diversity in eye movements for faces Journal Article In: Journal of Eye Movement Research, vol. 5, no. 2, pp. 1–12, 2012. @article{Miellet2012, Culture impacts on how people sample visual information for face processing. Westerners deploy fixations towards the eyes and the mouth to achieve face recognition. In contrast, Easterners reach equal performance by deploying more central fixations, suggesting an effective extrafoveal information use. However, this hypothesis has not yet been directly investigated, i.e. by providing only extrafoveal information to both groups of observers. We used a parametric gaze-contingent technique dynamically masking central vision - the Blindspot - with Western and Eastern observers during face recognition. Westerners shifted progressively towards the typical Eastern central fixation pattern with larger Blindspots, whereas Easterners were insensitive to the Blindspots. These observations clearly show that Easterners preferentially sample information extrafoveally for faces. Conversely, the Western data also show that culturally-dependent visuo-motor strategies can flexibly adjust to constrained visual situations. |
Lisa M. Soederberg Miller; Diana L. Cassady Making healthy food choices using nutrition facts panels. The roles of knowledge, motivation, dietary modifications goals, and age Journal Article In: Appetite, vol. 59, no. 1, pp. 129–139, 2012. @article{Miller2012, Nutrition facts panels (NFPs) contain a rich assortment of nutrition information and are available on most food packages. The importance of this information is potentially even greater among older adults due to their increased risk for diet-related diseases, as well as those with goals for dietary modifications that may impact food choice. Despite past work suggesting that knowledge and motivation impact attitudes surrounding and self-reported use of NFPs, we know little about how (i.e., strategies used) and how well (i.e., level of accuracy) younger and older individuals process NFP information when evaluating healthful qualities of foods. We manipulated the content of NFPs and, using eye tracking methodology, examined strategies associated with deciding which of two NFPs, presented side-by-side, was healthier. We examined associations among strategy use and accuracy as well as age, dietary modification status, knowledge, and motivation. Results showed that, across age groups, those with dietary modification goals made relatively more comparisons between NFPs with increasing knowledge and motivation; but that strategy effectiveness (relationship to accuracy) depended on age and motivation. Results also showed that knowledge and motivation may protect against declines in accuracy in later life and that, across age and dietary modification status, knowledge mediates the relationship between motivation and decision accuracy. |
Milica Milosavljevic; Vidhya Navalpakkam; Christof Koch; Antonio Rangel Relative visual saliency differences induce sizable bias in consumer choice Journal Article In: Journal of Consumer Psychology, vol. 22, no. 1, pp. 67–74, 2012. @article{Milosavljevic2012, Consumers often need to make very rapid choices among multiple brands (e.g., at a supermarket shelf) that differ both in their reward value (e.g., taste) and in their visual properties (e.g., color and brightness of the packaging). Since the visual properties of stimuli are known to influence visual attention, and attention is known to influence choices, this gives rise to a potential visual saliency bias in choices. We utilize experimental design from visual neuroscience in three real food choice experiments to measure the size of the visual saliency bias and how it changes with decision speed and cognitive load. Our results show that at rapid decision speeds visual saliency influences choices more than preferences do, that the bias increases with cognitive load, and that it is particularly strong when individuals do not have strong preferences among the options. |
Patrick J. Mineault; Farhan A. Khawaja; Daniel A. Butts; Christopher C. Pack Hierarchical processing of complex motion along the primate dorsal visual pathway Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 16, pp. E972–E980, 2012. @article{Mineault2012, Neurons in the medial superior temporal (MST) area of the primate visual cortex respond selectively to complex motion patterns defined by expansion, rotation, and deformation. Consequently they are often hypothesized to be involved in important behavioral functions, such as encoding the velocities of moving objects and surfaces relative to the observer. However, the computations underlying such selectivity are unknown. In this work we have developed a unique, naturalistic motion stimulus and used it to probe the complex selectivity of MST neurons. The resulting data were then used to estimate the properties of the feed-forward inputs to each neuron. This analysis yielded models that successfully accounted for much of the observed stimulus selectivity, provided that the inputs were combined via a nonlinear integration mechanism that approximates a multiplicative interaction among MST inputs. In simulations we found that this type of integration has the functional role of improving estimates of the 3D velocity of moving objects. As this computation is of general utility for detecting complex stimulus features, we suggest that it may represent a fundamental aspect of hierarchical sensory processing. |
Daniel Mirman; Kristen M. Graziano Damage to temporo-parietal cortex decreases incidental activation of thematic relations during spoken word comprehension Journal Article In: Neuropsychologia, vol. 50, no. 8, pp. 1990–1997, 2012. @article{Mirman2012, Both taxonomic and thematic semantic relations have been studied extensively in behavioral studies and there is an emerging consensus that the anterior temporal lobe plays a particularly important role in the representation and processing of taxonomic relations, but the neural basis of thematic semantics is less clear. We used eye tracking to examine incidental activation of taxonomic and thematic relations during spoken word comprehension in participants with aphasia. Three groups of participants were tested: neurologically intact control participants (N=14), individuals with aphasia resulting from lesions in left hemisphere BA 39 and surrounding temporo-parietal cortex regions (N=7), and individuals with the same degree of aphasia severity and semantic impairment and anterior left hemisphere lesions (primarily inferior frontal gyrus and anterior temporal lobe) that spared BA 39 (N=6). The posterior lesion group showed reduced and delayed activation of thematic relations, but not taxonomic relations. In contrast, the anterior lesion group exhibited longer-lasting activation of taxonomic relations and did not differ from control participants in terms of activation of thematic relations. These results suggest that taxonomic and thematic semantic knowledge are functionally and neuroanatomically distinct, with the temporo-parietal cortex playing a particularly important role in thematic semantics. |
Daniel Mirman; Kristen M. Graziano Individual differences in the strength of taxonomic versus thematic relations Journal Article In: Journal of Experimental Psychology: General, vol. 141, no. 4, pp. 601–609, 2012. @article{Mirman2012a, Knowledge about word and object meanings can be organized taxonomically (fruits, mammals, etc.) on the basis of shared features or thematically (eating breakfast, taking a dog for a walk, etc.) on the basis of participation in events or scenarios. An eye-tracking study showed that both kinds of knowledge are activated during comprehension of a single spoken word, even when the listener is not required to perform any active task. The results further revealed that an individual's relative activation of taxonomic relations compared to thematic relations predicts that individual's tendency to favor taxonomic over thematic relations when asked to choose between them in a similarity judgment task. These results indicate that individuals differ in the relative strengths of their taxonomic and thematic semantic knowledge and suggest that meaning information is organized in 2 parallel, complementary semantic systems. |
Kristien Ooms; Gennady Andrienko; Natalia Andrienko; Philippe De Maeyer; Veerle Fack Analysing the spatial dimension of eye movement data using a visual analytic approach Journal Article In: Expert Systems with Applications, vol. 39, no. 1, pp. 1324–1332, 2012. @article{Ooms2012, Conventional analyses of eye movement data take into account only eye movement metrics, such as the number or the duration of fixations and the length of the scanpaths, on which statistical analysis is performed to detect significant differences. However, the spatial dimension of the eye movements is neglected, even though it is an essential element when investigating the design of maps. The study described in this paper uses a visual analytics software package, the Visual Analytics Toolkit, to analyse the eye movement data. Selection, simplification and aggregation functions are applied to filter out meaningful subsets of the data in order to recognise structures in the movement data. Visualising and analysing these patterns provides essential insights into the user's search strategies while working on an (interactive) map. |
Kristien Ooms; Philippe De Maeyer; Veerle Fack; Eva Van Assche; Frank Witlox Investigating the effectiveness of an efficient label placement method using eye movement data Journal Article In: The Cartographic Journal, vol. 49, no. 3, pp. 234–246, 2012. @article{Ooms2012a, This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of the name labels on the map, we tested whether the more efficient algorithm also creates more effective maps: how well is the information processed by the user? We tested 30 participants while they were working on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference between the two map designs in the users' response times, nor in the number and duration of fixations. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness for the user. |
Kristien Ooms; Philippe De Maeyer; Veerle Fack; Eva Van Assche; Frank Witlox Interpreting maps through the eyes of expert and novice users Journal Article In: International Journal of Geographical Information Science, vol. 26, no. 10, pp. 1773–1788, 2012. @article{Ooms2012b, The experiments described in this article combine response time measurements and eye movement data to gain insight into the users' cognitive processes while working with dynamic and interactive maps. Experts and novices participated in a user study with a 'between-user' design. Twenty screen maps were presented in a random order to each participant, on which they had to execute a visual search. The combined information from the button actions and the eye tracker reveals that both user groups showed a similar pattern in the time intervals needed to locate the subsequent names. From this pattern, information about the users' cognitive load could be derived: use of working memory, learning effect and so on. Moreover, the response times also showed that experts were significantly faster in finding the names in the map image. This is further explained by the eye movement metrics: experts had significantly shorter fixations and more fixations per second, meaning that they could interpret a larger part of the map in the same amount of time. As a consequence, they could locate objects in the map image more efficiently and thus faster. |
Ioan Opris; Robert E. Hampson; Greg A. Gerhardt; Theodore W. Berger; Sam A. Deadwyler Columnar processing in primate pFC: Evidence for executive control microcircuits Journal Article In: Journal of Cognitive Neuroscience, vol. 24, no. 12, pp. 2334–2347, 2012. @article{Opris2012, A common denominator for many cognitive disorders of human brain is the disruption of neural activity within pFC, whose structural basis is primarily interlaminar (columnar) microcircuits or "minicolumns." The importance of this brain region for executive decision-making has been well documented; however, because of technological constraints, the minicolumnar basis is not well understood. Here, via implementation of a unique conformal multielectrode recording array, the role of interlaminar pFC minicolumns in the executive control of task-related target selection is demonstrated in nonhuman primates performing a visuomotor DMS task. The results reveal target-specific, interlaminar correlated firing during the decision phase of the trial between multielectrode recording array-isolated minicolumnar pairs of neurons located in parallel in layers 2/3 and layer 5 of pFC. The functional significance of individual pFC minicolumns (separated by 40 μm) was shown by reduced correlated firing between cell pairs within single minicolumns on error trials with inappropriate target selection. To further demonstrate dependence on performance, a task-disrupting drug (cocaine) was administered in the middle of the session, which also reduced interlaminar firing in minicolumns that fired appropriately in the early (nondrug) portion of the session. The results provide a direct demonstration of task-specific, real-time columnar processing in pFC indicating the role of this type of microcircuit in executive control of decision-making in primate brain. |
José P. Ossandón; Selim Onat; Dario Cazzoli; Thomas Nyffeler; René M. Müri; Peter König Unmasking the contribution of low-level features to the guidance of attention Journal Article In: Neuropsychologia, vol. 50, no. 14, pp. 3478–3487, 2012. @article{Ossandon2012, The role of low-level stimulus-driven control in the guidance of overt visual attention has been difficult to establish because low- and high-level visual content are spatially correlated within natural visual stimuli. Here we show that impairment of parietal cortical areas, either permanently by a lesion or reversibly by repetitive transcranial magnetic stimulation (rTMS), leads to fixation of locations with higher values of low-level features as compared to control subjects or in a no-rTMS condition. Moreover, this unmasking of stimulus-driven control crucially depends on the intrahemispheric balance between top-down and bottom-up cortical areas. This result suggests that although in normal behavior high-level features might exert a strong influence, low-level features do contribute to guide visual selection during the exploration of complex natural stimuli. |
Jorge Otero-Millan; Stephen L. Macknik; Susana Martinez-Conde Microsaccades and blinks trigger illusory rotation in the "rotating snakes" illusion Journal Article In: Journal of Neuroscience, vol. 32, no. 17, pp. 6043–6051, 2012. @article{OteroMillan2012, Certain repetitive arrangements of luminance gradients elicit the perception of strong illusory motion. Among them, the "Rotating Snakes Illusion" has generated a large amount of interest in the visual neurosciences, as well as in the public. Prior evidence indicates that the Rotating Snakes illusion depends critically on eye movements, yet the specific eye movement types involved and their associated neural mechanisms remain controversial. According to recent reports, slow ocular drift–a nonsaccadic type of fixational eye movement–drives the illusion, whereas microsaccades produced during attempted fixation fail to do so. Here, we asked human subjects to indicate the presence or absence of rotation during the observation of the illusion while we simultaneously recorded their eye movements with high precision. We found a strong quantitative link between microsaccade and blink production and illusory rotation. These results suggest that transient oculomotor events such as microsaccades, saccades, and blinks, rather than continuous drift, act to trigger the illusory motion in the Rotating Snakes illusion. |
Mathias Abegg; Nishant Sharma; Jason J. S. Barton Antisaccades generate two types of saccadic inhibition Journal Article In: Biological Psychology, vol. 89, no. 1, pp. 191–194, 2012. @article{Abegg2012, To make an antisaccade away from a stimulus, one must also suppress the more reflexive prosaccade to the stimulus. Whether this inhibition is diffuse or specific for saccade direction is not known. We used a paradigm examining inter-trial carry-over effects. Twelve subjects performed sequences of four identical antisaccades followed by sequences of four prosaccades randomly directed at the location of the antisaccade stimulus, the location of the antisaccade goal, or neutral locations. We found two types of persistent antisaccade-related inhibition. First, prosaccades in any direction were delayed only in the first trial after the antisaccades. Second, prosaccades to the location of the antisaccade stimulus were delayed more than all other prosaccades, and this persisted from the first to the fourth subsequent trial. These findings are consistent with both a transient global inhibition and a more sustained focal inhibition specific for the location of the antisaccade stimulus. |
Bryan J. Hansen; Mircea I. Chelaru; Valentin Dragoi Correlated variability in laminar cortical circuits Journal Article In: Neuron, vol. 76, no. 3, pp. 590–602, 2012. @article{Hansen2012, Despite the fact that strong trial-to-trial correlated variability in responses has been reported in many cortical areas, recent evidence suggests that neuronal correlations are much lower than previously thought. Here, we used multicontact laminar probes to revisit the issue of correlated variability in primary visual (V1) cortical circuits. We found that correlations between neurons depend strongly on local network context: whereas neurons in the input (granular) layers showed virtually no correlated variability, neurons in the output layers (supragranular and infragranular) exhibited strong correlations. The laminar dependence of noise correlations is consistent with recurrent models in which neurons in the granular layer receive intracortical inputs from nearby cells, whereas supragranular and infragranular layer neurons receive inputs over larger distances. Contrary to the expectation that the output cortical layers encode stimulus information most accurately, we found that the input network offers superior discrimination performance compared to the output networks. |
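For readers less familiar with the noise-correlation measure discussed in the Hansen et al. abstract above, correlated variability between a pair of simultaneously recorded neurons is commonly quantified as the Pearson correlation of trial-to-trial spike counts after the stimulus-driven component has been removed. The sketch below is a generic illustration of that convention, not code from the cited study; z-scoring counts within each stimulus condition is one standard (assumed) way of removing the signal component.

```python
import numpy as np

def noise_correlation(counts_a, counts_b, condition_ids):
    """Pearson correlation of trial-to-trial spike counts for two neurons,
    after z-scoring counts within each stimulus condition to remove the
    stimulus-driven (signal) component. counts_* are 1-D arrays of spike
    counts per trial; condition_ids labels the stimulus shown on each trial."""
    counts_a = np.asarray(counts_a, dtype=float)
    counts_b = np.asarray(counts_b, dtype=float)
    condition_ids = np.asarray(condition_ids)
    za, zb = np.empty_like(counts_a), np.empty_like(counts_b)
    for c in np.unique(condition_ids):
        idx = condition_ids == c
        za[idx] = (counts_a[idx] - counts_a[idx].mean()) / counts_a[idx].std(ddof=1)
        zb[idx] = (counts_b[idx] - counts_b[idx].mean()) / counts_b[idx].std(ddof=1)
    return np.corrcoef(za, zb)[0, 1]
```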
Adriana Hanulíková; Andrea Weber Sink positive: Linguistic experience with th substitutions influences nonnative word recognition Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 613–629, 2012. @article{Hanulikova2012, We used eyetracking, perceptual discrimination, and production tasks to examine the influences of perceptual similarity and linguistic experience on word recognition in nonnative (L2) speech. Eye movements to printed words were tracked while German and Dutch learners of English heard words containing one of three pronunciation variants (/t/, /s/, or /f/) of the interdental fricative /θ/. Irrespective of whether the speaker was Dutch or German, looking preferences for target words with /θ/ matched the preferences for producing /s/ variants in German speakers and /t/ variants in Dutch speakers (as determined via the production task), while a control group of English participants showed no such preferences. The perceptually most similar and most confusable /f/ variant (as determined via the discrimination task) was never preferred as a match for /θ/. These results suggest that linguistic experience with L2 pronunciations facilitates recognition of variants in an L2, with effects of frequency outweighing effects of perceptual similarity. |
Ben Harkin; Sébastien Miellet; Klaus Kessler What checkers actually check: An eye tracking study of inhibitory control and working memory Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e44689, 2012. @article{Harkin2012, Background: Not only is compulsive checking the most common symptom in Obsessive Compulsive Disorder (OCD) with an estimated prevalence of 50–80% in patients, but approximately 15% of the general population reveal subclinical checking tendencies that impact negatively on their performance in daily activities. Therefore, it is critical to understand how checking affects attention and memory in clinical as well as subclinical checkers. Eye fixations are commonly used as indicators for the distribution of attention but research in OCD has revealed mixed results at best. Methodology/Principal Findings: Here we report atypical eye movement patterns in subclinical checkers during an ecologically valid working memory (WM) manipulation. Our key manipulation was to present an intermediate probe during the delay period of the memory task, explicitly asking for the location of a letter, which, however, had not been part of the encoding set (i.e., misleading participants). Using eye movement measures, we now provide evidence that high checkers' inhibitory impairments for misleading information result in them checking the contents of WM in an atypical manner. Checkers fixate more often and for longer when misleading information is presented than non-checkers. Specifically, checkers spend more time checking stimulus locations as well as locations that had actually been empty during encoding. Conclusions/Significance: We conclude that these atypical eye movement patterns directly reflect internal checking of memory contents and we discuss the implications of our findings for the interpretation of behavioural and neuropsychological data. In addition, our results highlight the importance of ecologically valid methodology for revealing the impact of detrimental attention and memory checking on eye movement patterns. |
William J. Harrison; Jason B. Mattingley; Roger W. Remington Pre-saccadic shifts of visual attention Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e45670, 2012. @article{Harrison2012, The locations of visual objects to which we attend are initially mapped in a retinotopic frame of reference. Because each saccade results in a shift of images on the retina, however, the retinotopic mapping of spatial attention must be updated around the time of each eye movement. Mathôt and Theeuwes [1] recently demonstrated that a visual cue draws attention not only to the cue's current retinotopic location, but also to a location shifted in the direction of the saccade, the "future-field". Here we asked whether retinotopic and future-field locations have special status, or whether cue-related attention benefits exist between these locations. We measured responses to targets that appeared either at the retinotopic or future-field location of a brief, non-predictive visual cue, or at various intermediate locations between them. Attentional cues facilitated performance at both the retinotopic and future-field locations for cued relative to uncued targets, as expected. Critically, this cueing effect also occurred at intermediate locations. Our results, and those reported previously [1], imply a systematic bias of attention in the direction of the saccade, independent of any predictive remapping of attention that compensates for retinal displacements of objects across saccades [2]. |
Bronson Harry; Chris Davis; Jeesun Kim Exposure in central vision facilitates view-invariant face recognition in the periphery Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–9, 2012. @article{Harry2012, The present study investigated the extent to which a face presented in the visual periphery is processed and whether such processing can be influenced by a recent encounter in central vision. To probe face processing, a series of studies was conducted in which participants classified the sex and identity of faces presented in central and peripheral vision. The results showed that when target faces had not been previously viewed in central vision, recognition in peripheral vision was limited whereas sex categorization was not. When faces were previously viewed in central vision, recognition in peripheral vision improved even when the pose, hairstyle, and lighting conditions of these faces changed. These results are discussed with regard to possible mechanisms underpinning this exposure effect. |
Katharina Havermann; Robert Volcic; Markus Lappe Saccadic adaptation to moving targets Journal Article In: PLoS ONE, vol. 7, no. 6, pp. e39708, 2012. @article{Havermann2012, Saccades are so-called ballistic movements, which are executed without online visual feedback. After each saccade, the saccadic motor plan is modified in response to post-saccadic feedback by the mechanism of saccadic adaptation. The post-saccadic feedback is provided by the retinal position of the target after the saccade. If the target moves after the saccade, gaze may follow the moving target. In that case, the eyes are controlled by the pursuit system, a system that controls smooth eye movements. Although these two systems have in the past been considered as mostly independent, recent lines of research point towards many interactions between them. We were interested in the question of whether saccade amplitude adaptation is induced when the target moves smoothly after the saccade. Prior studies of saccadic adaptation have considered intra-saccadic target steps as learning signals. In the present study, the intra-saccadic target step of the McLaughlin paradigm of saccadic adaptation was replaced by target movement, and a post-saccadic pursuit of the target. We found that saccadic adaptation occurred in this situation, a further indication of an interaction of the saccadic system and the pursuit system with the aim of optimized eye movements. |
Ryusuke Hayashi; Manabu Tanifuji Which image is in awareness during binocular rivalry? Reading perceptual status from eye movements Journal Article In: Journal of Vision, vol. 12, no. 3, pp. 1–11, 2012. @article{Hayashi2012, Binocular rivalry is a useful psychophysical tool to investigate neural correlates of visual consciousness because the alternation between awareness of the left and right eye images occurs without any accompanying change in visual input. The conventional experiments on binocular rivalry require participants to voluntarily report their perceptual state. Obtaining reliable reports from non-human primates about their subjective visual experience, however, requires long-term training, which has made electrophysiological experiments on binocular rivalry quite difficult. Here, we developed a new binocular rivalry stimulus that consists of two different object images that are phase-shifted to move in opposite directions from each other: One eye receives leftward motion while the other eye receives rightward motion, although both eyes' images are perceived to remain at the same position. Experiments on adult human participants showed that eye movements (optokinetic nystagmus, OKN) are involuntarily evoked during the observation of our stimulus. We also found that the evoked OKN can serve as a cue for accurate estimation about which object image was dominant during rivalry, since OKN follows the motion associated with the image in awareness at a given time. This novel visual presentation technique enables us to effectively explore the neural correlates of visual awareness using animal models. |
Annabelle Goujon; James R. Brockmole; Krista A. Ehinger How visual and semantic information influence learning in familiar contexts Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 5, pp. 1315–1327, 2012. @article{Goujon2012, Previous research using the contextual cuing paradigm has revealed both quantitative and qualitative differences in learning depending on whether repeated contexts are defined by letter arrays or real-world scenes. To clarify the relative contributions of visual features and semantic information likely to account for such differences, the typical contextual cuing procedure was adapted to use meaningless but nevertheless visually complex images. The data in reaction time and in eye movements show that, like scenes, such repeated contexts can trigger large, stable, and explicit cuing effects, and that those effects result from facilitated attentional guidance. Like simpler stimulus arrays, however, those effects were impaired by a sudden change of a repeating image's color scheme at the end of the learning phase (Experiment 1), or when the repeated images were presented in a different and unique color scheme across each presentation (Experiment 2). In both cases, search was driven by explicit memory. Collectively, these results suggest that semantic information is not required for conscious awareness of context-target covariation, but it plays a primary role in overcoming variability in specific features within familiar displays. |
Dan J. Graham; Robert W. Jeffery Predictors of nutrition label viewing during food purchase decision making: An eye tracking investigation Journal Article In: Public Health Nutrition, vol. 15, no. 2, pp. 189–197, 2012. @article{Graham2012, OBJECTIVE: Nutrition label use could help consumers eat healthfully. Despite consumers reporting label use, diets are not very healthful and obesity rates continue to rise. The present study investigated whether self-reported label use matches objectively measured label viewing by monitoring the gaze of individuals viewing labels. DESIGN: The present study monitored adults viewing sixty-four food items on a computer equipped with an eye-tracking camera as they made simulated food purchasing decisions. ANOVA and t tests were used to compare label viewing across various subgroups (e.g. normal weight v. overweight v. obese; married v. unmarried) and also across various types of foods (e.g. snacks v. fruits and vegetables). SETTING: Participants came to the University of Minnesota's Epidemiology Clinical Research Center in spring 2010. SUBJECTS: The 203 participants were ≥18 years old and capable of reading English words on a computer 76 cm (30 in) away. RESULTS: Participants looked longer at labels for 'meal' items like pizza, soup and yoghurt compared with fruits and vegetables, snack items like crackers and nuts, and dessert items like ice cream and cookies. Participants spent longer looking at labels for foods they decided to purchase compared with foods they decided not to purchase. There were few between-group differences in nutrition label viewing across sex, race, age, BMI, marital status, income or educational attainment. CONCLUSIONS: Nutrition label viewing is related to food purchasing, and labels are viewed more when a food's healthfulness is ambiguous. Objectively measuring nutrition label viewing provides new insight into label use by various sociodemographic groups. |
Joshua A. Granek; Laure Pisella; Annabelle Blangero; Yves Rossetti; Lauren E. Sergio The role of the caudal superior parietal lobule in updating hand location in peripheral vision: Further evidence from optic ataxia Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e46619, 2012. @article{Granek2012, Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we test two unilateral OA patients in conditions of 1) a change in the physical location of the visual stimulus relative to the plane of the limb movement, 2) a cue that signals a required limb movement 180° opposite to the cued visual target location, or 3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit is not observed as the well-documented field-dependent misreach. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled when having to guide centrifugal movements in peripheral vision, even when they were instructed from visual stimuli that could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e. from peripheral vision, proprioceptive or predictive information. |
Margaret Grant; Charles Clifton; Lyn Frazier The role of Non-Actuality Implicatures in processing elided constituents Journal Article In: Journal of Memory and Language, vol. 66, no. 1, pp. 326–343, 2012. @article{Grant2012, When an elided constituent and its antecedent do not match syntactically, the presence of a word implying the non-actuality of the state of affairs described in the antecedent seems to improve the example. (This information should be released but Gorbachev didn't. vs. This information was released but Gorbachev didn't.) We model this effect in terms of Non-Actuality Implicatures (NAIs) conveyed by non-epistemic modals like should and other words such as want to and be eager to that imply non-actuality. We report three studies. A rating and interpretation study showed that such implicatures are drawn and that they improve the acceptability of mismatch ellipsis examples. An interpretation study showed that adding a NAI trigger to ambiguous examples increases the likelihood of choosing an antecedent from the NAI clause. An eye movement study shows that a NAI trigger also speeds on-line reading of the ellipsis clause. By introducing alternatives (the desired state of affairs vs. the actual state of affairs), the NAI trigger introduces a potential Question Under Discussion (QUD). Processing an ellipsis clause is easier, the processor is more confident of its analysis, when the ellipsis clause comments on the QUD. |
Jeroen J. M. Granzier; Matteo Toscani; Karl R. Gegenfurtner Role of eye movements in chromatic induction Journal Article In: Journal of the Optical Society of America A, vol. 29, no. 2, pp. A353–A365, 2012. @article{Granzier2012, There exist large interindividual differences in the amount of chromatic induction [Vis. Res. 49, 2261 (2009)]. One possible reason for these differences between subjects could be differences in subjects' eye movements. In experiment 1, subjects either had to look exclusively at the background or at the adjustable disk while they set the disk to a neutral gray as their eye position was being recorded. We found a significant difference in the amount of induction between the two viewing conditions. In a second experiment, subjects were freely looking at the display. We found no correlation between subjects' eye movements and the amount of induction. We conclude that eye movements only play a role under artificial (forced looking) viewing conditions and that eye movements do not seem to play a large role for chromatic induction under natural viewing conditions. |
Harold H. Greene; Deborah Simpson; Jennifer Bennion The perceptual span during foveally-demanding visual target localization Journal Article In: Acta Psychologica, vol. 139, no. 3, pp. 434–439, 2012. @article{Greene2012, Foveally-induced processing load deteriorates target localization performance in vision-guided tasks. Here, participants searched for a target embedded among coded distractors. High processing load was effected by instructing some participants to use the coded distractors to guide their search for the target. Other participants (in the low processing load condition) were not apprised of the code. The experiment examined whether increased processing load alters the span of effective processing (i.e. perceptual span) by (a) reducing its size, (b) altering its shape, or (c) reducing its size and altering its shape. The results demonstrated a reduction in the size of the perceptual span, with no significant change to its shape. It is argued that when distractors are processed beyond simply rejecting them as non targets, the perceptual span shrinks with increasing processing load. The findings are discussed in contrast to a general interference theory that predicts a change in vision-guided performance without a shrinking of the perceptual span. |
Michelle R. Greene; Tommy Liu; Jeremy M. Wolfe Reconsidering Yarbus: A failure to predict observers' task from eye movement patterns. Journal Article In: Vision Research, vol. 62, pp. 1–8, 2012. @article{Greene2012a, In 1967, Yarbus presented qualitative data from one observer showing that the patterns of eye movements were dramatically affected by an observer's task, suggesting that complex mental states could be inferred from scan paths. The strong claim of this very influential finding has never been rigorously tested. Our observers viewed photographs for 10s each. They performed one of four image-based tasks while eye movements were recorded. A pattern classifier, given features from the static scan paths, could identify the image and the observer at above-chance levels. However, it could not predict a viewer's task. Shorter and longer (60s) viewing epochs produced similar results. Critically, human judges also failed to identify the tasks performed by the observers based on the static scan paths. The Yarbus finding is evocative, and while it is possible an observer's mental state might be decoded from some aspect of eye movements, static scan paths alone do not appear to be adequate to infer complex mental states of an observer. |
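As a rough illustration of the analysis described in the Greene, Liu, and Wolfe abstract above (training a pattern classifier on features of static scan paths and asking whether it can predict the viewing task), a minimal sketch is given below. It is not the authors' code: the particular feature set, the linear support-vector classifier, and the 5-fold cross-validation are assumptions chosen for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def scanpath_features(fix_x, fix_y, fix_dur):
    """Summarise one trial's static scan path as a small feature vector:
    fixation count, mean and SD of fixation duration, mean saccade amplitude,
    and the horizontal/vertical spread of fixation positions."""
    fix_x, fix_y, fix_dur = map(np.asarray, (fix_x, fix_y, fix_dur))
    amps = np.hypot(np.diff(fix_x), np.diff(fix_y))
    return np.array([
        len(fix_dur),
        fix_dur.mean(), fix_dur.std(),
        amps.mean() if amps.size else 0.0,
        fix_x.std(), fix_y.std(),
    ])

def decoding_accuracy(trials, labels):
    """Cross-validated accuracy of a linear classifier predicting a label
    (e.g., viewing task) from scan-path features; chance = 1 / n_classes.
    `trials` is a list of (fix_x, fix_y, fix_dur) tuples, one per trial."""
    X = np.vstack([scanpath_features(*t) for t in trials])
    y = np.asarray(labels)
    return cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean()
```

Comparing the cross-validated accuracy against the chance level (one over the number of label categories) is the logic behind the "above-chance" and "at-chance" statements in the abstract.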
Nicola J. Gregory; Timothy L. Hodgson Giving subjects the eye and showing them the finger: Socio-biological cues and saccade generation in the anti-saccade task Journal Article In: Perception, vol. 41, no. 2, pp. 131–147, 2012. @article{Gregory2012, Pointing with the eyes or the finger occurs frequently in social interaction to indicate direction of attention and one's intentions. Research with a voluntary saccade task (where saccade direction is instructed by the colour of a fixation point) suggested that gaze cues automatically activate the oculomotor system, but non-biological cues, like arrows, do not. However, other work has failed to support the claim that gaze cues are special. In the current research we introduced biological and non-biological cues into the anti-saccade task, using a range of stimulus onset asynchronies (SOAs). The anti-saccade task recruits both top-down and bottom-up attentional mechanisms, as occurs in naturalistic saccadic behaviour. In experiment 1 gaze, but not arrows, facilitated saccadic reaction times (SRTs) in the opposite direction to the cues over all SOAs, whereas in experiment 2 directional word cues had no effect on saccades. In experiment 3 finger pointing cues caused reduced SRTs in the opposite direction to the cues at short SOAs. These findings suggest that biological cues automatically recruit the oculomotor system whereas non-biological cues do not. Furthermore, the anti-saccade task set appears to facilitate saccadic responses in the opposite direction to the cues. |
Parampal Grewal; Jayalakshmi Viswanathan; Jason J. S. Barton; Linda J. Lanyon Line bisection under an attentional gradient induced by simulated neglect in healthy subjects Journal Article In: Neuropsychologia, vol. 50, no. 6, pp. 1190–1201, 2012. @article{Grewal2012, Whether an attentional gradient favouring the ipsilesional side is responsible for the line bisection errors in visual neglect is uncertain. We explored this by using a conjunction-search task on the right side of a computer screen to bias attention while healthy subjects performed line bisection. The first experiment used a probe detection task to confirm that the conjunction-search task created a rightward attentional gradient, as manifest in response times, detection rates, and fixation patterns. In the second experiment subjects performed line bisection with or without a simultaneous conjunction-search task. Fixation patterns in the latter condition were biased rightwards as in visual neglect, and bisection also showed a rightward bias, though modest. A third experiment using the probe detection task again showed that the attentional gradient induced by the conjunction-search task was reduced when subjects also performed line bisection, perhaps explaining the modest effects on bisection bias. Finally, an experiment with briefly viewed pre-bisected lines produced similar results, showing that the small size of the bisection bias was not due to an unlimited view allowing deployment of attentional resources to counteract the conjunction-search task's attentional gradient. These results show that an attentional gradient induced in healthy subjects can produce visual neglect-like visual scanning and a rightward shift of perceived line midpoint, but the modest size of this shift points to limitations of this physiological model in simulating the pathologic effects of visual neglect. |
Marc Grosjean; Gerhard Rinkenauer; Stephanie Jainta Where do the eyes really go in the hollow-face illusion? Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e44706, 2012. @article{Grosjean2012, The hollow-face illusion refers to the finding that people typically perceive a concave (hollow) mask as being convex, despite the presence of binocular disparity cues that indicate the contrary. Unlike other illusions of depth, recent research has suggested that the eyes tend to converge at perceived, rather than actual, depths. However, technical and methodological limitations prevented one from knowing whether disparity cues may still have influenced vergence. In the current study, we presented participants with virtual normal or hollow masks and asked them to fixate the tip of the face's nose until they had indicated whether they perceived it as pointing towards or away from them. The results showed that the direction of vergence was indeed determined by perceived depth, although vergence responses were both somewhat delayed and of smaller amplitude (by a factor of about 0.5) for concave than convex masks. These findings demonstrate how perceived depth can override disparity cues when it comes to vergence, albeit not entirely. |
Shaobo Guan; Yu Liu; Ruobing Xia; Mingsha Zhang Covert attention regulates saccadic reaction time by routing between different visual-oculomotor pathways Journal Article In: Journal of Neurophysiology, vol. 107, no. 6, pp. 1748–1755, 2012. @article{Guan2012, Covert attention modulates saccadic performance, e.g., the abrupt onset of a task-irrelevant visual stimulus grabs attention as measured by a decrease in saccadic reaction time (SRT). The attentional advantage bestowed by the task-irrelevant stimulus is short-lived: SRT is actually longer ~200 ms after the onset of a stimulus than it is when no stimulus appears, a phenomenon known as inhibition of return. The mechanism by which attention modulates saccadic reaction is not well understood. Here, we propose two possible mechanisms: selective routing of the visuomotor signal through different pathways (routing hypothesis) or general modulation of the speed of visuomotor transformation (shifting hypothesis). To test them, we designed a cue-gap paradigm that added a 100-ms gap between the disappearance of the fixation point and the appearance of the target to the conventional cued visual reaction time paradigm. The cue manipulated the location of covert attention, and the gap interval resulted in a bimodal distribution of SRT, with an early mode (express saccades) and a late mode (regular saccades). The routing hypothesis predicts changes in the proportion of express saccades vs. regular saccades, whereas the shifting hypothesis predicts a shift of the SRT distribution. The addition of the cue had no effect on the mean reaction time of express and regular saccades, but it changed the relative proportion of the two modes. These results demonstrate that the covert attention modification of the mean SRT is largely attributed to selective routing between visuomotor pathways rather than to general modulation of the speed of visuomotor transformation. |
Katherine Guérard; Jean Saint-Aubin; Marie Poirier Assessing the influence of letter position in reading normal and transposed texts using a letter detection task Journal Article In: Canadian Journal of Experimental Psychology, vol. 66, no. 4, pp. 227–238, 2012. @article{Guerard2012, During word recognition, some letters appear to play a more important role than others. Although some studies have suggested that the first and last letters of a word have a privileged status, there is no consensus with regards to the importance of the different letter positions when reading connected text. In the current experiments, we used a simple letter search task to examine the impact of letter position on word identification in connected text using a classic paper and pencil procedure (Experiment 1) and an eye movement monitoring procedure (Experiment 2). In Experiments 3 and 4, a condition with transposed letters was included. Our results show that the first letter of a word is detected more easily than the other letters, and transposing letters in a word revealed the importance of the final letter. It is concluded that both the initial and final letters play a special role in word identification during reading but that the underlying processes might differ. |
George T. Gitchel; Paul A. Wetzel; Mark S. Baron Pervasive ocular tremor in patients with Parkinson disease Journal Article In: Archives of Neurology, vol. 69, no. 8, pp. 1011–1017, 2012. @article{Gitchel2012, OBJECTIVE: To further assess oculomotor control of patients with Parkinson disease (PD) during fixation and with movement. DESIGN: Case-control study. SETTING: A Parkinson disease research, education, and clinical center. PATIENTS: One hundred twelve patients with PD, including 18 de novo untreated patients, and 60 age-matched controls. INTERVENTION: Modern, precise eye tracking technology was used to assess oculomotor parameters. Oculomotor function was compared between groups during fixation and while tracking a randomly displaced target on a PC monitor. MAIN OUTCOME MEASURES: Fixation stability and saccadic parameters. RESULTS: All patients with PD and 2 of 60 control subjects showed oscillatory fixation instability (ocular tremor), with an average fundamental frequency of 5.7 Hz and average magnitude of 0.27°. Saccadic parameters and occurrences of square wave jerks did not differ between subjects with PD and controls. The amplitude and frequency of fixation instability did not correlate with disease duration, clinical Unified Parkinson's Disease Rating Scale scores, or dopa-equivalent dosing. No differences in oculomotor parameters were found between medicated and unmedicated patients with PD. CONCLUSIONS: All patients with PD exhibited persistent ocular tremor that prevented stability during fixation. The pervasiveness and specificity of this feature suggest that modern, precise oculomotor testing could provide a valuable early physiological biomarker for diagnosing PD. |
Mackenzie G. Glaholt; Keith Rayner; Eyal M. Reingold The mask-onset delay paradigm and the availability of central and peripheral visual information during scene viewing Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–19, 2012. @article{Glaholt2012a, We employed a variant of the mask-onset delay paradigm in order to limit the availability of visual information in central and peripheral vision within individual fixations during scene viewing. Subjects viewed full-color scene photos with instructions to search for a target object (Experiment 1) or to study them for a later memory test (Experiment 2). After a fixed interval following the onset of each eye fixation (50-100 ms), the scene was scrambled either in the central visual field or over the entire display. The intact scene was presented when the subject made an eye movement. Our results reconcile different sets of findings from prior research regarding the masking of central and peripheral visual information at different intervals following fixation onset. In particular, we found that when the entire display was scrambled, both search and memory performance were impaired even at relatively long mask-onset intervals. In contrast, when central vision was scrambled, there were subtle impairments that depended on the viewing task. In the 50-ms mask-onset interval, subjects were selectively impaired at identifying, but not in locating, the search target (Experiment 1), while memory performance (Experiment 2) was unaffected in this condition, and hence, the reliance on central and peripheral visual information depends partly on the viewing task. |
Mackenzie G. Glaholt; Eyal M. Reingold Direct control of fixation times in scene viewing: Evidence from analysis of the distribution of first fixation duration Journal Article In: Visual Cognition, vol. 20, no. 6, pp. 605–626, 2012. @article{Glaholt2012, Participants' eye movements were monitored in two scene viewing experiments that manipulated the task-relevance of scene stimuli and their availability for extrafoveal processing. In both experiments, participants viewed arrays containing eight scenes drawn from two categories. The arrays of scenes were either viewed freely (Free Viewing) or in a gaze-contingent viewing mode where extrafoveal preview of the scenes was restricted (No Preview). In Experiment 1a, participants memorized the scenes from one category that was designated as relevant, and in Experiment 1b, participants chose their preferred scene from within the relevant category. We examined first fixations on scenes from the relevant category compared to the irrelevant category (Experiments 1a and 1b), and those on the chosen scene compared to other scenes not chosen within the relevant category (Experiment 1b). A survival analysis was used to estimate the first discernible influence of the task-relevance on the distribution of first-fixation durations. In the free viewing condition in Experiment 1a, the influence of task relevance occurred as early as 81 ms from the start of fixation. In contrast, the corresponding value in the no preview condition was 254 ms, demonstrating the crucial role of extrafoveal processing in enabling direct control of fixation durations in scene viewing. First fixation durations were also influenced by whether or not the scene was eventually chosen (Experiment 1b), but this effect occurred later and affected fewer fixations than the effect of scene category, indicating that the time course of scene processing is an important variable mediating direct control of fixation durations. |
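The survival analysis referred to in the Glaholt and Reingold abstract above estimates the earliest point at which two first-fixation-duration distributions diverge. A heavily simplified sketch of that idea is given below: compute, for each condition, the proportion of fixations still ongoing at each millisecond, then find the earliest time at which the two survival curves separate and stay separated. This is an assumption-laden stand-in for the bootstrap-based divergence-point procedure used in the literature, not the authors' code; the separation threshold and run length are arbitrary illustrative choices.

```python
import numpy as np

def survival(durations, t_max=600):
    """Proportion of first-fixation durations longer than t, for t = 0..t_max ms."""
    d = np.asarray(durations, dtype=float)
    t = np.arange(t_max + 1)
    return (d[None, :] > t[:, None]).mean(axis=1)

def divergence_point(dur_a, dur_b, threshold=0.015, run=10, t_max=600):
    """Earliest time (ms) at which the two conditions' survival curves differ by
    more than `threshold` for `run` consecutive milliseconds; None if they never
    do. A crude stand-in for the bootstrap-based procedure used in the literature."""
    separated = np.abs(survival(dur_a, t_max) - survival(dur_b, t_max)) > threshold
    for t in range(len(separated) - run + 1):
        if separated[t:t + run].all():
            return t
    return None
```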
Richard Godijn; Jan Theeuwes Overt is no better than covert when rehearsing visuo-spatial information in working memory Journal Article In: Memory & Cognition, vol. 40, no. 1, pp. 52–61, 2012. @article{Godijn2012, In the present study, we examined whether eye movements facilitate retention of visuo-spatial information in working memory. In two experiments, participants memorised the sequence of the spatial locations of six digits across a retention interval. In some conditions, participants were free to move their eyes during the retention interval, but in others they either were required to remain fixated or were instructed to move their eyes exclusively to a selection of the memorised locations. Memory performance was no better when participants were free to move their eyes during the memory interval than when they fixated a single location. Furthermore, the results demonstrated a primacy effect in the eye movement behaviour that corresponded with the memory performance. We conclude that overt eye movements do not provide a benefit over covert attention for rehearsing visuo-spatial information in working memory. |
Esther G. González; Agnes M. F. Wong; Ewa Niechwiej-Szwedo; Luminita Tarita-Nistor; Martin J. Steinbach Eye position stability in amblyopia and in normal binocular vision Journal Article In: Investigative Ophthalmology & Visual Science, vol. 53, no. 9, pp. 5386–5394, 2012. @article{Gonzalez2012, PURPOSE: We investigated whether the sensory impairments of amblyopia are associated with a decrease in eye position stability (PS). METHODS: The positions of both eyes were recorded simultaneously in three viewing conditions: binocular, monocular fellow eye viewing (right eye for controls), and monocular amblyopic eye viewing (left eye for controls). For monocular conditions, movements of the covered eye were also recorded (open-loop testing). Bivariate contour ellipse areas (BCEAs), representing the region over which eye positions were found 68.2% of the time, were calculated and normalized by log transformation. RESULTS: For controls, there were no differences between eyes. Binocular PS (log10 BCEA = -0.88) was better than monocular PS (log10 BCEA = -0.59), indicating binocular summation, and the PS of the viewing eye was better than that of the covered eye (log10 BCEA = -0.33). For patients, the amblyopic eye exhibited a significant decrease in PS during amblyopic eye (log10 BCEA = -0.20), fellow eye (log10 BCEA = 0.0004), and binocular (log10 BCEA = -0.44) viewing. The PS of the fellow eye depended on viewing condition: it was comparable to controls during binocular (log10 BCEA = -0.77) and fellow eye viewing (log10 BCEA = -0.52), but it decreased during amblyopic eye viewing (log10 BCEA = 0.08). Patients exhibited binocular summation during fellow eye viewing, but not during amblyopic eye viewing. The decrease in PS in patients was mainly due to slow eye drifts. CONCLUSIONS: Deficits in spatiotemporal vision in amblyopia are associated with poor PS. PS of amblyopic and fellow eyes is differentially affected depending on viewing condition. |
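For reference, the bivariate contour ellipse area (BCEA) reported in the González et al. abstract above is conventionally computed from the standard deviations of horizontal and vertical gaze position and their correlation; for a 68.2% contour the scaling constant k satisfies 1 - exp(-k) = 0.682, i.e. k ≈ 1.14. The sketch below is a generic implementation of that textbook formula, not code from the cited study.

```python
import numpy as np

def log10_bcea(x_deg, y_deg, proportion=0.682):
    """log10 of the bivariate contour ellipse area (deg^2) enclosing the given
    proportion of gaze samples. x_deg, y_deg are horizontal and vertical eye
    position samples in degrees. BCEA = 2*k*pi*sx*sy*sqrt(1 - rho^2),
    where 1 - exp(-k) = proportion."""
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)
    k = -np.log(1.0 - proportion)            # ~1.14 for a 68.2% contour
    sx, sy = x.std(ddof=1), y.std(ddof=1)    # SDs of horizontal/vertical position
    rho = np.corrcoef(x, y)[0, 1]            # correlation of x and y positions
    bcea = 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
    return np.log10(bcea)
```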
Mary Hegarty; Harvey S. Smallman; Andrew T. Stull Choosing and using geospatial displays: Effects of design on performance and metacognition Journal Article In: Journal of Experimental Psychology: Applied, vol. 18, no. 1, pp. 1–17, 2012. @article{Hegarty2012, Interactive display systems give users flexibility to tailor their visual displays to different tasks and situations. However, in order for such flexibility to be beneficial, users need to understand how to tailor displays to different tasks (to possess “metarepresentational competence”). Recent research suggests that people may desire more complex and realistic displays than are most effective (Smallman & St. John, 2005). In Experiment 1, undergraduate students were tested on a comprehension task with geospatial displays (weather maps) that varied in the number of extraneous variables displayed. Their metacognitive judgments about the relative effectiveness of the displays were also solicited. Extraneous variables slowed response time and increased errors, but participants favored complex maps that looked more realistic about one third of the time. In Experiment 2, the eye fixations of undergraduate students were monitored as they performed the comprehension task. Complex maps that looked more realistic led to more eye fixations on both task-relevant and task-irrelevant regions of the displays. Experiment 3 compared performance of experienced meteorologists and undergraduate students on the comprehension and metacognitive tasks. Meteorologists were as likely as undergraduate students to prefer geographically complex (realistic) displays and more likely than undergraduates to opt for displays that added extraneous weather variables. However, meteorologists were also slower and less accurate with complex than with simple displays. This work highlights the importance of empirically testing principles of visual display design and suggests some limits to metarepresentational competence. |
Christoph Helmchen; Jonas Pohlmann; Peter Trillenberg; Rebekka Lencer; Julia Graf; Andreas Sprenger Role of anticipation and prediction in smooth pursuit eye movement control in Parkinson's disease Journal Article In: Movement Disorders, vol. 27, no. 8, pp. 1012–1018, 2012. @article{Helmchen2012, Patients with Parkinson's disease (PD) have difficulties in the control of self-guided (i.e., internally driven) movements. The basal ganglia provide a nonspecific internal cue for the development of a preparatory activity for a given movement in the sequence of repetitive movements. Controversy surrounds the question of whether PD patients are capable of (1) anticipating (before an external trigger appears; i.e., anticipation) and (2) predicting movement velocity once a moving target briefly disappears from the visual scene (i.e., prediction). To dissociate between these two components, we examined internally driven (extraretinally generated) smooth pursuit eye movements in PD patients and age-matched healthy controls by systematically varying target blanking periods of a trapezoidally moving target in four paradigms (initial blanking, midramp blanking, blanking after a short ramp, and no blanking). Compared to controls, PD patients showed (1) decreased smooth pursuit gain (without blanking), (2) deficient anticipatory pursuit (prolonged pursuit initiation latency; reduced eye velocity before target onset in the early onset blanking paradigm), and (3) preserved extraretinal predictive pursuit velocity (midramp target blanking). Deficient anticipation of future target motion was not related to either disease duration or the general motor impairment (UPDRS). We conclude that PD patients have difficulties in anticipating future target motion, which may play a role in the mechanisms involved in deficient gait initiation and termination in PD. In contrast, they remain unimpaired in their capacity to build up an internal representation of continuous target motion. This may explain the clinical advantage of medical devices that use visual motion to improve gait initiation (e.g., "PD glasses"). |
John M. Henderson; Steven G. Luke Oculomotor inhibition of return in normal and mindless reading Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1101–1107, 2012. @article{Henderson2012, Oculomotor inhibition of return (O-IOR) is an increase in saccade latency prior to an eye movement to a recently fixated location, as compared with other locations. To investigate O-IOR in reading, subjects participated in two conditions while their eye movements were recorded: normal reading and mindless reading with words replaced by geometric shapes. We investigated the manifestation of O-IOR in reading and whether it is related to extracting meaning from the text or is an oculomotor phenomenon. The results indicated that fixation durations prior to a saccade returning to the immediately preceding fixated word were longer than those to other words, consistent with O-IOR. Furthermore, fixation durations were longest prior to a saccade that returned the eyes to the specific character position in the word that had previously been fixated and dropped off as the distance between the previously fixated character and landing position increased. This result is consistent with the hypothesis that O-IOR is relatively precise in its application during reading and drops off as a gradient. Both of these results were found for text reading and for mindless reading, suggesting that they are consequences of oculomotor control, and not of language processing. Finally, although these temporal IOR effects were robust, no spatial consequences of IOR were observed: Previously fixated words and characters were as likely to be refixated as new words and characters. |
Scott A. Guerin; Clifford A. Robbins; Adrian W. Gilmore; Daniel L. Schacter Retrieval failure contributes to gist-based false recognition Journal Article In: Journal of Memory and Language, vol. 66, no. 1, pp. 68–78, 2012. @article{Guerin2012, People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as gist-based false recognition. A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding conditions that produce high rates of gist-based false recognition, participants overwhelmingly chose the correct target rather than its related foil when given the option to do so. A second experiment showed that this result is due to increased access to stored details provided by reinstatement of the originally encoded photograph, rather than to increased attention to the details. Collectively, these results suggest that details needed for accurate recognition are, to a large extent, still stored in memory and that a critical factor determining whether false recognition will occur is whether these details can be accessed during retrieval. |
Maria J. S. Guerreiro; Jos J. Adam; Pascal W. M. Van Gerven Automatic selective attention as a function of sensory modality in aging Journal Article In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 67, no. 2, pp. 194–202, 2012. @article{Guerreiro2012, Objectives. It was recently hypothesized that age-related differences in selective attention depend on sensory modality (Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136, 975–1022. doi:10.1037/a0020731). So far, this hypothesis has not been tested in automatic selective attention. The current study addressed this issue by investigating age-related differences in automatic spatial cueing effects (i.e., facilitation and inhibition of return [IOR]) across sensory modalities. Methods. Thirty younger (mean age = 22.4 years) and 25 older adults (mean age = 68.8 years) performed 4 left–right target localization tasks, involving all combinations of visual and auditory cues and targets. We used stimulus onset asynchronies (SOAs) of 100, 500, 1,000, and 1,500 ms between cue and target. Results. The results showed facilitation (shorter reaction times with valid relative to invalid cues at shorter SOAs) in the unimodal auditory and in both cross-modal tasks but not in the unimodal visual task. In contrast, there was IOR (longer reaction times with valid relative to invalid cues at longer SOAs) in both unimodal tasks but not in either of the cross-modal tasks. Most important, these spatial cueing effects were independent of age. Discussion. The results suggest that the modality hypothesis of age-related differences in selective attention does not extend into the realm of automatic selective attention. |
A. Guillaume Saccadic inhibition is accompanied by large and complex amplitude modulations when induced by visual backward masking Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 1–20, 2012. @article{Guillaume2012, Saccadic inhibition refers to the strong temporary decrease in saccadic initiation observed when a visual distractor appears shortly after the onset of a saccadic target. Here, to gain a better understanding of this phenomenon, we assessed whether saccade amplitude changes could accompany these modulations of latency distributions. As previous studies on the saccadic system using visual backward masking–a protocol in which the mask appears shortly after the target–showed latency increases and amplitude changes, we suspected that this could be a condition in which amplitude changes would accompany saccadic inhibition. We show here that visual backward masking produces a strong saccadic inhibition. In addition, this saccadic inhibition was accompanied by large and complex amplitude changes: a first phase of gain decrease occurred before the saccadic inhibition; when saccades reappeared after the inhibition, they were accurate before rapidly entering into a second phase of gain decrease. We observed changes in saccade kinematics that were consistent with the possibility of saccades being interrupted during these two phases of gain decrease. These results show that the onset of a large stimulus shortly after a first one induces the previously reported saccadic inhibition, but also induces a complex pattern of amplitude changes resulting from a dual amplitude perturbation mechanism with fast and slow components. |
Fei Guo; Tim J. Preston; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Feature-independent neural coding of target detection during search of natural scenes Journal Article In: Journal of Neuroscience, vol. 32, no. 28, pp. 9499–9510, 2012. @article{Guo2012, Visual search requires humans to detect a great variety of target objects in scenes cluttered by other objects or the natural environment. It is unknown whether there is a general purpose neural detection mechanism in the brain that codes the presence of a wide variety of categories of objects embedded in natural scenes. We provide evidence for a feature-independent coding mechanism for detecting behaviorally relevant targets in natural scenes in the dorsal frontoparietal network. Pattern classifiers using single-trial fMRI responses in the dorsal frontoparietal network reliably predicted the presence of 368 different target objects and also the observer's choices. Other vision-related areas such as the primary visual cortex, lateral occipital complex, the parahippocampal, and the fusiform gyri did not predict target presence, while high-level association areas related to general purpose decision making, including the dorsolateral prefrontal cortex and anterior cingulate, did. Activity in the intraparietal sulcus, a main area in the dorsal frontoparietal network, correlated with observers' decision confidence and with the task difficulty of individual images. These results cannot be explained by physical differences across images or eye movements. Thus, the dorsal frontoparietal network detects behaviorally relevant targets in natural scenes independent of their defining visual features and may be the human analog of the priority map in monkey lateral intraparietal cortex. |
Rashmi Gupta; Jane E. Raymond Emotional distraction unbalances visual processing Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 2, pp. 184–189, 2012. @article{Gupta2012, Brain mechanisms used to control nonemotional aspects of cognition may be distinct from those regulating responses to emotional stimuli, with activity of the latter being detrimental to the former. Previous studies have shown that suppression of irrelevant emotional stimuli produces a largely right-lateralized pattern of frontal brain activation, thus predicting that emotional stimuli may invoke temporary, lateralized costs to performance on nonemotional cognitive tasks. To test this, we briefly (85 ms) presented a central, irrelevant, expressive (angry, happy, sad, or fearful) or neutral face 100 ms prior to a letter search task. The presentation of emotional versus neutral faces slowed subsequent search for targets appearing in the left, but not the right, hemifield, supporting the notion of a right-lateralized, emotional response mechanism that competes for control with nonemotional cognitive processes. Presentation of neutral, scrambled, or inverted neutral faces produced no such laterality effects on visual search response times. |
Britta Hahn; Benjamin M. Robinson; Alexander N. Harvey; Samuel T. Kaiser; Carly J. Leonard; Steven J. Luck; James M. Gold Visuospatial attention in schizophrenia: Deficits in broad monitoring Journal Article In: Journal of Abnormal Psychology, vol. 121, no. 1, pp. 119–128, 2012. @article{Hahn2012, Although selective attention is thought to be impaired in people with schizophrenia (PSZ), prior research has found no deficit in the ability to select one location and withdraw attention from another. PSZ and healthy control subjects (HCS) performed a stimulus detection task in which one, two, or all four peripheral target locations were cued. When one or two locations were cued, both PSZ and HCS responded faster when the target appeared at a cued than uncued location. However, increases in the number of validly cued locations had much more deleterious effects on performance for PSZ than HCS, especially for targets of low contrast whose detection was more dependent on attention. PSZ also responded more slowly in trials with four cued locations relative to trials with one or two invalidly cued locations. Thus, visuospatial attention deficits in schizophrenia arise when broad monitoring is required rather than when attention must be focused narrowly. |
Adrian M. Haith; Thomas R. Reppert; Reza Shadmehr Evidence for hyperbolic temporal discounting of reward in control of movements Journal Article In: Journal of Neuroscience, vol. 32, no. 34, pp. 11727–11736, 2012. @article{Haith2012, Suppose that the purpose of a movement is to place the body in a more rewarding state. In this framework, slower movements may increase accuracy and therefore improve the probability of acquiring reward, but the longer durations of slow movements produce devaluation of reward. Here we hypothesize that the brain decides the vigor of a movement (duration and velocity) based on the expected discounted reward associated with that movement. We begin by showing that durations of saccades of varying amplitude can be accurately predicted by a model in which motor commands maximize expected discounted reward. This result suggests that reward is temporally discounted even in timescales of tens of milliseconds. One interpretation of temporal discounting is that the true objective of the brain is to maximize the rate of reward, which is equivalent to a specific form of hyperbolic discounting. A consequence of this idea is that the vigor of saccades should change as one alters the intertrial intervals between movements. We find experimentally that in healthy humans, as intertrial intervals are varied, saccade peak velocities and durations change on a trial-by-trial basis precisely as predicted by a model in which the objective is to maximize the rate of reward. Our results are inconsistent with theories in which reward is discounted exponentially. We suggest that there exists a single cost, rate of reward, which provides a unifying principle that may govern control of movements in timescales of milliseconds, as well as decision making in timescales of seconds to years. |
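The equivalence asserted in this abstract between maximizing the rate of reward and a specific form of hyperbolic discounting can be illustrated with a simple worked expression; the notation below (reward value, success probability, intertrial interval) is an illustrative sketch and is not taken verbatim from the paper.

\[
J(T) \;=\; \frac{\alpha \, P(T)}{T + T_{\text{iti}}}
\]

Here \(\alpha\) is the subjective value of the reward, \(P(T)\) is the probability of acquiring it (which grows with movement duration \(T\), since slower movements are more accurate), and \(T_{\text{iti}}\) is the intertrial interval. Because total elapsed time appears in the denominator, reward value is discounted hyperbolically in \(T\), and the duration that maximizes \(J\) shifts when \(T_{\text{iti}}\) changes, matching the reported trial-by-trial dependence of saccade vigor on intertrial interval.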
Kai Christoph Hamborg; M. Bruns; F. Ollermann; Kai Kaspar The effect of banner animation on fixation behavior and recall performance in search tasks Journal Article In: Computers in Human Behavior, vol. 28, no. 2, pp. 576–582, 2012. @article{Hamborg2012, Previous findings suggested that banner ads have little or no impact on perceptual behavior and memory performance in search tasks, affecting users only in browsing paradigms. This assumption is not supported by the present eye-tracking study. It investigates whether task-related selective attention is disrupted depending on the animation intensity of banner ads when users are in a search mode, as well as the impact of banner animation on perceptual and memory performance. We find that fixation frequency on banners increases with animation intensity. Moreover, a specific temporal course of fixation frequency on banners could be observed. However, the duration of fixations on a banner is independent of its animation intensity. Results also reveal that animation enhances the recall performance of banner content. The subject of the advertisement, the position of the banner, and its text and colors are recalled better when the banner is animated than when it is not, although animation intensity has no impact on banner-related recall performance. Importantly, the performance in the actual information search task is not affected by banner animation. Moreover, animation intensity does not affect subjects' attitude towards the banner ad. |
Christopher J. Hand; Patrick J. O'Donnell; Sara C. Sereno Word-initial letters influence fixation durations during fluent reading Journal Article In: Frontiers in Psychology, vol. 3, pp. 85, 2012. @article{Hand2012, The present study examined how word-initial letters influence lexical access during reading. Eye movements were monitored as participants read sentences containing target words. Three factors were independently manipulated. First, target words had either high or low constraining word-initial letter sequences (e.g., dwarf or clown, respectively). Second, targets were either high or low in frequency of occurrence (e.g., train or stain, respectively). Third, targets were embedded in either biasing or neutral contexts (i.e., targets were high or low in their predictability). This 2 (constraint) × 2 (frequency) × 2 (context) design allowed us to examine the conditions under which a word's initial letter sequence could facilitate processing. Analyses of fixation duration data revealed significant main effects of constraint, frequency, and context. Moreover, in measures taken to reflect “early” lexical processing (i.e., first and single fixation duration), there was a significant interaction between constraint and context. The overall pattern of findings suggests lexical access is facilitated by highly constraining word-initial letters. Results are discussed in comparison to recent studies of lexical features involved in word recognition during reading. |
Josselin Gautier; O. Le Meur A time-dependent saliency model combining center and depth biases for 2D and 3D viewing conditions Journal Article In: Cognitive Computation, vol. 4, no. 2, pp. 141–156, 2012. @article{Gautier2012, The role of binocular disparity in the deployment of visual attention is examined in this paper. To address this point, we compared eye tracking data recorded while observers viewed natural images in 2D and 3D conditions. The influence of disparity on saliency, center and depth biases is first studied. Results show that visual exploration is affected by the introduction of binocular disparity. In particular, participants tend to look first at closer areas in the 3D condition and then direct their gaze to more widespread locations. Besides this behavioral analysis, we assess the extent to which state-of-the-art models of bottom-up visual attention predict where observers looked in both viewing conditions. To improve their ability to predict salient regions, low-level features as well as higher-level foreground/background cues are examined. Results indicate that, following the initial centering response, the foreground feature plays an active role in the early and middle instants of attention deployment. Importantly, this influence is more pronounced in stereoscopic conditions. It supports the notion of a quasi-instantaneous bottom-up saliency modulated by higher figure/ground processing. Beyond depth information itself, the foreground cue might constitute an early process of “selection for action”. Finally, we propose a time-dependent computational model to predict saliency on still pictures. The proposed approach combines low-level visual features, center and depth biases. It outperforms state-of-the-art models of bottom-up attention. |
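The general structure of the time-dependent model described in this entry, combining low-level saliency with center and depth (foreground) biases, can be sketched as a time-weighted combination of maps. The additive form and the symbols below are an assumption made for illustration, not the authors' exact formulation.

\[
S(x, t) \;=\; w_{\text{low}}(t)\, S_{\text{low}}(x) \;+\; w_{\text{c}}(t)\, B_{\text{c}}(x) \;+\; w_{\text{fg}}(t)\, B_{\text{fg}}(x)
\]

Here \(S_{\text{low}}\) is a bottom-up low-level saliency map, \(B_{\text{c}}\) a central Gaussian capturing the center bias, and \(B_{\text{fg}}\) a foreground/background map derived from depth; the weights vary with viewing time so that, consistent with the findings summarized above, the center term dominates the very first fixations and the foreground term contributes most in the early-to-middle period, more strongly under stereoscopic viewing.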
Frouke Hermens; Robin Walker The site of interference in the saccadic Stroop effect Journal Article In: Vision Research, vol. 73, pp. 10–22, 2012. @article{Hermens2012, In two experiments, the source of competition in the saccadic Stroop effect was investigated. Colored strings of letters were presented at fixation with colored patches in the surround. The task of the participants was to make an eye movement to the patch in the same color as the central string of letters. Three types of cues were compared: Either the string of letters formed a word indicating a direction (the saccadic Stroop condition), or it was a set of arrow signs, or a peripheral stimulus appeared. Whereas response times and saccade errors were similarly influenced by the different types of cues, saccade trajectory deviations away from the cue were found only for peripheral onsets. A second experiment demonstrated that the absence of the curvature effects for direction words was not due to insufficient time to process the words. The results raise doubts about whether the saccadic Stroop effect is truly an oculomotor effect and could pose a challenge to models of saccade target selection. |
Frouke Hermens; Robin Walker Do you look where I look? Attention shifts and response preparation following dynamic social cues Journal Article In: Journal of Eye Movement Research, vol. 5, no. 5, pp. 1–11, 2012. @article{Hermens2012a, Studies investigating the effects of observing a gaze shift in another person often apply static images of a person with an averted gaze, while measuring response times to a peripheral target. Static images, however, are unlike how we normally perceive gaze shifts of others. Moreover, response times might only reveal the effects of a cue on covert attention and might fail to uncover cueing effects on overt attention or response preparation. We therefore extended the standard paradigm and measured cueing effects for more realistic, dynamic cues (video clips), while comparing response times, saccade direction errors and saccade trajectories. Three cues were compared: A social cue, consisting of an eye-gaze shift, and two socially less relevant cues, consisting of a head tilting movement and a person walking past. Similar results were found for the two centrally presented cues (eye-gaze shift and head tilting) on all three response measures, suggesting that cueing is unaffected by the social status of the cue. Interestingly, the cue showing a person walking past showed a dissociation in the direction of the effects on response times on the one hand, and saccade direction errors and latencies on the other hand, suggesting the involvement of two types of (endogenous and exogenous) attention or a distinction between attention and saccadic response preparation. Our results suggest that by using dynamic cues and multiple response measures, properties of cueing can be revealed that would not be found otherwise. |
Frouke Hermens; Johannes Zanker Looking at Op Art: Gaze stability and motion illusions Journal Article In: i-Perception, vol. 3, no. 5, pp. 282–304, 2012. @article{Hermens2012b, Various Op artists have used simple geometrical patterns to create the illusion of motion in their artwork. One explanation for the observed illusion involves retinal shifts caused by small involuntary eye movements that observers make while they try to maintain fixation. Earlier studies have suggested a prominent role of the most conspicuous of these eye movements, small rapid position shifts called microsaccades. Here, we present data that could expand this view with a different interpretation. In three experiments, we recorded participants' eye movements while they tried to maintain visual fixation when being presented with variants of Bridget Riley's Fall, which were manipulated such as to vary the strength of induced motion. In the first two experiments, we investigated the properties of microsaccades for a set of stimuli with known motion strengths. In agreement with earlier observations, microsaccade rates were unaffected by the stimulus pattern and, consequently, the strength of induced motion illusion. In the third experiment, we varied the stimulus pattern across a larger range of parameters and asked participants to rate the perceived motion illusion. The results revealed that motion illusions in patterns resembling Riley's Fall are perceived even in the absence of microsaccades, and that the reported strength of the illusion decreased with the number of microsaccades in the trial. Together, the three experiments suggest that other sources of retinal image instability than microsaccades, such as slow oculomotor drift, should be considered as possible factors contributing to the illusion. |
Katrin Herrmann; David J. Heeger; Marisa Carrasco Feature-based attention enhances performance by increasing response gain Journal Article In: Vision Research, vol. 74, pp. 10–20, 2012. @article{Herrmann2012, Covert spatial attention can increase contrast sensitivity either by changes in contrast gain or by changes in response gain, depending on the size of the attention field and the size of the stimulus (Herrmann et al., 2010), as predicted by the normalization model of attention (Reynolds & Heeger, 2009). For feature-based attention, unlike spatial attention, the model predicts only changes in response gain, regardless of whether the featural extent of the attention field is small or large. To test this prediction, we measured the contrast dependence of feature-based attention. Observers performed an orientation-discrimination task on a spatial array of grating patches. The spatial locations of the gratings were varied randomly so that observers could not attend to specific locations. Feature-based attention was manipulated with a 75% valid and 25% invalid pre-cue, and the featural extent of the attention field was manipulated by introducing uncertainty about the upcoming grating orientation. Performance accuracy was better for valid than for invalid pre-cues, consistent with a change in response gain, when the featural extent of the attention field was small (low uncertainty) or when it was large (high uncertainty) relative to the featural extent of the stimulus. These results for feature-based attention clearly differ from results of analogous experiments with spatial attention, yet both support key predictions of the normalization model of attention. |
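For readers unfamiliar with the normalization model of attention invoked in this entry, its core computation can be summarized in a single equation. The form below follows the general structure described by Reynolds and Heeger (2009), written here as a simplified sketch rather than the exact formulation used by the authors.

\[
R(x,\theta) \;=\; \frac{A(x,\theta)\, E(x,\theta)}{\sigma + \sum_{x',\theta'} A(x',\theta')\, E(x',\theta')}
\]

Here \(E\) is the excitatory stimulus drive over spatial position \(x\) and feature value \(\theta\), \(A\) is the attention field that multiplies that drive, \(\sigma\) is a semisaturation constant, and the denominator pools the attention-modulated drive over space and features (the suppressive drive). Whether attention produces contrast-gain or response-gain changes then depends on the extent of the attention field relative to the stimulus, which is the prediction the experiments above test along the featural dimension.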
Constanze Hesse; Keira Ball; Thomas Schenk Visuomotor performance based on peripheral vision is impaired in the visual form agnostic patient DF Journal Article In: Neuropsychologia, vol. 50, no. 1, pp. 90–97, 2012. @article{Hesse2012, The perception-action model states that visual information is processed in different cortical areas depending on the purpose for which the information is acquired. Specifically, it was suggested that the ventral stream mediates visual perception, whereas the dorsal stream primarily processes visual information for the guidance of actions (Goodale & Milner, 1992). Evidence for the model comes from patient studies showing that patients with ventral stream damage (visual form agnosia) and patients with dorsal stream damage (optic ataxia) show divergent performance in action and perception tasks. Whereas DF, a patient suffering from visual form agnosia, was found to perform well in visuomotor tasks despite her inability to use vision for perceptual tasks, patients with optic ataxia usually show the opposite pattern, i.e. good perception but impaired visuomotor control. The finding that both disorders seem to provoke a mirror-reversed pattern of spared and impaired visual functions led to the belief that optic ataxia and visual form agnosia can be considered as complementary disorders. However, the visuomotor performance of patients with optic ataxia is typically impaired only when they are tested in the visual periphery and is often preserved when they are tested in central vision. Here, we show that DF's visuomotor performance is also only preserved when the target is presented centrally. Her reaching and grasping movements to targets in peripheral vision are abnormal. Our findings indicate that DF's visuomotor performance is quite similar to the visuomotor performance of patients with optic ataxia, which undermines previous suggestions that the two disorders form a double dissociation. |
Matthew D. Hilchey; Raymond M. Klein; Jason Ivanoff Perceptual and motor inhibition of return: Components or flavors? Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 7, pp. 1416–1429, 2012. @article{Hilchey2012, The most common evidence for inhibition of return (IOR) is the robust finding of increased response times to targets that appear at previously cued locations following a cue-target interval exceeding ~300 ms. In a variation on this paradigm, Abrams and Dobkin (Journal of Experimental Psychology: Human Perception and Performance 20:467-477, 1994b) observed that IOR was greater when measured with a saccadic response to a peripheral target than with that to a central arrow, leading to the conclusion that saccadic responses to peripheral targets comprise motoric and perceptual components (the two-components theory for saccadic IOR), whereas saccadic responses to a central target comprise a single motoric component. In contrast, Taylor and Klein (Journal of Experimental Psychology: Human Perception and Performance 26:1639-1656, 2000) discovered that IOR for saccadic responses was equivalent for central and peripheral targets, suggesting a single motoric effect under these conditions. Rooted in methodological differences between the studies, three possible explanations for this discrepancy can be found in the literature. Here, we demonstrate that the empirical discrepancy is rooted in the following methodological difference: Whereas Abrams and Dobkin (Journal of Experimental Psychology: Human Perception and Performance 20:467-477, 1994b) administered central arrow and peripheral onset targets in separate blocks, Taylor and Klein (Journal of Experimental Psychology: Human Perception and Performance 26:1639-1656, 2000) randomly intermixed these stimuli in a single block. Our results demonstrate that (1) blocking central arrow targets fosters a spatial attentional control setting that allows for the long-lasting IOR normally generated by irrelevant peripheral cues to be filtered and (2) repeated sensory stimulation has no direct effect on the magnitude of IOR measured by saccadic responses to targets presented about 1 s after a peripheral cue. |
Matthew D. Hilchey; Raymond M. Klein; Jason Satel; Zhiguo Wang Oculomotor inhibition of return: how soon is it "recoded" into spatiotopic coordinates? Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 6, pp. 1145–1153, 2012. @article{Hilchey2012a, When, in relation to the execution of an eye movement, does the recoding of visual information from retinotopic to spatiotopic coordinates happen? Two laboratories seeking to answer this question using oculomotor inhibition of return (IOR) have generated different answers: Mathôt and Theeuwes (Psychological Science 21:1793-1798, 2010) found evidence for the initial coding of IOR to be retinotopic, while Pertzov, Zohary, and Avidan (Journal of Neuroscience 30:8882-8887, 2010) found evidence for spatiotopic IOR at even shorter postsaccadic intervals than were tested by Mathôt and Theeuwes (Psychological Science 21:1793-1798, 2010). To resolve this discrepancy, we conducted two experiments that combined the methods of the previous two studies while testing as early as possible. We found early spatiotopic IOR in both experiments, suggesting that visual events, including prior fixations, are typically coded into an abstract, allocentric representation of space either before or during eye movements. This type of coding enables IOR to encourage orienting toward novelty and, consequently, to perform the role of a foraging facilitator. |
Anne P. Hillstrom; Helen Scholey; Simon P. Liversedge; Valerie Benson The effect of the first glimpse at a scene on eye movements during search Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 2, pp. 204–210, 2012. @article{Hillstrom2012, Previewing scenes briefly makes finding target objects more efficient when viewing is through a gaze-contingent window (windowed viewing). In contrast, showing a preview of a randomly arranged search display does not benefit search efficiency when viewing during search is of the full display. Here, we tested whether a scene preview is beneficial when the scene is fully visible during search. Scene previews, when presented, were 250 ms in duration. During search, the scene was either fully visible or windowed. A preview always provided an advantage, in terms of decreasing the time to initially fixate and respond to targets and in terms of the total number of fixations. In windowed visibility, a preview reduced the distance of fixations from the target position until at least the fourth fixation. In full visibility, previewing reduced the distance of the second fixation but not of later fixations. The gist information derived from the initial glimpse of a scene allowed for placement of the first one or two fixations at information-rich locations, but when nonfoveal information was available, subsequent eye movements were only guided by online information. |
Dana Schneider; Andrew P. Bayliss; Stefanie I. Becker; Paul E. Dux Eye movements reveal sustained implicit processing of others' mental states Journal Article In: Journal of Experimental Psychology: General, vol. 141, no. 3, pp. 433–438, 2012. @article{Schneider2012, The ability to attribute mental states to others is crucial for social competency. To assess mentalizing abilities, in false-belief tasks participants attempt to identify an actor's belief about an object's location as opposed to the object's actual location. Passing this test on explicit measures is typically achieved by 4 years of age, but recent eye movement studies reveal registration of others' beliefs by 7 to 15 months. Consequently, a 2-path mentalizing system has been proposed, consisting of a late developing, cognitively demanding component and an early developing, implicit/automatic component. To date, investigations on the implicit system have been based on single-trial experiments only or have not examined how it operates across time. In addition, no study has examined the extent to which participants are conscious of the belief states of others during these tasks. Thus, the existence of a distinct implicit mentalizing system is yet to be demonstrated definitively. Here we show that adults engaged in a primary unrelated task display eye movement patterns consistent with mental state attributions across a sustained temporal period. Debriefing supported the hypothesis that this mentalizing was implicit. It appears there indeed exists a distinct implicit mental state attribution system. |
Dana Schneider; Rebecca Lam; Andrew P. Bayliss; Paul E. Dux Cognitive load disrupts implicit theory-of-mind processing Journal Article In: Psychological Science, vol. 23, no. 8, pp. 842–847, 2012. @article{Schneider2012a, Eye movements in Sally-Anne false-belief tasks appear to reflect the ability to implicitly monitor the mental states of other individuals (theory of mind, or ToM). It has recently been proposed that an early-developing, efficient, and automatically operating ToM system subserves this ability. Surprisingly absent from the literature, however, is an empirical test of the influence of domain-general executive processing resources on this implicit ToM system. In the study reported here, a dual-task method was employed to investigate the impact of executive load on eye movements in an implicit Sally-Anne false-belief task. Under no-load conditions, adult participants displayed eye movement behavior consistent with implicit belief processing, whereas evidence for belief processing was absent for participants under cognitive load. These findings indicate that the cognitive system responsible for implicitly tracking beliefs draws at least minimally on executive processing resources. Thus, even the most low-level processing of beliefs appears to reflect a capacity-limited operation. |
Elisa Schneider; Masaki Maruyama; Stanislas Dehaene; Mariano Sigman Eye gaze reveals a fast, parallel extraction of the syntax of arithmetic formulas Journal Article In: Cognition, vol. 125, no. 3, pp. 475–490, 2012. @article{Schneider2012b, Mathematics shares with language an essential reliance on the human capacity for recursion, permitting the generation of an infinite range of embedded expressions from a finite set of symbols. We studied the role of syntax in arithmetic thinking, a neglected component of numerical cognition, by examining eye movement sequences during the calculation of arithmetic expressions. Specifically, we investigated whether, similar to language, an expression has to be scanned sequentially while the nested syntactic structure is being computed or, alternatively, whether this structure can be extracted quickly and in parallel. Our data provide evidence for the latter: fixation sequences were stereotypically organized in clusters that reflected a fast identification of syntactic embeddings. A syntactically relevant pattern of eye movements was observed even when syntax was defined by implicit procedural rules (precedence of multiplication over addition) rather than explicit parentheses. While the total number of fixations was determined by syntax, the duration of each fixation varied with the complexity of the arithmetic operation at each step. These findings provide strong evidence for a syntactic organization of arithmetic thinking, paving the way for further comparative analysis of differences and coincidences in the instantiation of recursion in language and mathematics. |
Fabian Schnier; Markus Lappe Mislocalization of stationary and flashed bars after saccadic inward and outward adaptation of reactive saccades Journal Article In: Journal of Neurophysiology, vol. 107, no. 11, pp. 3062–3070, 2012. @article{Schnier2012, Recent studies have shown that saccadic inward adaptation (i.e., the shortening of saccade amplitude) and saccadic outward adaptation (i.e., the lengthening of saccade amplitude) rely on partially different neuronal mechanisms. There is increasing evidence that these differences arise at the target registration or planning stages, since outward but not inward adaptation transfers to hand-pointing and to perceptual localization of flashed targets. Furthermore, the transfer of reactive saccade adaptation to long-duration overlap and scanning saccades is stronger after saccadic outward adaptation than after saccadic inward adaptation, suggesting that target registration stages modulated during outward adaptation are increasingly used in the execution of saccades when the saccade target is visually available for a longer time. The difference in target presentation duration between reactive and scanning saccades is also linked to a difference in perceptual localization of different targets. Flashed targets are mislocalized after inward adaptation of reactive and scanning saccades, but targets that are presented for a longer time (stationary targets) are mislocalized more strongly after scanning than after reactive saccades. This link between perceptual localization and adaptation specificity suggests that mislocalization of stationary bars should be higher after outward than after inward adaptation of reactive saccades. In the present study we test this prediction. We show that the relative amount of mislocalization of stationary versus flashed bars is higher after outward than after inward adaptation of reactive saccades. Furthermore, during fixation, stationary and flashed bars were mislocalized after outward but not after inward adaptation. Thus, our results provide further evidence for different adaptation mechanisms for inward and outward adaptation and help to reconcile some recent findings. |
Casey A. Schofield; Ashley L. Johnson; Albrecht W. Inhoff; Meredith E. Coles Social anxiety and difficulty disengaging threat: Evidence from eye-tracking Journal Article In: Cognition and Emotion, vol. 26, no. 2, pp. 300–311, 2012. @article{Schofield2012, Theoretical models of social phobia propose that biased attention contributes to the maintenance of symptoms; however, these theoretical models make opposing predictions. Specifically, whereas Rapee and Heimberg (1997) suggested the biases are characterised by hypervigilance to threat cues and difficulty disengaging attention from threat, Clark and Wells (1995) suggested that threat cues are largely avoided. Previous research has been limited by the almost exclusive reliance on behavioural response times to experimental tasks to provide an index of attentional biases. The current study evaluated the relationship between the time-course of attention and symptoms of social anxiety and depression. Forty-two young adults completed a dot-probe task with emotional faces while eye-movement data were collected. The results revealed that increased social anxiety was associated with attention to emotional (rather than neutral) faces over time as well as difficulty disengaging attention from angry expressions; some evidence was found for a relationship between heightened depressive symptoms and increased attention to fear faces. |
Jörg Schorer; Florian Loffing; Norbert Hagemann; Joseph Baker Human handedness in interactive situations: Negative perceptual frequency effects can be reversed! Journal Article In: Journal of Sports Sciences, vol. 30, no. 5, pp. 507–513, 2012. @article{Schorer2012, Left-handed performers seem to enjoy an advantage in interactive sports. Researchers suggest this is predominantly due to the relative scarcity of left-handers compared with right-handers. Such negative frequency-dependent advantages are likely to appear in inefficient game-play behaviour against left-handed opponents such as reduced ability to correctly anticipate left-handers' action intentions. We used a pre-post retention design to test whether such negative frequency-dependent perceptual effects can be reversed via effective training. In a video-based test, 30 handball novices anticipated the shot outcome of temporally occluded handball penalties thrown by right- and left-handed players. Between the pre- and post-tests, participants underwent a perceptual training programme to improve prediction accuracy, followed by an unfilled retention test one week later. Participants were divided into two hand-specific training groups (i.e. only right- or left-handed shots were presented during training) and a mixed group (i.e. both right- and left-handed shots were presented). Our results support the negative frequency-dependent advantage hypothesis, as hand-specific perceptual training led to side-specific improvement of anticipation skills. Similarly, findings provide experimental evidence to support the contention that negatively frequency-dependent selection mechanisms contributed to the maintenance of the handedness polymorphism. |