All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014
John M. Henderson; Steven G. Luke Stable individual differences in saccadic eye movements during reading, pseudoreading, scene viewing, and scene search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1390–1400, 2014. Mean fixation duration and mean saccade amplitude during active viewing tasks differ from person to person. Previous studies have shown that these individual differences tend to be stable across at least some tasks, suggesting that they may reflect underlying traits associated with individuals. However, whether these individual differences are also stable over time has not been established. The present study established stable individual differences in mean fixation duration and mean saccade amplitude across 4 viewing tasks, showed that the observed individual differences are stable over several days, and extended these results to standard deviations of fixation duration and saccade amplitude. The results have implications for theories of eye movement control and for using eye movement characteristics as individual difference measures.
John M. Henderson; Jennifer Olejarczyk; Steven G. Luke; Joseph Schmidt Eye movement control during scene viewing: Immediate degradation and enhancement effects of spatial frequency filtering Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 486–502, 2014. What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.
Ernesto Guerra; Pia Knoeferle Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking Journal Article In: Cognition, vol. 133, no. 3, pp. 535–552, 2014. A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.
Maria J. S. Guerreiro; Jos J. Adam; Pascal W. M. Van Gerven Aging and response interference across sensory modalities Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 3, pp. 836–842, 2014. Advancing age is associated with decrements in selective attention. It was recently hypothesized that age-related differences in selective attention depend on sensory modality. The goal of the present study was to investigate the role of sensory modality in age-related vulnerability to distraction, using a response interference task. To this end, 16 younger (mean age = 23.1 years) and 24 older (mean age = 65.3 years) adults performed four response interference tasks, involving all combinations of visual and auditory targets and distractors. The results showed that response interference effects differ across sensory modalities, but not across age groups. These results indicate that sensory modality plays an important role in vulnerability to distraction, but not in age-related distractibility by irrelevant spatial information.
Matthew Haigh; Heather J. Ferguson; Andrew J. Stewart An eye-tracking investigation into readers' sensitivity to actual versus expected utility in the comprehension of conditionals Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 1, pp. 166–185, 2014. The successful comprehension of a utility conditional (i.e., an "if p, then q" statement where p and/or q is valued by one or more agents) requires the construction of a mental representation of the situation described by that conditional and integration of this representation with prior context. In an eye-tracking experiment, we examined the time course of integrating conditional utility information into the broader discourse model. Specifically, the experiment determined whether readers were sensitive, during rapid heuristic processing, to the congruency between the utility of the consequent clause of a conditional (positive or negative) and a reader's subjective expectations based on prior context. On a number of eye-tracking measures we found that readers were sensitive to conditional utility: conditionals for which the consequent utility mismatched the utility that would be anticipated on the basis of prior context resulted in processing disruption. Crucially, this sensitivity emerged on measures that are accepted to indicate early processing within the language comprehension system, and suggests that the evaluation of a conditional's utility informs the early stages of conditional processing.
Carlos M. Hamamé; Juan R. Vidal; Marcela Perrone-Bertolotti; Tomás Ossandón; Karim Jerbi; Philippe Kahane; Olivier Bertrand; Jean Philippe Lachaux Functional selectivity in the human occipitotemporal cortex during natural vision: Evidence from combined intracranial EEG and eye-tracking Journal Article In: NeuroImage, vol. 95, pp. 276–286, 2014. Eye movements are a constant and essential component of natural vision, yet most of our knowledge about the human visual system comes from experiments that restrict them. This experimental constraint is mostly in place to control visual stimulus presentation and to avoid artifacts in non-invasive measures of brain activity; however, this limitation can be overcome with intracranial EEG (iEEG) recorded from epilepsy patients. Moreover, the high-frequency components of the iEEG signal (between about 50 and 150 Hz) can provide a proxy of population-level spiking activity in any cortical area during free-viewing. We combined iEEG with high precision eye-tracking to study fine temporal dynamics and functional specificity in the fusiform face area (FFA) and visual word form area (VWFA) while patients inspected natural pictures containing faces and text. We defined the first local measure of visual (electrophysiological) responsiveness adapted to free-viewing in humans: amplitude modulations in the high-frequency activity range (50–150 Hz) following fixations (fixation-related high-frequency response). We showed that despite the large size of receptive fields in the ventral occipito-temporal cortex, neural activity during natural vision of realistic cluttered scenes is mostly dependent upon the category of the foveated stimulus, suggesting that category-specificity is preserved during free-viewing and that attention mechanisms might filter out the influence of objects surrounding the fovea.
Yuko Hara; Justin L. Gardner Encoding of graded changes in spatial specificity of prior cues in human visual cortex Journal Article In: Journal of Neurophysiology, vol. 112, no. 11, pp. 2834–2849, 2014. Prior information about the relevance of spatial locations can vary in specificity; a single location, a subset of locations, or all locations may be of potential importance. Using a contrast-discrimination task with four possible targets, we asked whether performance benefits are graded with the spatial specificity of a prior cue and whether we could quantitatively account for behavioral performance with cortical activity changes measured by blood oxygenation level-dependent (BOLD) imaging. Thus we changed the prior probability that each location contained the target from 100 to 50 to 25% by cueing in advance 1, 2, or 4 of the possible locations. We found that behavioral performance (discrimination thresholds) improved in a graded fashion with spatial specificity. However, concurrently measured cortical responses from retinotopically defined visual areas were not strictly graded; response magnitude decreased when all 4 locations were cued (25% prior probability) relative to the 100 and 50% prior probability conditions, but no significant difference in response magnitude was found between the 100 and 50% prior probability conditions for either cued or uncued locations. Also, although cueing locations increased responses relative to noncueing, this cue sensitivity was not graded with prior probability. Furthermore, contrast sensitivity of cortical responses, which could improve contrast discrimination performance, was not graded. Instead, an efficient-selection model showed that even if sensory responses do not strictly scale with prior probability, selection of sensory responses by weighting larger responses more can result in graded behavioral performance benefits with increasing spatial specificity of prior information.
James J. Harrison; Tom C. A. Freeman; Petroc Sumner In: Journal of Experimental Psychology: General, vol. 143, no. 5, pp. 1923–1938, 2014. As a potential exemplar for understanding how volitional actions emerged from reflexes, we studied the relationship between an ancient reflexive gaze stabilization mechanism (optokinetic nystagmus [OKN]) and purposeful eye movements (saccades) that target an object. Traditionally, these have been considered distinct (except in the kinematics of their execution) and have been studied independently. We find that the fast-phases of OKN clearly show properties associated with saccade planning: (a) They are characteristically delayed by irrelevant distractors in an indistinguishable way to saccades (the saccadic inhibition effect), and (b) horizontal OKN fast-phases produce curvature in vertical targeting saccades, just like a competing saccade plan. Thus, we argue that the saccade planning network plays a role in the production of OKN fast-phases, and we question the need for a strict distinction between eye movements that appear to be automatic or volitional. We discuss whether our understanding might benefit from shifting perspective and considering the entire "saccade" system to have developed from an increasingly sophisticated OKN system.
William J. Harrison; Peter J. Bex Integrating retinotopic features in spatiotopic coordinates Journal Article In: Journal of Neuroscience, vol. 34, no. 21, pp. 7351–7360, 2014. The receptive fields of early visual neurons are anchored in retinotopic coordinates (Hubel and Wiesel, 1962). Eye movements shift these receptive fields and therefore require that different populations of neurons encode an object's constituent features across saccades. Whether feature groupings are preserved across successive fixations or processing starts anew with each fixation has been hotly debated (Melcher and Morrone, 2003; Melcher, 2005, 2010; Knapen et al., 2009; Cavanagh et al., 2010a,b; Morris et al., 2010). Here we show that feature integration initially occurs within retinotopic coordinates, but is then conserved within a spatiotopic coordinate frame independent of where the features fall on the retinas. With human observers, we first found that the relative timing of visual features plays a critical role in determining the spatial area over which features are grouped. We exploited this temporal dependence of feature integration to show that features co-occurring within 45 ms remain grouped across eye movements. Our results thus challenge purely feedforward models of feature integration (Pelli, 2008; Freeman and Simoncelli, 2011) that begin de novo after every eye movement, and implicate the involvement of brain areas beyond early visual cortex. The strong temporal dependence we quantify and its link with trans-saccadic object perception instead suggest that feature integration depends, at least in part, on feedback from higher brain areas (Mumford, 1992; Rao and Ballard, 1999; Di Lollo et al., 2000; Moore and Armstrong, 2003; Stanford et al., 2010).
William J. Harrison; Roger W. Remington; Jason B. Mattingley Visual crowding is anisotropic along the horizontal meridian during smooth pursuit Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–16, 2014. Humans make smooth pursuit eye movements to foveate moving objects of interest. It is known that smooth pursuit alters visual processing, but there is currently no consensus on whether changes in vision are contingent on the direction the eyes are moving. We recently showed that visual crowding can be used as a sensitive measure of changes in visual processing, resulting from involvement of the saccadic eye movement system. The present paper extends these results by examining the effect of smooth pursuit eye movements on the spatial extent of visual crowding: the area over which visual stimuli are integrated. We found systematic changes in crowding that depended on the direction of pursuit and the distance of stimuli from the pursuit target. Relative to when no eye movement was made, the spatial extent of crowding increased for objects located contraversive to the direction of pursuit at an eccentricity of approximately 3°. By contrast, crowding for objects located ipsiversive to the direction of pursuit remained unchanged. There was no change in crowding during smooth pursuit for objects located approximately 7° from the fovea. The increased size of the crowding zone for the contraversive direction may be related to the distance that the fovea lags behind the pursuit target during smooth eye movements. Overall, our results reveal that visual perception is altered dynamically according to the intended destination of oculomotor commands.
Alistair J. Harvey Some effects of alcohol and eye movements on cross-race face learning Journal Article In: Memory, vol. 22, no. 8, pp. 1126–1138, 2014. This study examines the impact of acute alcohol intoxication on visual scanning in cross-race face learning. The eye movements of a group of white British participants were recorded as they encoded a series of own- and different-race faces, under alcohol and placebo conditions. Intoxication reduced the rate and extent of visual scanning during face encoding, reorienting the focus of foveal attention away from the eyes and towards the nose. Differences in encoding eye movements also varied between own- and different-race face conditions as a function of alcohol. Fixations to both face types were less frequent and more lingering following intoxication, but in the placebo condition this was only the case for different-race faces. While reducing visual scanning, however, alcohol had no adverse effect on memory; only encoding restrictions associated with sober different-race face processing led to poorer recognition. These results support perceptual expertise accounts of own-race face processing, but suggest the adverse effects of alcohol on face learning published previously are not caused by foveal encoding restrictions. The implications of these findings for alcohol myopia theory are discussed.
Hannah Harvey; Robin Walker Reading with peripheral vision: A comparison of reading dynamic scrolling and static text with a simulated central scotoma Journal Article In: Vision Research, vol. 98, pp. 54–60, 2014. Horizontally scrolling text is, in theory, ideally suited to enhance viewing strategies recommended to improve reading performance under conditions of central vision loss such as macular disease, although it is largely unproven in this regard. This study investigated whether the use of scrolling text produced an observable improvement in reading performed under conditions of eccentric viewing in an artificial scotoma paradigm. Participants (n=17) read scrolling and static text with a central artificial scotoma controlled by an eye-tracker. There was an improvement in measures of reading accuracy and adherence to eccentric viewing strategies with scrolling, compared to static, text. These findings illustrate the potential of scrolling text as a reading aid for those with central vision loss.
Nabil Hasshim; Benjamin A. Parris Two-to-one color-response mapping and the presence of semantic conflict in the Stroop task Journal Article In: Frontiers in Psychology, vol. 5, pp. 1157, 2014. A series of recent studies have utilized the two-to-one mapping paradigm in the Stroop task. In this paradigm, the word red might be presented in blue when both red and blue share the same response key (same-response trials). This manipulation has been used to show the separate contributions of (within) semantic category conflict and response conflict to Stroop interference. Such results evidencing semantic category conflict are incompatible with models of the Stroop task that are based on response conflict only. However, the nature of same-response trials is unclear, since they are also likely to involve response facilitation given that both dimensions of the stimulus provide evidence toward the same response. In this study we explored this possibility by comparing them with three other trial types. We report strong (Bayesian) evidence for no statistical difference between same-response and non-color word neutral trials, faster responses to same-response trials than to non-response set incongruent trials, and no differences between same-response vs. congruent trials when contingency is controlled. Our results suggest that same-response trials are not different from neutral trials, indicating that they cannot be used reliably to determine the presence or absence of semantic category conflict. In light of these results, the interpretation of a series of recent studies might have to be reassessed.
Anna Hatzidaki; Manon W. Jones; M. Santesteban; H. P. Branigan It's not what you see: It's the language you say it in Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 10, pp. 1233–1239, 2014. In an eye-tracking experiment, we investigated the interplay between visual and linguistic information processing during time-telling, and how this is affected by speaking in a non-native language. We compared time-telling in Greek and English, which differ in time-telling word order (hour vs. minute mentioned first), by contrasting Greek-English bilinguals speaking in their L1-Greek or their L2-English, and English monolingual speakers. All three groups were faster when telling the time for digital than for analogue clocks, and when telling the time for the first half-hour than the second half-hour. Critically, first fixation and gaze duration analyses for the hour and minute regions showed a different pattern for Greek-English bilinguals when speaking in their L1 versus L2, with the latter resembling that of English monolinguals. Our results suggest that bilingual speakers' eye-movement programming was influenced by the type of time-telling utterance specific to the language of production currently in use.
Nora A. Herweg; Bernd Weber; Anna-Maria Kasparbauer; Inga Meyhöfer; Maria Steffens; Nikolaos Smyrnis; Ulrich Ettinger Functional magnetic resonance imaging of sensorimotor transformations in saccades and antisaccades Journal Article In: NeuroImage, vol. 102, pp. 848–860, 2014. Saccades to peripheral targets require a direct visuomotor transformation. In contrast, antisaccades, saccades in opposite direction of a peripheral target, require more complex transformation processes due to the inversion of the spatial vector. Here, the differential neural mechanisms underlying sensorimotor control in saccades and antisaccades were investigated using functional magnetic resonance imaging (fMRI) at 3 T field strength in 22 human volunteers. We combined a task factor (prosaccades: look towards target; antisaccades: look away from target) with a parametric factor of transformation demand (single vs. multiple peripheral targets) in a two-factorial block design. Behaviorally, a greater number of peripheral targets resulted in decreased spatial accuracy and increased reaction times in antisaccades. No effects were seen on the percentage of antisaccade direction errors or on any prosaccade measures. Neurally, a greater number of targets led to increased BOLD signal in the posterior parietal cortex (PPC) bilaterally. This effect was partially qualified by an interaction that extended into somatosensory cortex, indicating greater increases during antisaccades than prosaccades. The results implicate the PPC as a sensorimotor interface that is especially important in nonstandard mapping for antisaccades and point to a supportive role of somatosensory areas in antisaccade sensorimotor control, possibly by means of proprioceptive processes.
Arvid Herwig; Werner X. Schneider Predicting object features across saccades: Evidence from object recognition and visual search Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 5, pp. 1903–1922, 2014. When we move our eyes, we process objects in the visual field with different spatial resolution due to the nonhomogeneity of our visual system. In particular, peripheral objects are only coarsely represented, whereas they are represented with high acuity when foveated. To keep track of visual features of objects across eye movements, these changes in spatial resolution have to be taken into account. Here, we develop and test a new framework proposing a visual feature prediction mechanism based on past experience to deal with changes in spatial resolution accompanying saccadic eye movements. In 3 experiments, we first exposed participants to an altered visual stimulation where, unnoticed by participants, 1 object systematically changed visual features during saccades. Experiments 1 and 2 then demonstrate that feature prediction during peripheral object recognition is biased toward previously associated postsaccadic foveal input and that this effect is particularly associated with making saccades. Moreover, Experiment 3 shows that during visual search, feature prediction is biased toward previously associated presaccadic peripheral input. Together, these findings demonstrate that the visual system uses past experience to predict how peripheral objects will look in the fovea, and what foveal search templates should look like in the periphery. As such, they support our framework based on ideomotor theory and shed new light on the mystery of why we are most of the time unaware of acuity limitations in the periphery and of our ability to locate relevant objects in the periphery.
Constanze Hesse; Keira Ball; Thomas Schenk Pointing in visual periphery: Is DF's dorsal stream intact? Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e91420, 2014. Observations of the visual form agnosic patient DF have been highly influential in establishing the hypothesis that separate processing streams deal with vision for perception (ventral stream) and vision for action (dorsal stream). In this context, DF's preserved ability to perform visually-guided actions has been contrasted with the selective impairment of visuomotor performance in optic ataxia patients suffering from damage to dorsal stream areas. However, the recent finding that DF shows a thinning of the grey matter in the dorsal stream regions of both hemispheres, in combination with the observation that her right-handed movements are impaired when they are performed in visual periphery, has opened up the possibility that patient DF may potentially also be suffering from optic ataxia. If lesions to the posterior parietal cortex (dorsal stream) are bilateral, pointing and reaching deficits should be observed in both visual hemifields and for both hands when targets are viewed in visual periphery. Here, we tested DF's visuomotor performance when pointing with her left and her right hand toward targets presented in the left and the right visual field at three different visual eccentricities. Our results indicate that DF shows large and consistent impairments in all conditions. These findings imply that DF's dorsal stream atrophies are functionally relevant and hence challenge the idea that patient DF's seemingly normal visuomotor behaviour can be attributed to her intact dorsal stream. Instead, DF seems to be a patient who suffers from combined ventral and dorsal stream damage, meaning that a new account is needed to explain why she shows such remarkably normal visuomotor behaviour in a number of tasks and conditions.
Simon J. Hickman; Naz Raoof; Rebecca J. McLean; Irene Gottlob Vision and multiple sclerosis Journal Article In: Multiple Sclerosis and Related Disorders, vol. 3, no. 1, pp. 3–16, 2014. Multiple sclerosis can affect vision in many ways, including optic neuritis, chronic optic neuropathy, retrochiasmal visual field defects, higher order cortical processing, double vision, nystagmus and also by related ocular conditions such as uveitis. There are also side effects from recently introduced multiple sclerosis treatments that can affect vision. This review will discuss all these aspects and how they come together to cause visual symptoms. It will then focus on practical aspects of how to recognise when there is a vision problem in a multiple sclerosis patient and on what treatments are available to improve vision.
Matthew D. Hilchey; Mahmoud Hashish; Gregory H. MacLean; Jason Satel; Jason Ivanoff; Raymond M. Klein On the role of eye movement monitoring and discouragement on inhibition of return in a go/no-go task Journal Article In: Vision Research, vol. 96, pp. 133–139, 2014. Inhibition of return (IOR) most often describes the finding of increased response times to cued as compared to uncued targets in the standard covert orienting paradigm. A perennial question in the IOR literature centers on whether the effect of IOR is on motoric/decision-making processes (output-based IOR), attentional/perceptual processes (input-based IOR), or both. Recent data converge on the idea that IOR is an output-based effect when eye movements are required or permitted, whereas IOR is an input-based effect when eye movements are monitored and actively discouraged. The notion that the effects of IOR may be fundamentally different depending on the activation state of the oculomotor system has been challenged by several studies demonstrating that IOR exists as an output-, or output- plus input-based effect in simple keypress tasks not requiring oculomotor responses. Problematically, experiments in which keypress responses are required to visual events rarely use eye movement monitoring, let alone the active discouragement of eye movement errors. Here, we return to an experimental method implemented by Ivanoff and Klein (2001), whose results demonstrated that IOR affected output-based processes when, ostensibly, only keypress responses occurred. Unlike Ivanoff and Klein, however, we assiduously monitor and discourage eye movements. We demonstrate that actively discouraging eye movements in keypress tasks changes the form of IOR from output- to input-based and, as such, we strongly encourage superior experimental control over, or consideration of, the contribution of eye movement activity in simple keypress tasks exploring IOR.
Matthew D. Hilchey; Raymond M. Klein; Jason Satel Returning to “inhibition of return” by dissociating long-term oculomotor IOR from short-term sensory effects Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1603–1616, 2014. We explored the nature and time course of effects generated by spatially uninformative peripheral cues by measuring these effects with localization responses to peripheral onsets or central arrow targets. In Experiment 1, participants made saccadic eye movements to equiprobable peripheral and central targets. At short cue-target onset asynchronies (CTOAs), responses to cued peripheral stimuli suffered from slowed responding attributable to sensory adaptation while responses to central targets were transiently facilitated, presumably due to cue-elicited oculomotor activation. At the longest CTOA, saccadic responses to central and peripheral targets were indistinguishably delayed, suggesting a common, output/decision effect (inhibition of return; IOR). In Experiment 2, we tested the hypothesis that the generation of this output effect is dependent on the activation state of the oculomotor system by forbidding eye movements and requiring keypress responses to frequent peripheral targets, while probing oculomotor behavior with saccades to infrequent central arrow targets. As predicted, saccades to central arrow targets showed neither the early facilitation nor later inhibitory effects that were robust in Experiment 1. At the long CTOA, manual responses to cued peripheral targets showed the typical delayed responses usually attributed to IOR. We recommend that this late “inhibitory” cueing effect (ICE) be distinguished from IOR because it lacks the cause (oculomotor activation) and effect (response bias) attributed to IOR when it was named by Posner, Rafal, Choate, and Vaughan (1985).
Renske S. Hoedemaker; Peter C. Gordon Embodied language comprehension: Encoding-based and goal-driven processes Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 2, pp. 914–929, 2014. @article{Hoedemaker2014, Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. 
This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals. |
Renske S. Hoedemaker; Peter C. Gordon It takes time to prime: Semantic priming in the ocular lexical decision task Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 6, pp. 2179–2197, 2014. @article{Hoedemaker2014a, Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT. |
Margit Höfler; Iain D. Gilchrist; Christof Körner Searching the same display twice: Properties of short-term memory in repeated search Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 2, pp. 335–352, 2014. @article{Hofler2014, Consecutive search for different targets in the same display is supported by a short-term memory mechanism: Distractors that have recently been inspected in the first search are found more quickly in the second search when they become the target (Exp. 1). Here, we investigated the properties of this memory process. We found that this recency advantage is robust to a delay between the two searches (Exp. 2) and that it is only slightly disrupted by an interference task between the two searches (Exp. 3). Introducing a concurrent secondary task (Exp. 4) showed that the memory representations formed in the first search are based on identity as well as location information. Together, these findings show that the short-term memory that supports repeated visual search stores a complex combination of item identity and location that is robust to disruption by either time or interference. |
George T. Gitchel; Paul A. Wetzel; Abu Qutubuddin; Mark S. Baron Experimental support that ocular tremor in Parkinson's disease does not originate from head movement Journal Article In: Parkinsonism and Related Disorders, vol. 20, no. 7, pp. 743–747, 2014. @article{Gitchel2014, Introduction: Our recent report of ocular tremor in Parkinson's disease (PD) has raised considerable controversy as to the origin of the tremor. Using an infrared based eye tracker and a magnetic head tracker, we reported that ocular tremor was recordable in PD subjects with no apparent head tremor. However, other investigators suggest that the ocular tremor may represent either transmitted appendicular tremor or subclinical head tremor inducing the vestibulo-ocular reflex (VOR). The present study aimed to further investigate the origin of ocular tremor in PD. Methods: Eye movements were recorded in 8 PD subjects both head free, and with full head restraint by means of a head holding device and a dental impression bite plate. Head movements were recorded independently using both a high sensitivity tri-axial accelerometer and a magnetic tracking system, each synchronized to the eye tracker. Results: Ocular tremor was observed in all 8 PD subjects and was not influenced by head free and head fixed conditions. Both magnetic tracking and accelerometer recordings supported that the ocular tremor was fully independent of head position. Conclusion: The present findings support our initial report that ocular tremor is a fundamental feature of PD unrelated to head movements. Although the utility of ocular tremor for diagnostic purposes requires validation, current findings in large cohorts of PD subjects suggest its potential as a reliable clinical biomarker. |
Mackenzie G. Glaholt; Keith Rayner; Eyal M. Reingold A rapid effect of stimulus quality on the durations of individual fixations during reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 377–389, 2014. @article{Glaholt2014, We developed a variant of the single fixation replacement paradigm (Yang & McConkie, 2001, 2004) in order to examine the effect of stimulus quality on fixation duration during reading. Subjects' eye movements were monitored while they read passages of text for comprehension. During critical fixations, equal changes to the luminance of the background produced either an increase (Up-Contrast) or a decrease (Down-Contrast) of the contrast of the text. The durations of critical fixations were found to be lengthened in the Down-Contrast but not the Up-Contrast condition. Ex-Gaussian modelling of the distributions of fixation durations showed that the reduction in stimulus quality lengthened the majority of fixations, and a survival analysis estimated the onset of this effect to be approximately 141 ms following fixation onset. Because the stimulus quality of the text during critical fixations could not be predicted or parafoveally previewed prior to foveation, the present effect can be attributed to an immediate effect of stimulus quality on fixation duration. |
Davis M. Glasser; Duje Tadin Modularity in the motion system: Independent oculomotor and perceptual processing of brief moving stimuli Journal Article In: Journal of Vision, vol. 14, pp. 1–13, 2014. @article{Glasser2014, In addition to motion perception per se, we utilize motion information for a wide range of brain functions. These varied functions place different demands on the visual system, and therefore a stimulus that provides useful information for one function may be inadequate for another. For example, the direction of motion of large high-contrast stimuli is difficult to discriminate perceptually, but other studies have shown that such stimuli are highly effective at eliciting directional oculomotor responses such as the ocular following response (OFR). Here, we investigated the degree of independence between perceptual and oculomotor processing by determining whether perceptually suppressed moving stimuli can nonetheless evoke reliable eye movements. We measured reflexively evoked tracking eye movements while observers discriminated the motion direction of large high-contrast stimuli. To quantify the discrimination ability of the oculomotor system, we used signal detection theory to generate associated oculometric functions. The results showed that oculomotor sensitivity to motion direction is not predicted by perceptual sensitivity to the same stimuli. In fact, in several cases oculomotor responses were more reliable than perceptual responses. Moreover, a trial-by-trial analysis indicated that, for stimuli tested in this study, oculomotor processing was statistically independent from perceptual processing. Evidently, perceptual and oculomotor responses reflect the activity of independent dissociable mechanisms despite operating on the same input. While results of this kind have traditionally been interpreted in the framework of perception versus action, we propose that these differences reflect a more general principle of modularity. |
Roshani Gnanaseelan; David A. Gonzalez; Ewa Niechwiej-Szwedo Binocular advantage for prehension movements performed in visually enriched environments requiring visual search Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 959, 2014. @article{Gnanaseelan2014, The purpose of this study was to examine the role of binocular vision during a prehension task performed in a visually enriched environment where the target object was surrounded by distractors/obstacles. Fifteen adults reached and grasped for a cylindrical peg while eye movements and upper limb kinematics were recorded. The complexity of the visual environment was manipulated by varying the number of distractors and by varying the saliency of the target. Gaze behavior (i.e., the latency of the primary gaze shift and frequency of gaze shifts prior to reach initiation) was comparable between viewing conditions. In contrast, a binocular advantage was evident in performance accuracy. Specifically, participants picked up the wrong object twice as often during monocular viewing when the complexity of the environment increased. Reach performance was more efficient during binocular viewing, which was demonstrated by shorter reach reaction time and overall movement time. Reaching movements during the approach phase had higher peak velocity during binocular viewing. During monocular viewing reach trajectories exhibited a direction bias during the acceleration phase, which was leftward during left eye viewing and rightward during right eye viewing. This bias can be explained by the presence of esophoria in the covered eye. The grasping interval was also extended by ~20% during monocular viewing; however, the duration of the return phase after the target was picked up was comparable across viewing conditions. In conclusion, binocular vision provides important input for planning and execution of prehension movements in visually enriched environments. 
Binocular advantage was evident, regardless of set size or target saliency, indicating that adults plan their movements more cautiously during monocular viewing, even in relatively simple environments with a highly salient target. Nevertheless, in visually-normal adults monocular input provides sufficient information to engage in online control to correct the initial errors in movement planning. |
David C. Godlove; Alexander Maier; Geoffrey F. Woodman; Jeffrey D. Schall Microcircuitry of agranular frontal cortex: Testing the generality of the canonical cortical microcircuit Journal Article In: Journal of Neuroscience, vol. 34, no. 15, pp. 5355–5369, 2014. @article{Godlove2014, We investigated whether a frontal area that lacks granular layer IV, supplementary eye field, exhibits features of laminar circuitry similar to those observed in primary sensory areas. We report, for the first time, visually evoked local field potentials (LFPs) and spiking activity recorded simultaneously across all layers of agranular frontal cortex using linear electrode arrays. We calculated current source density from the LFPs and compared the laminar organization of evolving sinks to those reported in sensory areas. Simultaneous, transient synaptic current sinks appeared first in layers III and V followed by more prolonged current sinks in layers I/II and VI. We also found no variation of single- or multi-unit visual response latency across layers, and putative pyramidal neurons and interneurons displayed similar response latencies. Many units exhibited pronounced discharge suppression that was strongest in superficial relative to deep layers. Maximum discharge suppression also occurred later in superficial than in deep layers. These results are discussed in the context of the canonical cortical microcircuit model originally formulated to describe early sensory cortex. The data indicate that agranular cortex resembles sensory areas in certain respects, but the cortical microcircuit is modified in nontrivial ways. |
Hayward J. Godwin; Michael C. Hout; Tamaryn Menneer Visual similarity is stronger than semantic similarity in guiding visual search for numbers Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 3, pp. 689–695, 2014. @article{Godwin2014, Using a visual search task, we explored how behavior is influenced by both visual and semantic information. We recorded participants' eye movements as they searched for a single target number in a search array of single-digit numbers (0-9). We examined the probability of fixating the various distractors as a function of two key dimensions: the visual similarity between the target and each distractor, and the semantic similarity (i.e., the numerical distance) between the target and each distractor. Visual similarity estimates were obtained using multidimensional scaling based on independent observers' similarity ratings. A linear mixed-effects model demonstrated that both visual and semantic similarity influenced the probability that distractors would be fixated. However, the visual similarity effect was substantially larger than the semantic similarity effect. We close by discussing the potential value of using this novel methodological approach and the implications for both simple and complex visual search displays. |
Hayward J. Godwin; Erik D. Reichle; Tamaryn Menneer Coarse-to-fine eye movement behavior during visual search Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 5, pp. 1244–1249, 2014. @article{Godwin2014a, It has previously been argued that, during visual search, eye movement behavior is indicative of an underlying scanning "strategy" that starts on a global, or "coarse," scale but then progressively focuses to a more local, or "fine," scale. This conclusion is motivated by the finding that, as a trial progresses, fixation durations tend to increase and saccade amplitudes tend to decrease. In the present study, we replicate these effects but offer an alternative explanation for them: that they emerge from a few stochastic factors that control eye movement behavior. We report the results of a simulation supporting this hypothesis and discuss implications for future models of visual search. |
Tamar H. Gollan; Elizabeth R. Schotter; Joanne Gomez; Mayra Murillo; Keith Rayner Multiple levels of bilingual language control: Evidence from language intrusions in reading aloud Journal Article In: Psychological Science, vol. 25, no. 2, pp. 585–595, 2014. @article{Gollan2014, Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language. |
Julie D. Golomb; Colin N. Kupitz; Carina T. Thiemann The influence of object location on identity: A “spatial congruency bias” Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 6, pp. 2262–2278, 2014. @article{Golomb2014a, Objects can be characterized by a number of properties (e.g., shape, color, size, and location). How do our visual systems combine this information, and what allows us to recognize when 2 objects are the same? Previous work has pointed to a special role for location in the binding process, suggesting that location may be automatically encoded even when irrelevant to the task. Here we show that location is not only automatically attended but fundamentally bound to identity representations, influencing object perception in a far more profound way than simply speeding reaction times. Subjects viewed 2 sequentially presented novel objects and performed a same/different identity comparison. Object location was irrelevant to the identity task, but when the 2 objects shared the same location, subjects were more likely to judge them as the same identity. This “congruency bias” reflected an increase in both hits and false alarms when the objects shared the same location, indicating that subjects were unable to suppress the influence of object location—even when maladaptive to the task. Importantly, this bias was driven exclusively by location: Object location robustly and reliably biased identity judgments across 6 experimental scenarios, but the reverse was not true: Object identity did not exert any bias on location judgments. Furthermore, while location biased both shape and color judgments, neither shape nor color biased each other when irrelevant. The results suggest that location provides a unique, automatic, and insuppressible cue for object sameness. |
Julie D. Golomb; Zara E. L'Heureux; Nancy Kanwisher Feature-binding errors after eye movements and shifts of attention Journal Article In: Psychological Science, vol. 25, no. 5, pp. 1067–1078, 2014. @article{Golomb2014, When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade, one in a precued spatiotopic target location, and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades. |
Esther G. González; Linda Lillakas; Naomi Greenwald; Brenda L. Gallie; Martin J. Steinbach Unaffected smooth pursuit but impaired motion perception in monocularly enucleated observers Journal Article In: Vision Research, vol. 101, pp. 151–157, 2014. @article{Gonzalez2014, The objective of this paper was to study the characteristics of closed-loop smooth pursuit eye movements of 15 unilaterally eye enucleated individuals and 18 age-matched controls and to compare them to their performance in two tests of motion perception: relative motion and motion coherence. The relative motion test used a brief (150 ms) small stimulus with a continuously present fixation target to preclude pursuit eye movements. The duration of the motion coherence trials was 1 s, which allowed a brief pursuit of the stimuli. Smooth pursuit data were obtained with a step-ramp procedure. Controls were tested both monocularly and binocularly. The data showed worse performance by the enucleated observers in the relative motion task but no statistically significant differences in motion coherence between the two groups. On the other hand, the smooth pursuit gain of the enucleated participants was as good as that of controls for whom we found no binocular advantage. The data show that enucleated observers do not exhibit deficits in the afferent or sensory pathways or in the efferent or motor pathways of the steady-state smooth pursuit system even though their visual processing of motion is impaired. |
2013 |
Rasmus Aamand; Thomas Dalsgaard; Yi-Ching Lynn Ho; Arne Møller; Andreas Roepstorff; Torben E. Lund A NO way to BOLD?: Dietary nitrate alters the hemodynamic response to visual stimulation Journal Article In: NeuroImage, vol. 83, pp. 397–407, 2013. @article{Aamand2013, Neurovascular coupling links neuronal activity to vasodilation. Nitric oxide (NO) is a potent vasodilator, and in neurovascular coupling NO production from NO synthases plays an important role. However, another pathway for NO production also exists, namely the nitrate-nitrite-NO pathway. On this basis, we hypothesized that dietary nitrate (NO3-) could influence the brain's hemodynamic response to neuronal stimulation. In the present study, 20 healthy male participants were given either sodium nitrate (NaNO3) or sodium chloride (NaCl) (saline placebo) in a crossover study and were shown visual stimuli based on the retinotopic characteristics of the visual cortex. Our primary measure of the hemodynamic response was the blood oxygenation level dependent (BOLD) response measured with high-resolution functional magnetic resonance imaging (0.64 × 0.64 × 1.8 mm) in the visual cortex. From this response, we made a direct estimate of key parameters characterizing the shape of the BOLD response (i.e. lag and amplitude). During elevated nitrate intake, corresponding to the nitrate content of a large plate of salad, both the hemodynamic lag and the BOLD amplitude decreased significantly (7.0±2% and 7.9±4%, respectively), and the variation across activated voxels of both measures decreased (12.3±4% and 15.3±7%, respectively). The baseline cerebral blood flow was not affected by nitrate. Our experiments demonstrate, for the first time, that dietary nitrate may modulate the local cerebral hemodynamic response to stimuli. A faster and smaller BOLD response, with less variation across local cortex, is consistent with an enhanced hemodynamic coupling during elevated nitrate intake. 
These findings suggest that dietary patterns, via the nitrate-nitrite-NO pathway, may be a potential way to affect key properties of neurovascular coupling. This could have major clinical implications, which remain to be explored. |
Holger Mitterer; Sahyang Kim; Taehong Cho Compensation for complete assimilation in speech perception: The case of Korean labial-to-velar assimilation Journal Article In: Journal of Memory and Language, vol. 69, no. 1, pp. 59–83, 2013. @article{Mitterer2013b, In connected speech, phonological assimilation to neighboring words can lead to pronunciation variants (e.g., 'garden bench' → 'gardem bench'). A large body of literature suggests that listeners use the phonetic context to reconstruct the intended word for assimilation types that often lead to incomplete assimilations (e.g., a pronunciation of 'garden' that carries cues for both a labial [m] and an alveolar [n]). In the current paper, we show that a similar context effect is observed for an assimilation that is often complete, Korean labial-to-velar place assimilation. In contrast to the context effects for partial assimilations, however, the context effects seem to rely completely on listeners' experience with the assimilation pattern in their native language. |
B. Cesqui; R. V. Langenberg; Francesco Lacquaniti; A. D'Avella A novel method for measuring gaze orientation in space in unrestrained head conditions Journal Article In: Journal of Vision, vol. 13, no. 8, pp. 28–28, 2013. @article{Cesqui2013, Investigation of eye movement strategies often requires the measurement of gaze orientation without restraining the head. However, most commercial eye-trackers have low tolerance for head movements. Here we present a novel geometry-based method to estimate gaze orientation in space in unrestricted head conditions. The method combines the measurement of eye-in-head orientation—provided by a head-mounted video-based eye-tracker—and head-in-space position and orientation—provided by a motion capture system. The method does not rely on specific assumptions on the configuration of the eye-tracker camera with respect to the eye and uses a central projection to estimate the pupil position from the camera image, thus improving upon previously proposed geometry-based procedures. The geometrical parameters for the mapping between pupil image and gaze orientation are derived with a calibration procedure based on nonlinear constrained optimization. Additionally, the method includes a procedure to correct for possible slippages of the tracker helmet based on a geometrical representation of the pupil-to-gaze mapping. We tested and validated our method on seven subjects in the context of a one-handed catching experiment. We obtained accuracy better than 0.8° and precision better than 0.5° in the measurement of gaze orientation. Our method can be used with any video-based eye-tracking system to investigate eye movement strategies in a broad range of naturalistic experimental scenarios. |
Matthew F. Asher; David J. Tolhurst; Tom Troscianko; Iain D. Gilchrist Regional effects of clutter on human target detection performance Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 25–25, 2013. @article{Asher2013, Clutter is something that is encountered in everyday life, from a messy desk to a crowded street. Such clutter may interfere with our ability to search for objects in such environments, like our car keys or the person we are trying to meet. A number of computational models of clutter have been proposed and shown to work well for artificial and other simplified scene search tasks. In this paper, we correlate the performance of different models of visual clutter to human performance in a visual search task using natural scenes. The models we evaluate are Feature Congestion (Rosenholtz, Li, & Nakano, 2007), Sub-band Entropy (Rosenholtz et al., 2007), Segmentation (Bravo & Farid, 2008), and Edge Density (Mack & Oliva, 2004) measures. The correlations were performed across a range of target-centered subregions to produce a correlation profile, indicating the scale at which clutter was affecting search performance. Overall clutter was rather weakly correlated with performance (r ≈ 0.2). However, different measures of clutter appear to reflect different aspects of the search task: correlations with Feature Congestion are greatest for the actual target patch, whereas the Sub-band Entropy is most highly correlated in a region 12° × 12° centered on the target. |
Louise Marshall; Paul M. Bays Obligatory encoding of task-irrelevant features depletes working memory resources Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 21–21, 2013. @article{Marshall2013, Selective attention is often considered the "gateway" to visual working memory (VWM). However, the extent to which we can voluntarily control which of an object's features enter memory remains subject to debate. Recent research has converged on the concept of VWM as a limited commodity distributed between elements of a visual scene. Consequently, as memory load increases, the fidelity with which each visual feature is stored decreases. Here we used changes in recall precision to probe whether task-irrelevant features were encoded into VWM when individuals were asked to store specific feature dimensions. Recall precision for both color and orientation was significantly enhanced when task-irrelevant features were removed, but knowledge of which features would be probed provided no advantage over having to memorize both features of all items. Next, we assessed the effect an interpolated orientation- or color-matching task had on the resolution with which orientations in a memory array were stored. We found that the presence of orientation information in the second array disrupted memory of the first array. The cost to recall precision was identical whether the interfering features had to be remembered, attended to, or could be ignored. Therefore, it appears that storing, or merely attending to, one feature of an object is sufficient to promote automatic encoding of all its features, depleting VWM resources. However, the precision cost was abolished when the match task preceded the memory array. So, while encoding is automatic, maintenance is voluntary, allowing resources to be reallocated to store new visual information. |
Jennifer Malsert; Nathalie Guyader; Alan Chauvin; Mircea Polosan; David Szekely; Thierry Bougerol; Christian Marendaz Saccadic performance and cortical excitability as trait-markers and state-markers in rapid cycling bipolar disorder: A two-case follow-up study Journal Article In: Frontiers in Psychiatry, vol. 3, pp. 112, 2013. @article{Malsert2013, Background: The understanding of physiopathology and cognitive impairments in mood disorders requires finding objective markers. Mood disorders have often been linked to hypometabolism in the dorsolateral prefrontal cortex, and to GABAergic and glutamatergic neurotransmission dysfunction. The present study aimed to discover whether saccadic tasks (involving DLPFC activity) and cortical excitability (involving GABA/glutamate neurotransmission) could provide neuropsychophysical markers for mood disorders, and/or of its phases, in patients with rapid cycling bipolar disorder (rcBD). Methods: Two rcBD patients were followed for a cycle, and were compared to nine healthy controls. A saccade task, mixing prosaccades, antisaccades, and nosaccades, and an evaluation of cortical excitability using transcranial magnetic stimulation were performed. Results: We observed a deficit in antisaccade in patients independently of thymic phase, and in nosaccade in the manic phase only. Cortical excitability data revealed global intracortical deficits in all phases, switching according to cerebral hemisphere and thymic phase. Conclusion: Specific patterns of performance in saccade tasks and cortical excitability could characterize mood disorders (trait-markers) and its phases (state-markers). Moreover, a functional relationship between oculometric performance and cortical excitability is discussed. |
Sanjay G. Manohar; Masud Husain Attention as foraging for information and value Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 711, 2013. @article{Manohar2013, What is the purpose of attention? One avenue of research has led to the proposal that attention might be crucial for gathering information about the environment, while other lines of study have demonstrated how attention may play a role in guiding behavior to rewarded options. Many experiments that study attention require participants to make a decision based on information acquired discretely at one point in time. In real-world situations, however, we are usually not presented with information about which option to select in such a manner. Rather we must initially search for information, weighing up reward values of options before we commit to a decision. Here, we propose that attention plays a role in both foraging for information and foraging for value. When foraging for information, attention is guided toward the unknown. When foraging for reward, attention is guided toward high reward values, allowing decision-making to proceed by accept-or-reject decisions on the currently attended option. According to this account, attention can be regarded as a low-cost alternative to moving around and physically interacting with the environment ("teleforaging") before a decision is made to interact physically with the world. To track the timecourse of attention, we asked participants to seek out and acquire information about two gambles by directing their gaze, before choosing one of them. Participants often made multiple refixations on items before making a decision. Their eye movements revealed that early in the trial, attention was guided toward information, i.e., toward locations that reduced uncertainty about value. In contrast, late in the trial, attention was guided by expected value of the options. 
At the end of the decision period, participants were generally attending to the item they eventually chose. We suggest that attentional foraging shifts from an uncertainty-driven to a reward-driven mode during the evolution of a decision, permitting decisions to be made by an engage-or-search strategy. |
Sophie Marat; Anis Rahman; Denis Pellerin; Nathalie Guyader; Dominique Houzet Improving visual saliency by adding 'face feature map' and 'center bias' Journal Article In: Cognitive Computation, vol. 5, no. 1, pp. 63–75, 2013. @article{Marat2013, Faces play an important role in guiding visual attention, thus the inclusion of face detection into a classical visual attention model can improve eye movement predictions. In this study, we proposed a visual saliency model to predict eye movements during free viewing of videos. The model is inspired by the biology of the visual system, and breaks down each frame of a video database into three saliency maps, each earmarked for a particular visual feature. (i) A 'static' saliency map emphasizes regions that differ from their context in terms of luminance, orientation and spatial frequency. (ii) A 'dynamic' saliency map emphasizes moving regions with values proportional to motion amplitude. (iii) A 'face' saliency map emphasizes areas where a face is detected with a value proportional to the confidence of the detection. In parallel, a behavioral experiment was carried out to record eye movements of participants when viewing the videos. These eye movements were compared with the model's saliency maps to quantify their efficiency. We also examined the influence of center bias on the saliency maps, and incorporated it into the model in a suitable way. Finally, we proposed an efficient fusion method of all these saliency maps. Consequently, the fused master saliency map developed in this research is a good predictor of participants' eye positions. |
Marco Marelli; Simona Amenta; Elena Angela Morone; Davide Crepaldi Meaning is in the beholder's eye: Morpho-semantic effects in masked priming Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 3, pp. 534–541, 2013. @article{Marelli2013, A substantial body of literature indicates that, at least at some level of processing, complex words are broken down into their morphemes solely on the basis of their orthographic form (e.g., Rastle, Davis, & New, Psychonomic Bulletin and Review 11:1090-1098, 2004). Recent evidence has shown that this process might not be obligatory, as indicated by the fact that morpho-orthographic effects were not found in a cross-case same-different task, that is, when lexical access was not necessarily required (Duñabeitia, Kinoshita, Carreiras, & Norris, Language and Cognitive Processes 26:509-529, 2011). In this study, we employed a task that required understanding a series of words and, thus, implied lexical access. Masked primes were shown very briefly right before the appearance of the target word; prime-target pairs entertained a morpho-semantic (dealer-DEAL), a morpho-orthographic (corner-CORN), or a purely orthographic (brothel-BROTH) relationship. Eye fixation times clearly indicated facilitation for transparent pairs, but not for opaque pairs (or for orthographic pairs, which were used as a baseline). Conversely, the usual morpho-orthographic pattern was found in a control experiment, employing a lexical decision task. These results indicate that the access to a morpho-orthographic level of representation is not always necessary for lexical identification, which challenges models of visual word identification that cannot account for task-induced effects. |
Jan Bernard C. Marsman; Remco Renken; Koen V. Haak; Frans W. Cornelissen Linking cortical visual processing to viewing behaviour using fMRI Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 109, 2013. @article{Marsman2013, One characteristic of natural visual behavior in humans is the frequent shifting of eye position. It has been argued that the characteristics of these eye movements can be used to distinguish between distinct modes of visual processing (Unema et al., 2005). These viewing modes would be distinguishable on the basis of the eye-movement parameters fixation duration and saccade amplitude and have been hypothesized to reflect the differential involvement of dorsal and ventral systems in saccade planning and information processing. According to this hypothesis, on the one hand, while in a "pre-attentive" or ambient mode, primarily scanning eye movements are made; in this mode fixations are relatively brief and saccades tend to be relatively large. On the other hand, in the "attentive" or focal mode, fixations last longer and saccades are relatively small, resulting in viewing behavior which could be described as detailed inspection. Thus far, no neuroscientific basis exists to support the idea that such distinct viewing modes are indeed linked to processing in distinct cortical regions. Here, we used fixation-based event-related (FIBER) fMRI in combination with independent component analysis (ICA) to investigate the neural correlates of these viewing modes. While we find robust eye-movement-related activations, our results do not support the theory that the above-mentioned viewing modes modulate dorsal and ventral processing. Instead, further analyses revealed that eye-movement characteristics such as saccade amplitude and fixation duration did differentially modulate activity in three clusters in early, ventromedial and ventrolateral visual cortex. 
In summary, we conclude that evaluating viewing behavior is crucial for unraveling cortical processing in natural vision. |
Jun Maruta; Kristin J. Heaton; Elisabeth M. Kryskow; Alexis L. Maule; Jamshid Ghajar Dynamic visuomotor synchronization: Quantification of predictive timing Journal Article In: Behavior Research Methods, vol. 45, no. 1, pp. 289–300, 2013. @article{Maruta2013, When a moving target is tracked visually, spatial and temporal predictions are used to circumvent the neural delay required for the visuomotor processing. In particular, the internally generated predictions must be synchronized with the external stimulus during continuous tracking. We examined the utility of a circular visual-tracking paradigm for assessment of predictive timing, using normal human subjects. Disruptions of gaze-target synchronization were associated with anticipatory saccades that caused the gaze to be temporarily ahead of the target along the circular trajectory. These anticipatory saccades indicated preserved spatial prediction but suggested impaired predictive timing. We quantified gaze-target synchronization with several indices, whose distributions across subjects were such that instances of extremely poor performance were identifiable outside the margin of error determined by test-retest measures. Because predictive timing is an important element of attention functioning, the visual-tracking paradigm and dynamic synchronization indices described here may be useful for attention assessment. |
Tommaso Mastropasqua; Massimo Turatto Perceptual grouping enhances visual plasticity Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e53683, 2013. @article{Mastropasqua2013, Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. |
Sebastiaan Mathôt; Jan Theeuwes A reinvestigation of the reference frame of the tilt-adaptation aftereffect Journal Article In: Scientific Reports, vol. 3, pp. 1152, 2013. @article{Mathot2013, The tilt-adaptation aftereffect (TAE) is the phenomenon that prolonged perception of a tilted 'adapter' stimulus affects the perceived tilt of a subsequent 'tester' stimulus. Although it is clear that TAE is strongest when adapter and tester are presented at the same location, the reference frame of the effect is debated. Some authors have reported that TAE is spatiotopic (world centred): It occurs when adapter and tester are presented at the same display location, even when this corresponds to different retinal locations. Others have reported that TAE is exclusively retinotopic (eye centred): It occurs only when adapter and tester are presented at the same retinal location, even when this corresponds to different display locations. Because this issue is crucial for models of transsaccadic perception, we reinvestigated the reference frame of TAE. We report that TAE is exclusively retinotopic, supporting the notion that there is no transsaccadic integration of low-level visual information. |
Sebastiaan Mathôt; Lotje Linden; Jonathan Grainger; Françoise Vitu The pupillary light response reveals the focus of covert visual attention. Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e78168, 2013. @article{Mathot2013a, The pupillary light response is often assumed to be a reflex that is not susceptible to cognitive influences. In line with recent converging evidence, we show that this reflexive view is incomplete, and that the pupillary light response is modulated by covert visual attention: Covertly attending to a bright area causes a pupillary constriction, relative to attending to a dark area under identical visual input. This attention-related modulation of the pupillary light response predicts cuing effects in behavior, and can be used as an index of how strongly participants attend to a particular location. Therefore, we suggest that pupil size may offer a new way to continuously track the focus of covert visual attention, without requiring a manual response from the participant. The theoretical implication of this finding is that the pupillary light response is neither fully reflexive, nor under complete voluntary control, but is instead best characterized as a stereotyped response to a voluntarily selected target. In this sense, the pupillary light response is similar to saccadic and smooth pursuit eye movements. Together, eye movements and the pupillary light response maximize visual acuity, stabilize visual input, and selectively filter visual information as it enters the eye. |
Benjamin P. Meek; Keri Locheed; Jane M. Lawrence-Dewar; Paul Shelton; Jonathan J. Marotta Posterior cortical atrophy: An investigation of scan paths generated during face matching tasks Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 309, 2013. @article{Meek2013, When viewing a face, healthy individuals focus more on the area containing the eyes and upper nose in order to retrieve important featural and configural information. In contrast, individuals with face blindness (prosopagnosia) tend to direct fixations toward individual facial features-particularly the mouth. Presented here is an examination of face perception deficits in individuals with Posterior Cortical Atrophy (PCA). PCA is a rare progressive neurodegenerative disorder that is characterized by atrophy in occipito-parietal and occipito-temporal cortices. PCA primarily affects higher visual processing, while memory, reasoning, and insight remain relatively intact. A common symptom of PCA is a decreased effective field of vision caused by the inability to "see the whole picture." Individuals with PCA and healthy control participants completed a same/different discrimination task in which images of faces were presented as cue-target pairs. Eye-tracking equipment and a novel computer-based perceptual task-the Viewing Window paradigm-were used to investigate scan patterns when faces were presented in open view or through a restricted-view, respectively. In contrast to previous prosopagnosia research, individuals with PCA each produced unique scan paths that focused on non-diagnostically useful locations. This focus on non-diagnostically useful locations was also present when using a restricted viewing aperture, suggesting that individuals with PCA have difficulty processing the face at either the featural or configural level. 
In fact, it appears that the decreased effective field of view in PCA patients is so severe that it results in an extreme dependence on local processing, such that a feature-based approach is not even possible. |
M. Meeter; Stefan Van der Stigchel Visual priming through a boost of the target signal: Evidence from saccadic landing positions Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 7, pp. 1336–1341, 2013. @article{Meeter2013, The present study focuses on the effects of priming on visual selection. Repetition breeds success. This is the case over the long run, when repeating a certain act leads to learning, but also in the very short one: Over and over again, it has been found that the action that was performed last is primed to be performed again. In the visual search literature, such repetition priming has been studied extensively. Priming in visual search has been found for different target features: for the layout of the search scene, for its size, for the to-be given response, and for interactions between these factors. Using reaction time measures, as is typically done in priming studies, these possibilities cannot be disentangled from one another, since the measures reflect both pre- and post attentional processing and cannot dissociate the strength of the individual signals of target and distractor. Priming refers to a broad range of behavioral phenomena. It would be hard to argue that an enhancement of the target signal is the only mechanism involved in priming. For instance, distractor repetition speeds search even when target features are not repeated, suggesting that some form of distractor suppression or discounting also plays a role. |
Noya Meital; Sebastian Peter Korinth; Avi Karni Plasticity in the adult oculomotor system: Offline consolidation phase gains in saccade sequence learning Journal Article In: Brain Research, vol. 1528, pp. 42–48, 2013. @article{Meital2013, When do adults gain in learning an oculomotor sequence? Here we show that oculomotor training can result not only in performance gains within the training session, but also induce robust offline gains in both speed and accuracy. Participants were trained and tested over two consecutive days to perform a sequence of successive saccades. Saccades were directed to four target letters, presented simultaneously at fixed positions. A two-alternative forced-choice question, after each trial, ensured that all targets were perceived. Eye tracking measures were tested at the beginning and end of the training session as well as at 24 h post-training. Practice resulted in within-session gains in accuracy and a reduction of target fixation duration (although total trial duration remained unchanged). In addition, the total average path length traveled by the eye increased, reflecting a decrease in undershoot saccades. At 24 h post-training, however, additional gains were expressed in both speed and accuracy of performance; the total trial duration as well as the fixation-position-offsets and the number of corrective saccades decreased. The expression of delayed gains indicates offline skill consolidation processes in the eye-movement control system. Our results show that the optimization of some aspects, specifically saccade speed parameters, of oculomotor sequence performance evolves mainly offline, during the post-training consolidation phase, a pattern suggestive of learning in an expert system. |
Weston Pack; Thom Carney; Stanley A. Klein Involuntary attention enhances identification accuracy for unmasked low contrast letters using non-predictive peripheral cues Journal Article In: Vision Research, vol. 89, pp. 79–89, 2013. @article{Pack2013, There is controversy regarding whether or not involuntary attention improves response accuracy at a cued location when the cue is non-predictive and if these cueing effects are dependent on backward masking. Various perceptual and decisional mechanisms of performance enhancement have been proposed, such as signal enhancement, noise reduction, spatial uncertainty reduction, and decisional processes. Herein we review a recent report of mask-dependent accuracy improvements with low contrast stimuli and demonstrate that the experiments contained stimulus artifacts whereby the cue impaired perception of low contrast stimuli, leading to an absence of improved response accuracy with unmasked stimuli. Our experiments corrected these artifacts by implementing an isoluminant cue and increasing its distance relative to the targets. The results demonstrate that cueing effects are robust for unmasked stimuli presented in the periphery, resolving some of the controversy concerning cueing enhancement effects from involuntary attention and mask dependency. Unmasked low contrast and/or short duration stimuli as implemented in these experiments may have a short enough iconic decay that the visual system functions similarly as if a mask were present leading to improved accuracy with a valid cue. |
Daniel S. Pages; Jennifer M. Groh Looking at the ventriloquist: Visual outcome of eye movements calibrates sound localization Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e72562, 2013. @article{Pages2013, A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. 
Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity. |
Maciej Pajak; Antje Nuthmann Object-based saccadic selection during scene perception: Evidence from viewing position effects Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–21, 2013. @article{Pajak2013, The goal of the present study was to further test the hypothesis that objects are important units of saccade targeting and, by inference, attentional selection in real-world scene perception. To this end, we investigated where people fixate within objects embedded in natural scenes. Previously, we reported a preferred viewing location (PVL) close to the center of objects (Nuthmann & Henderson, 2010). Here, we qualify this basic finding by showing that the PVL is affected by object size and the distance between the object and the previous fixation (i.e., launch site distance). Moreover, we examined how within-object fixation position affected subsequent eye-movement behavior on the object. Unexpectedly, there was no refixation optimal viewing position (OVP) effect for objects in scenes. Where viewers initially placed their eyes on an object did not affect the likelihood of refixating that object, suggesting that some refixations on objects in scenes are made for reasons other than insufficient visual information. A fixation-duration inverted optimal viewing position (IOVP) effect was found for large objects: Fixations located at object center were longer than those falling near the edges of an object. Collectively, these findings lend further support to the notion of object-based saccade targeting in scenes. |
Simon Palmer; Uwe Mattler Masked stimuli modulate endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 486–503, 2013. @article{Palmer2013, Unconscious stimuli can influence participants' motor behavior but also more complex mental processes. Recent research has gradually extended the limits of effects of unconscious stimuli. One field of research where such limits have been proposed is spatial cueing, where exogenous automatic shifts of attention have been distinguished from endogenous controlled processes which govern voluntary shifts of attention. Previous evidence suggests unconscious effects on mechanisms of exogenous shifts of attention. Here, we applied a cue-priming paradigm to a spatial cueing task with arbitrary cues by centrally presenting a masked symmetrical prime before every cue stimulus. We found priming effects on response times in target discrimination tasks with the typical dynamic of cue-priming effects (Experiments 1 and 2), indicating that central symmetrical stimuli which have been associated with endogenous orienting can modulate shifts of spatial attention even when they are masked. Prime-cue congruency effects of perceptually dissimilar prime and cue stimuli (Experiment 3) suggest that these effects cannot be entirely reduced to perceptual repetition priming of cue processing. In addition, priming effects did not differ between participants with good and poor prime recognition performance, consistent with the view that unconscious stimulus features have access to processes of endogenous shifts of attention. |
Simon Palmer; Uwe Mattler On the source and scope of priming effects of masked stimuli on endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 528–544, 2013. @article{Palmer2013a, Unconscious stimuli can influence participants' motor behavior as well as more complex mental processes. Previous cue-priming experiments demonstrated that masked cues can modulate endogenous shifts of spatial attention as measured by choice reaction time tasks. Here, we applied a signal detection task with masked luminance targets to determine the source and the scope of effects of masked stimuli. Target-detection performance was modulated by prime-cue congruency, indicating that prime-cue congruency modulates signal enhancement at early levels of target processing. These effects, however, were only found when the prime was perceptually similar to the cue, indicating that primes influence early target processing in an indirect way by facilitating cue processing. Together with previous research, we conclude that masked stimuli can modulate perceptual and post-central levels of processing. These findings mark a new limit of the effects of unconscious stimuli, which seem to have a smaller scope than conscious stimuli. |
Jinger Pan; Ming Yan; Jochen Laubrock; Hua Shu; Reinhold Kliegl Eye-voice span during rapid automatized naming of digits and dice in Chinese normal and dyslexic children Journal Article In: Developmental Science, vol. 16, no. 6, pp. 967–979, 2013. @article{Pan2013, We measured Chinese dyslexic and control children's eye movements during rapid automatized naming (RAN) with alphanumeric (digits) and symbolic (dice surfaces) stimuli. Both types of stimuli required identical oral responses, controlling for effects associated with speech production. Results showed that naming dice was much slower than naming digits for both groups, but group differences in eye-movement measures and in the eye-voice span (i.e. the distance between the currently fixated item and the voiced item) were generally larger in digit-RAN than in dice-RAN. In addition, dyslexics were less efficient in parafoveal processing in these RAN tasks. Since the two RAN tasks required the same phonological output and on the assumption that naming dice is less practiced than naming digits in general, the results suggest that the translation of alphanumeric visual symbols into phonological codes is less efficient in dyslexic children. The dissociation of the print-to-sound conversion and phonological representation suggests that the degree of automaticity in translation from visual symbols to phonological codes in addition to phonological processing per se is also critical to understanding dyslexia. |
Muriel T. N. Panouillères; N. Alahyane; C. Urquizar; Roméo Salemme; Norbert Nighoghossian; B. Gaymard; C. Tilikete; D. Pélisson Effects of structural and functional cerebellar lesions on sensorimotor adaptation of saccades Journal Article In: Experimental Brain Research, vol. 231, no. 1, pp. 1–11, 2013. @article{Panouilleres2013, The cerebellum is critically involved in the adaptation mechanisms that maintain the accuracy of goal-directed acts such as saccadic eye movements. Two categories of saccades, each relying on different adaptation mechanisms, are defined: reactive (externally triggered) saccades and voluntary (internally triggered) saccades. The contribution of the medio-posterior part of the cerebellum to reactive saccades adaptation has been clearly demonstrated, but the evidence that other parts of the cerebellum are also involved is limited. Moreover, the cerebellar substrates of voluntary saccades adaptation have only been marginally investigated. Here, we addressed these two questions by investigating the adaptive capabilities of patients with cerebellar or pre-cerebellar stroke. We recruited three groups of patients presenting focal lesions located, respectively, in the supero-anterior cerebellum, the infero-posterior cerebellum and the lateral medulla (leading to a Wallenberg syndrome including motor dysfunctions similar to those resulting from lesion of the medio-posterior cerebellum). Adaptations of reactive saccades and of voluntary saccades were tested during separate sessions in all patients and in a group of healthy participants. The functional lesion of the medio-posterior cerebellum in Wallenberg syndrome strongly impaired the adaptation of both reactive and voluntary saccades. In contrast, patients with lesion in the supero-anterior part of the cerebellum presented a specific adaptation deficit of voluntary saccades. Finally, patients with an infero-posterior cerebellar lesion showed mild adaptation deficits. 
We conclude that the medio-posterior cerebellum is critical for the adaptation of both saccade categories, whereas the supero-anterior cerebellum is specifically involved in the adaptation of voluntary saccades. |
Muriel T. N. Panouillères; Solène Frismand; Olivier Sillan; Christian Urquizar; Alain Vighetto; Denis Pélisson; Caroline Tilikete Saccades and eye-head coordination in ataxia with oculomotor apraxia type 2 Journal Article In: Cerebellum, vol. 12, no. 4, pp. 557–567, 2013. @article{Panouilleres2013a, Ataxia with oculomotor apraxia type 2 (AOA2) is one of the most frequent autosomal recessive cerebellar ataxias. Oculomotor apraxia refers to horizontal gaze failure due to deficits in voluntary/reactive eye movements. These deficits can manifest as increased latency and/or hypometria of saccades with a staircase pattern and are frequently associated with compensatory head thrust movements. Oculomotor disturbances associated with AOA2 have been poorly studied, mainly because the diagnosis of oculomotor apraxia was based on the presence of compensatory head thrusts. The aim of this study was to characterise the nature of horizontal gaze failure in patients with AOA2 and to demonstrate oculomotor apraxia even in the absence of head thrusts. Five patients with AOA2, without head thrusts, were tested in saccadic tasks with the head restrained or free to move, and their performance was compared to a group of six healthy participants. The most salient deficit of the patients was saccadic hypometria with a typical staircase pattern. Saccade latency in the patients was longer than controls only for memory-guided saccades. In the head-free condition, head movements were delayed relative to the eye and their amplitude and velocity were strongly reduced compared to controls. Our study emphasises that in AOA2, hypometric saccades with a staircase pattern are a more reliable sign of oculomotor apraxia than head thrust movements. In addition, the variety of eye and head movement deficits suggests that, although the main neural degeneration in AOA2 affects the cerebellum, this disease affects other structures. |
Muriel T. N. Panouillères; Valérie Gaveau; Camille Socasau; Christian Urquizar; Denis Pélisson Brain processing of visual information during fast eye movements maintains motor performance Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e54641, 2013. @article{Panouilleres2013b, Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution–by shifting a visual target at saccade onset and blanking it at saccade offset–induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation. |
Masayuki Matsumoto; Masahiko Takada Distinct representations of cognitive and motivational signals in midbrain dopamine neurons Journal Article In: Neuron, vol. 79, no. 5, pp. 1011–1024, 2013. @article{Matsumoto2013, Dopamine is essential to cognitive functions. However, despite abundant studies demonstrating that dopamine neuron activity is related to reinforcement and motivation, little is known about what signals dopamine neurons convey to promote cognitive processing. We therefore examined dopamine neuron activity in monkeys performing a delayed matching-to-sample task that required working memory and visual search. We found that dopamine neurons responded to task events associated with cognitive operations. A subset of dopamine neurons were activated by visual stimuli if the monkey had to store the stimuli in working memory. These neurons were located dorsolaterally in the substantia nigra pars compacta, whereas ventromedial dopamine neurons, some in the ventral tegmental area, represented reward prediction signals. Furthermore, dopamine neurons monitored visual search performance, becoming active when the monkey made an internal judgment that the search was successfully completed. Our findings suggest an anatomical gradient of dopamine signals along the dorsolateral-ventromedial axis of the ventral midbrain. |
Maria Matziridi; Eli Brenner; Jeroen B. J. Smeets In: PLoS ONE, vol. 8, no. 4, pp. e62436, 2013. @article{Matziridi2013, A stimulus that is flashed around the time of a saccade tends to be mislocalized in the direction of the saccade target. Our question is whether the mislocalization is related to the position of the saccade target within the image or to the gaze position at the end of the saccade. We separated the two with a visual illusion that influences the perceived distance to the target of the saccade and thus saccade endpoint without affecting the perceived position of the saccade target within the image. We asked participants to make horizontal saccades from the left to the right end of the shaft of a Müller-Lyer figure. Around the time of the saccade, we flashed a bar at one of five possible positions and asked participants to indicate its location by touching the screen. As expected, participants made shorter saccades along the fins-in (<->) configuration than along the fins-out (>-<) configuration of the figure. The illusion also influenced the mislocalization pattern during saccades, with flashes presented with the fins-out configuration being perceived beyond flashes presented with the fins-in configuration. The difference between the patterns of mislocalization for bars flashed during the saccade for the two configurations corresponded quantitatively with a prediction based on compression towards the saccade endpoint considering the magnitude of the effect of the illusion on saccade amplitude. We conclude that mislocalization is related to the eye position at the end of the saccade, rather than to the position of the saccade target within the image. |
Ashleigh M. Maxcey-Richard; Andrew Hollingworth The strategic retention of task-relevant objects in visual working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 3, pp. 760–772, 2013. @article{MaxceyRichard2013, The serial and spatially extended nature of many real-world visual tasks suggests the need for control over the content of visual working memory (VWM). We examined the management of VWM in a task that required participants to prioritize individual objects for retention during scene viewing. There were 5 principal findings: (a) Strategic retention of task-relevant objects was effective and was dissociable from the current locus of visual attention; (b) strategic retention was implemented by protection from interference rather than by preferential encoding; (c) this prioritization was flexibly transferred to a new object as task demands changed; (d) no-longer-relevant items were efficiently eliminated from VWM; and (e) despite this level of control, attended and fixated objects were consolidated into VWM regardless of task relevance. These results are consistent with a model of VWM control in which each fixated object is automatically encoded into VWM, replacing a portion of the content in VWM. However, task-relevant objects can be selectively protected from replacement. |
Olivia M. Maynard; Marcus R. Munafò; Ute Leonards Visual attention to health warnings on plain tobacco packaging in adolescent smokers and non-smokers Journal Article In: Addiction, vol. 108, no. 2, pp. 413–419, 2013. @article{Maynard2013, AIMS: Previous research with adults indicates that plain packaging increases visual attention to health warnings in adult non-smokers and weekly smokers, but not daily smokers. The present research extends this study to adolescents aged 14-19 years. DESIGN: Mixed-model experimental design, with smoking status as a between-subjects factor and pack type (branded or plain pack) and eye gaze location (health warning or branding) as within-subjects factors. SETTING: Three secondary schools in Bristol, UK. PARTICIPANTS: A convenience sample of adolescents comprising never-smokers (n = 26), experimenters (n = 34), weekly smokers (n = 13) and daily smokers (n = 14). MEASUREMENTS: Number of eye movements to health warnings and branding on plain and branded packs. FINDINGS: Analysis of variance revealed that, irrespective of smoking status, there were more eye movements to health warnings than branding on plain packs, but an equal number of eye movements to both regions on branded packs (P = 0.033). This was observed among experimenters (P < 0.001) and weekly smokers (P = 0.047), but not among never-smokers or daily smokers. CONCLUSION: Among experimenters and weekly smokers, plain packaging increases visual attention to health warnings and away from branding. Daily smokers, even relatively early in their smoking careers, seem to avoid the health warnings on cigarette packs. Adolescent never-smokers attend the health warnings preferentially on both types of packs, a finding which may reflect their decision not to smoke. |
Ulrich Mayr; David Kuhns; Miranda Rieter Eye movements reveal dynamics of task control Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 2, pp. 489–509, 2013. @article{Mayr2013, With the goal to determine the cognitive architecture that underlies flexible changes of control settings, we assessed within-trial and across-trial dynamics of attentional selection by tracking of eye movements in the context of a cued task-switching paradigm. Within-trial dynamics revealed a switch-induced, discrete delay in onset of task-congruent fixations, a result that is consistent with a higher level configuration process. Next, we derived predictions about the trial-to-trial dynamic coupling of control settings from competing models, assuming that control is achieved either through task-level competition or through higher level configuration processes. Empirical coupling dynamics between trial n-1 eye movements and trial n response times–estimated through mixed linear modeling–revealed a pattern that was consistent with the higher level configuration model. The results indicate that a combination of eye movement data and mixed modeling methods can yield new constraints on models of flexible control. This general approach can be useful in any domain in which theoretical progress depends on high-resolution information about dynamic relationships within individuals. |
Kathryn L. McCabe; Jessica L. Melville; Dominique Rich; Paul A. Strutt; Gavin Cooper; Carmel M. Loughland; Ulrich Schall; Linda E. Campbell Divergent patterns of social cognition performance in autism and 22q11.2 deletion syndrome (22q11DS) Journal Article In: Journal of Autism and Developmental Disorders, vol. 43, no. 8, pp. 1926–1934, 2013. @article{McCabe2013, Individuals with developmental disorders frequently report a range of social cognition deficits including difficulties identifying facial displays of emotion. This study examined the specificity of face emotion processing deficits in adolescents with either autism or 22q11DS compared to typically developing (TD) controls. Two tasks (face emotion recognition and weather scene recognition) were used to explore group differences in visual scanpath strategy and concurrent recognition accuracy. For faces, the autism and 22q11DS groups demonstrated lower emotion recognition accuracy and fewer fixations compared to the TD group. Individuals with autism demonstrated fewer fixations to some weather scene stimuli compared to 22q11DS and TD groups, yet achieved a level of recognition accuracy comparable to the TD group. These findings provide evidence for a divergent pattern of social cognition dysfunction in autism and 22q11DS. |
Michael B. McCamy; Niamh Collins; Jorge Otero-Millan; Mohammed Al-Kalbani; Stephen L. Macknik; Davis Coakley; Xoana G. Troncoso; Gerard Boyle; Vinodh Narayanan; Thomas R. R. Wolf; Susana Martinez-Conde Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system Journal Article In: PeerJ, vol. 1, pp. 1–18, 2013. @article{McCamy2013, Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so called "fixational eye movements", which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, <0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT's small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades. |
Michael B. McCamy; Ali Najafian Jazi; Jorge Otero-Millan; Stephen L. Macknik; Susana Martinez-Conde The effects of fixation target size and luminance on microsaccades and square-wave jerks Journal Article In: PeerJ, vol. 1, pp. 1–12, 2013. @article{McCamy2013a, A large number of classic and contemporary vision studies require subjects to fixate a target. Target fixation serves as a normalizing factor across studies, promoting the field's ability to compare and contrast experiments. Yet, fixation target parameters, including luminance, contrast, size, shape and color, vary across studies, potentially affecting the interpretation of results. Previous research on the effects of fixation target size and luminance on the control of fixation position rendered conflicting results, and no study has examined the effects of fixation target characteristics on square-wave jerks (SWJs), the most common type of saccadic intrusion. Here we set out to determine the effects of fixation target size and luminance on the characteristics of microsaccades and SWJs, over a large range of stimulus parameters. Human subjects fixated a circular target with varying luminance and size while we recorded their eye movements with an infrared video tracker (EyeLink 1000, SR Research). We detected microsaccades and SWJs automatically with objective algorithms developed previously. Microsaccade rates decreased linearly and microsaccade magnitudes increased linearly with target size. The percent of microsaccades forming part of SWJs decreased, and the time from the end of the initial SWJ saccade to the beginning of the second SWJ saccade (SWJ inter-saccadic interval; ISI) increased with target size. The microsaccadic preference for horizontal direction also decreased moderately with target size. Target luminance did not significantly affect microsaccades or SWJs, however. 
In the absence of a fixation target, microsaccades became scarcer and larger, while SWJ prevalence decreased and SWJ ISIs increased. Thus, the choice of fixation target can affect experimental outcomes, especially in human factors and in visual and oculomotor studies. These results have implications for previous and future research conducted under fixation conditions, and should encourage forthcoming studies to report the size of fixation targets to aid the interpretation and replication of their results. |
Richard McFarland; Hettie Roebuck; Yin Yan; Bonaventura Majolo; Wu Li; Kun Guo Social interactions through the eyes of macaques and humans Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e56437, 2013. @article{McFarland2013, Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention to humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer; i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals was systematically varied according to the viewed species and the nature of interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys' gaze behavior was qualitatively similar to that of humans, especially when viewing negative interactions. 
Detailed analysis revealed that both species directed more gaze at the face than the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of social attention between rhesus macaques and humans. |
Eugene McSorley; Carien M. Van Reekum The time course of implicit affective picture processing: An eye movement study Journal Article In: Emotion, vol. 13, no. 4, pp. 769–773, 2013. @article{McSorley2013, Consistent with a negativity bias account, neuroscientific and behavioral evidence demonstrates modulation of even early sensory processes by unpleasant, potentially threat-relevant information. The aim of this research is to assess the extent to which pleasant and unpleasant visual stimuli presented extrafoveally capture attention and impact eye movement control. We report an experiment examining deviations in saccade metrics in the presence of emotional image distractors that are close to a nonemotional target. We additionally manipulate the saccade latency to test when the emotional distractor has its biggest impact on oculomotor control. The results demonstrate that saccade landing position was pulled toward unpleasant distractors, and that this pull was due to the quick saccade responses. Overall, these findings support a negativity bias account of early attentional control and call for the need to consider the time course of motivated attention when affect is implicit. |
Angelina Paolozza; Rebecca Titman; Donald Brien; Douglas P. Munoz; James N. Reynolds Altered accuracy of saccadic eye movements in children with fetal alcohol spectrum disorder Journal Article In: Alcoholism: Clinical and Experimental Research, vol. 37, no. 9, pp. 1491–1498, 2013. @article{Paolozza2013, Background: Prenatal exposure to alcohol is a major, preventable cause of neurobehavioral dysfunction in children worldwide. The measurement and quantification of saccadic eye movements is a powerful tool for assessing sensory, motor, and cognitive function. The quality of the motor process of an eye movement is known as saccade metrics. Saccade accuracy is 1 component of metrics, which, to function optimally, requires several cortical brain structures as well as an intact cerebellum and brainstem. The cerebellum has frequently been reported to be damaged by prenatal alcohol exposure. This study, therefore, tested the hypothesis that children with fetal alcohol spectrum disorder (FASD) will exhibit deficits in the accuracy of saccades. Methods: A group of children with FASD (n = 27) between the ages of 8 and 16 and typically developing control children (n = 27) matched for age and sex, completed 3 saccadic eye movement tasks of increasing difficulty. Eye movement performance during the tasks was captured using an infrared eye tracker. Saccade metrics (e.g., velocity, amplitude, accuracy) were quantified and compared between the 2 groups for the 3 different tasks. Results: Children with FASD were more variable in saccade endpoint accuracy, which was reflected by statistically significant increases in the error of the initial saccade endpoint and the frequency of additional, corrective saccades required to achieve final fixation. This increased variability in accuracy was amplified when the cognitive demand of the tasks increased. 
Children with FASD also displayed a statistically significant increase in response inhibition errors. Conclusions: These data suggest that children with FASD may have deficits in eye movement control and sensory-motor integration including cerebellar circuits, thereby impairing saccade accuracy. |
Alexander Pastukhov; Victoria Vonau; Solveiga Stonkute; Jochen Braun Spatial and temporal attention revealed by microsaccades Journal Article In: Vision Research, vol. 85, pp. 45–57, 2013. @article{Pastukhov2013, We compared the spatial and temporal allocation of attention as revealed by microsaccades. Observers viewed several concurrent "rapid serial visual presentation" (RSVP) streams in the periphery while maintaining fixation. They continually attended to, and discriminated targets in one particular, cued stream. Over and above this continuous allocation, spatial attention transients ("attention shifts") were prompted by changes in the cued stream location and temporal attention transients ("attentional blinks") by successful target discriminations. Note that the RSVP paradigm avoided the preparatory suppression of microsaccades in anticipation of stimulus or task events, which had been prominent in earlier studies. Both stream changes and target discriminations evoked residual modulations of microsaccade rate and direction, which were consistent with the presumed attentional dynamics in each case (i.e., attention shift and attentional blink, respectively). Interestingly, even microsaccades associated with neither stream change nor target discrimination reflected the continuous allocation of attention, inasmuch as their direction was aligned with the meridian of the target stream. We conclude that attentional allocation shapes microsaccadic activity continuously, not merely during dynamic episodes such as attentional shifts or blinks. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Filtered text reveals adult age differences in reading: Evidence from eye movements Journal Article In: Psychology and Aging, vol. 28, no. 2, pp. 352–364, 2013. @article{Paterson2013, Sensitivity to certain spatial frequencies declines with age and this may have profound effects on reading performance. However, the spatial frequency content of text actually used by older adults (aged 65+), and how this differs from that used by young adults (aged 18-30), remains to be determined. To investigate this issue, the eye movement behavior of young and older adult readers was assessed using a gaze-contingent moving-window paradigm in which text was shown normally within a region centered at the point of gaze, whereas text outside this region was filtered to contain only low, medium, or high spatial frequencies. For young adults, reading times were affected by spatial frequency content when windows of normal text extended up to nine characters wide. Within this processing region, the reading performance of young adults was affected little when text outside the window contained either only high or medium spatial frequencies, but was disrupted substantially when text contained only low spatial frequencies. By contrast, the reading performance of older adults was affected by spatial frequency content when windows extended up to 18 characters wide. Moreover, within this extended processing region, reading performance was disrupted when text contained any one band of spatial frequencies, but was disrupted most of all when text contained only high spatial frequencies. These findings indicate that older adults are sensitive to the spatial frequency content of text from a much wider region than young adults, and rely much more than young adults on coarse-scale components of text when reading. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Effects of adult aging on reading filtered text: Evidence from eye movements Journal Article In: PeerJ, vol. 1, pp. 1–16, 2013. @article{Paterson2013a, Objectives. Sensitivity to spatial frequencies changes with age and this may have profound effects on reading. But how the actual contributions to reading performance made by the spatial frequency content of text differ between young (18-30 years) and older (65+ years) adults remains to be fully determined. Accordingly, we manipulated the spatial frequency content of text and used eye movement measures to assess the effects on reading performance in both age groups. Method. Sentences were displayed as normal or filtered to contain only very low, low, medium, high, or very high spatial frequencies. Reading time and eye movements were recorded as participants read each sentence. Results. Both age groups showed good overall reading ability and high levels of comprehension. However, for young adults, normal performance was impaired only by low and very low spatial frequencies, whereas normal performance for older adults was impaired by all spatial frequencies but least of all by medium. Conclusion. While both young and older adults read and comprehended well, reading ability was supported by different spatial frequencies in each age group. Thus, although spatial frequency sensitivity can change with age, adaptive responses to this change can help maintain reading performance in later life. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Aging and the control of binocular fixations during reading Journal Article In: Psychology and Aging, vol. 28, no. 3, pp. 789–795, 2013. @article{Paterson2013b, Older adults (65+ years) often have greater difficulty in reading than young adults (18–30 years). However, the extent to which this difficulty is attributable to impaired eye-movement control is uncertain. To address this issue, the alignment and location of the two eyes' fixations during reading were monitored for young and older adults. Older adults showed typical patterns of reading difficulty but the results revealed no age differences in the alignment or location of the two eyes' fixations. Thus, the difficulty older adults experience in reading is not related to oculomotor control, which appears to be preserved into older age. |
Pierre-Vincent Paubel; Philippe Averty; Éric Raufaste Effects of an automated conflict solver on the visual activity of air traffic controllers Journal Article In: International Journal of Aviation Psychology, vol. 23, no. 2, pp. 181–196, 2013. @article{Paubel2013, ERASMUS is a "subliminal" automated aid system designed to reduce air traffic controllers' workload. Prior experiments showed that ERASMUS reduced subjective ratings of mental workload. In this article, the effect of ERASMUS on objective measures of controllers' visual activity was tested in a fully realistic simulation environment. The eye movements of 7 controllers were recorded during experimental traffic sequences, with and without ERASMUS. Consistent with a reduced workload hypothesis, results showed medium to large effects of ERASMUS on the amplitude of saccades, on the time spent gazing at aircraft, and on the distribution of attention over the visual scene. |
Christopher J. Peck; Brian Lau; C. Daniel Salzman The primate amygdala combines information about space and value Journal Article In: Nature Neuroscience, vol. 16, no. 3, pp. 340–348, 2013. @article{Peck2013, A stimulus predicting reinforcement can trigger emotional responses, such as arousal, and cognitive ones, such as increased attention toward the stimulus. Neuroscientists have long appreciated that the amygdala mediates spatially nonspecific emotional responses, but it remains unclear whether the amygdala links motivational and spatial representations. To test whether amygdala neurons encode spatial and motivational information, we presented reward-predictive cues in different spatial configurations to monkeys and assessed how these cues influenced spatial attention. Cue configuration and predicted reward magnitude modulated amygdala neural activity in a coordinated fashion. Moreover, fluctuations in activity were correlated with trial-to-trial variability in spatial attention. Thus, the amygdala integrates spatial and motivational information, which may influence the spatial allocation of cognitive resources. These results suggest that amygdala dysfunction may contribute to deficits in cognitive processes normally coordinated with emotional responses, such as the directing of attention toward the location of emotionally relevant stimuli. |
Yan Ma; B. Douglas Ward; Kristina M. Ropella; Edgar A. Deyoe Comparison of randomized multifocal mapping and temporal phase mapping of visual cortex for clinical use Journal Article In: NeuroImage: Clinical, vol. 3, pp. 143–154, 2013. @article{Ma2013, fMRI is becoming an important clinical tool for planning and guidance of surgery to treat brain tumors, arteriovenous malformations, and epileptic foci. For visual cortex mapping, the most popular paradigm by far is temporal phase mapping, although random multifocal stimulation paradigms have drawn increased attention due to their ability to identify complex response fields and their random properties. In this study we directly compared temporal phase and multifocal vision mapping paradigms with respect to clinically relevant factors including: time efficiency, mapping completeness, and the effects of noise. Randomized, multifocal mapping accurately decomposed the response of single voxels to multiple stimulus locations and made correct retinotopic assignments as noise levels increased despite decreasing sensitivity. Also, multifocal mapping became less efficient as the number of stimulus segments (locations) increased from 13 to 25 to 49 and when duty cycle was increased from 25% to 50%. Phase mapping, on the other hand, activated more extrastriate visual areas, was more time efficient in achieving statistically significant responses, and had better sensitivity as noise increased, though with an increase in systematic retinotopic misassignments. Overall, temporal phase mapping is likely to be a better choice for routine clinical applications though random multifocal mapping may offer some unique advantages for selected applications. |
Diane E. MacKenzie; David A. Westwood Observation patterns of dynamic occupational performance Journal Article In: Canadian Journal of Occupational Therapy, vol. 80, no. 2, pp. 92–100, 2013. @article{MacKenzie2013, Background. Visual observation is a key component of both formal and informal occupational performance assessment, but it is unknown how therapists gather this visual information. Purpose. The purpose of this study was to explore observational behaviour of occupational therapists and non–health care professionals when watching videos of simulated clients post-stroke participating in everyday activity. Method. Ten licensed occupational therapists and 10 age-, gender-, and education level–matched participants completed this eye-tracking study. Findings. Contrary to our past work with static image viewing, we found limited evidence of differences in eye movement characteristics between the two groups, although results did support the role of bottom-up information, such as visual motion, as a determinant of looking behaviour. Implications. These results suggest that understanding observational behaviour in therapists can be aided with eye-tracking methodology, but future studies should probe a broad range of factors that might influence observational behaviour and performance, such as assessment goals, knowledge, and therapist experience. |
Diane E. MacKenzie; David A. Westwood Occupational therapists and observation: What are you looking at? Journal Article In: OTJR: Occupation, Participation and Health, vol. 33, no. 1, pp. 4–11, 2013. @article{MacKenzie2013a, Visual observation is a fundamental skill underlying all occupational performance assessments in occupational therapy. The purpose of this study was to determine whether eye movement patterns differ between occupational therapists and non-healthcare professionals during observation of static images portraying a client post-stroke (domain-specific content) or naturalistic scenes (domain-irrelevant content). Ten licensed occupational therapists (OT group) and 10 participants matched for age, gender, and education level (NonOT group) completed the study. Participants viewed two counterbalanced blocks of 10 images (scene and stroke) under the pretext of preparing for a memory test. The OT group differed in the viewing strategies during observation and in how they directed their eyes (higher frequency of fixations, shorter fixation durations, and increased saccade count) for domain-specific and domain-irrelevant images alike. Observation patterns used by occupational therapists are presumably related to top-down influences that are not necessarily related to domain-specific knowledge but perhaps to general experience with performing assessments using observational methods. |
Laurent Madelain; James P. Herman; Mark R. Harwood Saccade adaptation goes for the goal Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–15, 2013. @article{Madelain2013, The oculomotor system maintains saccade accuracy by adjusting saccades that are consistently inaccurate. Four experiments were performed to determine the relative contribution of background and target postsaccadic displacement. Unlike typical saccade adaptation experiments, we used natural image scenes and masked target and background displacements during the saccade to exclude motion signals from allowing detection of the displacements. We found that the background had no effect on saccade gain while the target drove gain changes. Only when the target was blanked after the saccade did we observe some adaptation in the direction of the background displacement. We conclude that target selection is critical to saccade adaptation, and operates effectively against natural image backgrounds. |
Adrian Madsen; Amy Rouinfar; Adam M. Larson; Lester C. Loschky; N. Sanjay Rebello Can short duration visual cues influence students' reasoning and eye movements in physics problems? Journal Article In: Physical Review Special Topics - Physics Education Research, vol. 9, pp. 020104, 2013. @article{Madsen2013, We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the cued condition were shown an initial problem, and if they answered that incorrectly, they were shown a series of problems each with selection and integration cues overlaid on the problem diagrams. Students in the noncued condition were also provided a series of problems, but without any visual cues. We found that significantly more participants in the cued condition answered the problems overlaid with visual cues correctly on one of the four problem sets used and a subsequent uncued problem (the transfer problem) on a different problem set. We also found that those in the cued condition spent significantly less time looking at "novicelike" areas of the diagram in the transfer problem on three of the four problem sets and significantly more time looking at the "expertlike" areas of the diagram in the transfer problem on one problem set. Thus, the use of visual cues to influence reasoning and visual attention in physics problems is promising. |
Willem M. Mak; Ted J. M. Sanders The role of causality in discourse processing: Effects of expectation and coherence relations Journal Article In: Language and Cognitive Processes, vol. 28, no. 9, pp. 1414–1437, 2013. @article{Mak2013, Research on the processing of causality has shown that causally related sentences lead to faster reading, better recall, and better comprehension than sentences that are not causally related. In this study, we investigate two ways in which causality can influence processing: through the expectation that readers may have of a causal relation and the ease with which the sentences can be related in a causal way on the basis of their content. We ran two eye tracking experiments to investigate the online effects of these factors. In the experiments we looked at the influence of these factors on the process of establishing referential and relational coherence. Experiment 1 shows that immediate effects of causal relatedness on referential processing occur even with a connective that is not explicitly causal (when). Moreover, the results show that the early effect only occurs when readers expect a causal relation. Experiment 1 also shows that causal expectations facilitate the processing of causally related sentences. Experiment 2 shows that this is only the case when the content of the second clause actually allows a causal interpretation. The data show that causal expectations have differential effects on the processing of referential and relational coherence. Referential coherence is influenced proactively by the focusing of one of the referents in the context. Relational coherence, on the other hand, is influenced retroactively: only when there turns out to be a causal link between the sentences is processing facilitated by causal expectation. |
Ryan T. Maloney; Tamara L. Watson; Colin W. G. Clifford Human cortical and behavioral sensitivity to patterns of complex motion at eccentricity Journal Article In: Journal of Neurophysiology, vol. 110, no. 11, pp. 2545–2556, 2013. @article{Maloney2013, Complex patterns of image motion (contracting, expanding, rotating, and spiraling fields) are important in the coordination of visually guided behaviors. Whereas specialized detectors in monkey visual cortex show selectivity for particular patterns of complex motion, their representation in human visual cortex remains unclear. In the present study, functional magnetic resonance imaging (fMRI) was used to investigate the sensitivity of functionally defined regions of human visual cortex to parametrically modulated complex motion trajectories, coupled with complementary psychophysical testing. A unique stimulus design made it possible to disambiguate the neural responses and psychophysical sensitivity to complex motions per se from the distribution of local motions relative to the fovea, which are known to enhance cortical activity when presented radial to fixation. This involved presenting several small, separate motion fields in the periphery in a manner that distinguished them from global optic flow patterns. The patterns were morphed through complex motion space in a systematic time-locked fashion when presented in the scanner. Anisotropies were observed in the fMRI signal, marked by an enhanced response to expanding vs. contracting fields, even in early visual cortex. Anisotropies in the psychophysical sensitivity measures followed a similar pattern that was correlated with activity in areas hV4, V5/MT, and MST. This represents the first systematic examination of complex motion perception at both a behavioral and neural level in human observers. The characteristic processing anisotropy revealed in both data sets can inform models of complex motion processing, particularly with respect to computations performed in early visual cortex. |
Florian Perdreau; Patrick Cavanagh The artist's advantage: Better integration of object information across eye movements Journal Article In: i-Perception, vol. 4, no. 6, pp. 380–395, 2013. @article{Perdreau2013, Over their careers, figurative artists spend thousands of hours analyzing objects and scene layout. We examined what impact this extensive training has on the ability to encode complex scenes, comparing participants with a wide range of training and drawing skills on a possible versus impossible objects task. We used a gaze-contingent display to control the amount of information the participants could sample on each fixation either from central or peripheral visual field. Test objects were displayed and participants reported, as quickly as possible, whether the object was structurally possible or not. Our results show that when viewing the image through a small central window, performance improved with the years of training, and to a lesser extent with the level of skill. This suggests that the extensive training itself confers an advantage for integrating object structure into more robust object descriptions. |
Manuel Perea Why does the APA recommend the use of serif fonts? Journal Article In: Psicothema, vol. 25, no. 1, pp. 13–17, 2013. @article{Perea2013, Background: The publication norms of the American Psychological Association recommend the use of a serif font in the manuscripts (Times New Roman). However, there seems to be no well-substantiated reason why serif fonts would produce any advantage during letter/word processing. Method: This study presents an experiment in which sentences were presented either with a serif or sans serif font from the same family while participants' eye movements were monitored. Results: Results did not reveal any differences of type of font in eye movement measures –except for a minimal effect in the number of progressive saccades. Conclusions: There is no reason why the APA publication norms recommend the use of serif fonts other than uniformity in the elaboration/presentation of the manuscripts. |
Laura Pérez Zapata; J. A. Aznar-Casanova; H. Supèr Two stages of programming eye gaze shifts in 3-D space Journal Article In: Vision Research, vol. 86, pp. 15–26, 2013. @article{PerezZapata2013, Accurate saccadic and vergence eye movements towards selected visual targets are fundamental to perceive the 3-D environment. Despite this importance, shifts in eye gaze are not always perfect given that they are frequently followed by small corrective eye movements. The oculomotor system receives distinct information from various visual cues that may cause incongruity in the planning of a gaze shift. To test this idea, we analyzed eye movements in humans performing a saccade task in a 3-D setting. We show that saccades and vergence movements towards peripheral targets are guided by monocular (perceptual) cues. Approximately 200 ms after the start of fixation at the perceived target, a fixational saccade corrected the eye positions to the physical target location. Our findings suggest that shifts in eye gaze occur in two phases; a large eye movement toward the perceived target location followed by a corrective saccade that directs the eyes to the physical target location. |
Adam M. Perkins; Ulrich Ettinger; K. Weaver; Anne Schmechtig; A. Schrantee; P. D. Morrison; A. Sapara; V. Kumari; Steve C. R. Williams; P. J. Corr In: Translational Psychiatry, vol. 3, pp. e246, 2013. @article{Perkins2013, Clinically effective drugs against human anxiety and fear systematically alter the innate defensive behavior of rodents, suggesting that in humans these emotions reflect defensive adaptations. Compelling experimental human evidence for this theory is yet to be obtained. We report the clearest test to date by investigating the effects of 1 and 2 mg of the anti-anxiety drug lorazepam on the intensity of threat-avoidance behavior in 40 healthy adult volunteers (20 females). We found lorazepam modulated the intensity of participants' threat-avoidance behavior in a dose-dependent manner. However, the pattern of effects depended upon two factors: type of threat-avoidance behavior and theoretically relevant measures of personality. In the case of flight behavior (one-way active avoidance), lorazepam increased intensity in low scorers on the Fear Survey Schedule tissue-damage fear but reduced it in high scorers. Conversely, in the case of risk-assessment behavior (two-way active avoidance), lorazepam reduced intensity in low scorers on the Spielberger trait anxiety but increased it in high scorers. Anti-anxiety drugs do not systematically affect rodent flight behavior; therefore, we interpret this new finding as suggesting that lorazepam has a broader effect on defense in humans than in rodents, perhaps by modulating general perceptions of threat intensity. The different patterning of lorazepam effects on the two behaviors implies that human perceptions of threat intensity are nevertheless distributed across two different neural streams, which influence effects observed on one-way or two-way active avoidance demanded by the situation. |
Melanie Perron; Annie Roy-Charland Analysis of eye movements in the judgment of enjoyment and non-enjoyment smiles Journal Article In: Frontiers in Psychology, vol. 4, pp. 659, 2013. @article{Perron2013, Enjoyment smiles are more often associated with the simultaneous presence of the Cheek raiser and Lip corner puller action units, and these units' activation is more often symmetric. Research on the judgment of smiles indicated that individuals are sensitive to these types of indices, but it also suggested that their ability to perceive these specific indices might be limited. The goal of the current study was to examine perceptual-attentional processing of smiles by using eye movement recording in a smile judgment task. Participants were presented with three types of smiles: a symmetric Duchenne, a non-Duchenne, and an asymmetric smile. Results revealed that the Duchenne smiles were judged happier than those with characteristics of non-enjoyment. Asymmetric smiles were also judged happier than the non-Duchenne smiles. Participants were as effective in judging the latter smiles as not really happy as they were in judging the symmetric Duchenne smiles as happy. Furthermore, they did not spend more time looking at the eyes or mouth regardless of types of smiles. While participants made more saccades between each side of the face for the asymmetric smiles than the symmetric ones, they judged the asymmetric smiles more often as really happy than not really happy. Thus, processing of these indices does not seem limited to perceptual-attentional difficulties as reflected in viewing behavior. |
Yoni Pertzov; Paul M. Bays; Sabine Joseph; Masud Husain Rapid forgetting prevented by retrospective attention cues Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1224–1231, 2013. @article{Pertzov2013, Recent studies have demonstrated that memory performance can be enhanced by a cue which indicates the item most likely to be subsequently probed, even when that cue is delivered seconds after a stimulus array is extinguished. Although such retro-cuing has attracted considerable interest, the mechanisms underlying it remain unclear. Here, we tested the hypothesis that retro-cues might protect an item from degradation over time. We employed two techniques that previously have not been deployed in retro-cuing tasks. First, we used a sensitive, continuous scale for reporting the orientation of a memorized item, rather than binary measures (change or no change) typically used in previous studies. Second, to investigate the stability of memory across time, we also systematically varied the duration between the retro-cue and report. Although accuracy of reporting uncued objects rapidly declined over short intervals, retro-cued items were significantly more stable, showing negligible decline in accuracy across time and protection from forgetting. Retro-cuing an object's color was just as advantageous as spatial retro-cues. These findings demonstrate that during maintenance, even when items are no longer visible, attention resources can be selectively redeployed to protect the accuracy with which a cued item can be recalled over time, but with a corresponding cost in recall for uncued items. |
Claudia Peschke; Claus C. Hilgetag; Bettina Olk Influence of stimulus type on effects of flanker, flanker position, and trial sequence in a saccadic eye movement task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2253–2267, 2013. @article{Peschke2013, Using the flanker paradigm in a task requiring eye movement responses, we examined how stimulus type (arrows vs. letters) modulated effects of flanker and flanker position. Further, we examined trial sequence effects and the impact of stimulus type on these effects. Participants responded to a central target with a left- or rightward saccade. We reasoned that arrows, being overlearned symbols of direction, are processed with less effort and are therefore linked more easily to a direction and a required response than are letters. The main findings demonstrate that (a) flanker effects were stronger for arrows than for letters, (b) flanker position more strongly modulated the flanker effect for letters than for arrows, and (c) trial sequence effects partly differed between the two stimulus types. We discuss these findings in the context of a more automatic and effortless processing of arrow relative to letter stimuli. |
Anders Petersen; Søren Kyllingsbæk Eye movements and practice effects in the attentional dwell time paradigm Journal Article In: Experimental Psychology, vol. 60, no. 1, pp. 22–33, 2013. @article{Petersen2013a, In the attentional dwell time paradigm by Duncan, Ward, and Shapiro (1994), two backward masked targets are presented at different spatial locations and separated by a varying time interval. Results show that report of the second target is severely impaired when the time interval is less than 500 ms, which has been taken as a direct measure of attentional dwell time in human vision. However, we show that eye movements may have confounded the estimate of the dwell time and that the measure may not be as robust as previously suggested. The latter is supported by evidence suggesting that intensive training strongly attenuates the dwell time because of habituation to the masks. Thus, this article points to eye movements and masking as two potential methodological pitfalls that should be considered when using the attentional dwell time paradigm to investigate the temporal dynamics of attention. |
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen Attentional dwell times for targets and masks Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–12, 2013. @article{Petersen2013, Studies on the temporal dynamics of attention have shown that the report of a masked target (T2) is severely impaired when the target is presented with a delay (stimulus onset asynchrony) of less than 500 ms after a spatially separate masked target (T1). This is known as the attentional dwell time. Recently, we have proposed a computational model of this effect building on the idea that a stimulus retained in visual short-term memory (VSTM) takes up visual processing resources that otherwise could have been used to encode subsequent stimuli into VSTM. The resources are locked until the stimulus in VSTM has been recoded, which explains the long dwell time. Challenges for this model and others are findings by Moore, Egeth, Berglan, and Luck (1996) suggesting that the dwell time is substantially reduced when the mask of T1 is removed. Here we suggest that the mask of T1 modulates performance not by noticeably affecting the dwell time but instead by acting as a distractor drawing processing resources away from T2. This is consistent with our proposed model in which targets and masks compete for attentional resources and attention dwells on both. We tested the model by replicating the study by Moore et al., including a new condition in which T1 is omitted but the mask of T1 is retained. Results from this and the original study by Moore et al. are modeled with great precision. |
Matthew F. Peterson; Miguel P. Eckstein Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation Journal Article In: Psychological Science, vol. 24, no. 7, pp. 1216–1225, 2013. @article{Peterson2013, In general, humans tend to first look just below the eyes when identifying another person. Does everybody look at the same place on a face during identification, and, if not, does this variability in fixation behavior lead to functional consequences? In two conditions, observers had their free eye movements recorded while they performed a face-identification task. In another condition, the same observers identified faces while their gaze was restricted to specific locations on each face. We found substantial differences, which persisted over time, in where individuals chose to first move their eyes. Observers' systematic departure from a canonical, theoretically optimal fixation point did not correlate with performance degradation. Instead, each individual's looking preference corresponded to an idiosyncratic performance-maximizing point of fixation: Those who looked lower on the face performed better when forced to fixate the lower part of the face. The results suggest an observer-specific synergy between the face-recognition and eye movement systems that optimizes face-identification performance. |
Judith Peth; Johann S. C. Kim; Matthias Gamer Fixations and eye-blinks allow for detecting concealed crime related memories Journal Article In: International Journal of Psychophysiology, vol. 88, no. 1, pp. 96–103, 2013. @article{Peth2013, The Concealed Information Test (CIT) is a method of forensic psychophysiology that allows for revealing concealed crime related knowledge. Such detection is usually based on autonomic responses but there is a huge interest in other measures that can be acquired unobtrusively. Eye movements and blinks might be such measures but their validity is unclear. Using a mock crime procedure with a manipulation of the arousal during the crime as well as the delay between crime and CIT, we tested whether eye tracking measures allow for detecting concealed knowledge. Guilty participants showed fewer but longer fixations on central crime details and this effect was even present after stimulus offset and accompanied by a reduced blink rate. These ocular measures were partly sensitive to the induction of emotional arousal and the time of testing. Validity estimates were moderate but indicate that a significant differentiation between guilty and innocent subjects is possible. Future research should further investigate validity differences between gaze measures during a CIT and explore the underlying mechanisms. |
Kati Pettersson; Sharman Jagadeesan; Kristian Lukander; Andreas Henelius; Edward Hæggström; Kiti Müller Algorithm for automatic analysis of electro-oculographic data Journal Article In: BioMedical Engineering Online, vol. 12, no. 1, pp. 1–17, 2013. @article{Pettersson2013, BACKGROUND: Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. METHODS: The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. RESULTS: The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. CONCLUSION: The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measure. |
Sotiris Plainis; Dionysia Petratou; Trisevgeni Giannakopoulou; Hema Radhakrishnan; Ioannis G. Pallikaris; W. Neil Charman Interocular differences in visual latency induced by reduced-aperture monovision Journal Article In: Ophthalmic and Physiological Optics, vol. 33, no. 2, pp. 123–129, 2013. @article{Plainis2013, PURPOSE: To explore the interocular differences in the temporal responses of the eyes induced by the monocular use of small-aperture optics designed to aid presbyopes by increasing their depth-of-focus. METHODS: Monocular and binocular pattern-reversal visual evoked potentials (VEPs) were measured at a mean photopic field luminance of 30 cd/m(2) in seven normal subjects with either natural pupils or when the non-dominant eye wore a small-aperture contact lens (aperture diameter 1.5, 2.5 or 3.5 mm, or an annular opaque stop of inner and outer diameters 1.5 and 4.0 mm respectively). Responses were also measured with varying stimulus luminance (5, 13.9, 27.2 and 45 cd/m(2)) and a fixed 3.0 mm artificial pupil. RESULTS: Mean natural pupil diameters were 4.7 and 4.4 mm under monocular and binocular conditions respectively. The small-aperture contact lenses reduced the amplitude of the P100 component of the VEP and increased its latency. Inter-ocular differences in latency rose to about 20-25 ms when the pupil diameter of the non-dominant eye was reduced to 1.5 mm. The measurements with fixed pupil and varying luminance suggested that the observed effects were explicable in terms of the changes in retinal illuminance produced by the restrictions in pupil area. CONCLUSIONS: The anisocoria induced by small-aperture approaches to aid presbyopes produces marked interocular differences in visual latency. The literature on the Pulfrich effect suggests that such differences can lead to distortions in the perception of relative movement and, in some cases, to possible hazard. |
Sotiris Plainis; Dionysia Petratou; Trisevgeni Giannakopoulou; Hema Radhakrishnan; Ioannis G. Pallikaris; W. Neil Charman Small-aperture monovision and the Pulfrich experience: Absence of neural adaptation effects Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75987, 2013. @article{Plainis2013a, PURPOSE: To explore whether adaptation reduces the interocular visual latency differences and the induced Pulfrich effect caused by the anisocoria implicit in small-aperture monovision. METHODS: Anisocoric vision was simulated in two adults by wearing in the non-dominant eye for 7 successive days, while awake, an opaque soft contact lens (CL) with a small, central, circular aperture. This was repeated with aperture diameters of 1.5 and 2.5 mm. Each day, monocular and binocular pattern-reversal Visual Evoked Potentials (VEP) were recorded. Additionally, the Pulfrich effect was measured: the task of the subject was to state whether a 2-deg spot appeared in front or behind the plane of a central cross when moved left-to-right or right-to-left on a display screen. The retinal illuminance of the dominant eye was varied using neutral density (ND) filters to establish the ND value which eliminated the Pulfrich effect for each lens. All experiments were performed at luminance levels of 5 and 30 cd/m(2). RESULTS: Interocular differences in monocular VEP latency (at 30 cd/m(2)) rose to about 12-15 ms and 20-25 ms when the CL aperture was 2.5 and 1.5 mm, respectively. The effect was more pronounced at 5 cd/m(2) (i.e. with larger natural pupils). A strong Pulfrich effect was observed under all conditions, with the effect being less striking for the 2.5 mm aperture. No neural adaptation appeared to occur: neither the interocular differences in VEP latency nor the ND value required to null the Pulfrich effect reduced over each 7-day period of anisocoric vision. CONCLUSIONS: Small-aperture monovision produced marked interocular differences in visual latency and a Pulfrich experience. These were not reduced by adaptation, perhaps because the natural pupil diameter of the dominant eye was continually changing throughout the day due to varying illumination and other factors, making adaptation difficult. |
Marc Pomplun; Tyler W. Garaas; Marisa Carrasco The effects of task difficulty on visual search strategy in virtual 3D displays Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–22, 2013. @article{Pomplun2013, Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. |
Holger Mitterer; Eva Reinisch No delays in application of perceptual learning in speech recognition: Evidence from eye tracking Journal Article In: Journal of Memory and Language, vol. 69, no. 4, pp. 527–545, 2013. @article{Mitterer2013, Three eye-tracking experiments tested at what processing stage lexically-guided retuning of a fricative contrast affects perception. One group of participants heard an ambiguous fricative between /s/ and /f/ replace /s/ in s-final words, the other group heard the same ambiguous fricative replacing /f/ in f-final words. In a test phase, both groups of participants heard a range of ambiguous fricatives at the end of Dutch minimal pairs (e.g., roos-. roof, 'rose'-'robbery'). Participants who heard the ambiguous fricative replacing /f/ during exposure chose at test the f-final words more often than the other participants. During this test-phase, eye-tracking data showed that the effect of exposure exerted itself as soon as it could possibly have occurred, 200. ms after the onset of the fricative. This was at the same time as the onset of the effect of the fricative itself, showing that the perception of the fricative is changed by perceptual learning at an early level. Results converged in a time-window analysis and a Jackknife procedure testing the time at which effects reached a given proportion of their maxima. This indicates that perceptual learning affects early stages of speech processing, and supports the conclusion that perceptual learning is indeed perceptual rather than post-perceptual. |