All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2013 |
Kyoung-Min Lee; Kyung-Ha Ahn The frontal eye fields limit the capacity of visual short-term memory in rhesus monkeys Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e59606, 2013. @article{Lee2013a, The frontal eye fields (FEF) in rhesus monkeys have been implicated in visual short-term memory (VSTM) as well as control of visual attention. Here we examined the importance of the area in the VSTM capacity and the relationship between VSTM and attention, using the chemical inactivation technique and multi-target saccade tasks with or without the need of target-location memory. During FEF inactivation, serial saccades to targets defined by color contrast were unaffected, but saccades relying on short-term memory were impaired when the target count was at the capacity limit of VSTM. The memory impairment was specific to the FEF-coded retinotopic locations, and subject to competition among targets distributed across visual fields. These results together suggest that the FEF plays a crucial role during the entry of information into VSTM, by enabling attention deployment on targets to be remembered. In this view, the memory capacity results from the limited availability of attentional resources provided by FEF: The FEF can concurrently maintain only a limited number of activations to register the targets into memory. When lesions render part of the area unavailable for activation, the number would decrease, further reducing the capacity of VSTM. |
Mallorie Leinenger; Keith Rayner Eye movements while reading biased homographs: Effects of prior encounter and biasing context on reducing the subordinate bias effect Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 6, pp. 665–681, 2013. @article{Leinenger2013, Readers experience processing difficulties when reading biased homographs preceded by subordinate-biasing contexts. Attempts to overcome this processing deficit have often failed to reduce the subordinate bias effect (SBE). In the present studies, we examined the processing of biased homographs preceded by single-sentence, subordinate-biasing contexts, and varied whether this preceding context contained a prior instance of the homograph or a control word/phrase. Having previously encountered the homograph earlier in the sentence reduced the SBE for the subsequent encounter, whereas simply instantiating the subordinate meaning produced processing difficulty. We compared these reductions in reading times to differences in processing time between dominant-biased repeated and nonrepeated conditions in order to verify that the reductions observed in the subordinate cases did not simply reflect a general repetition benefit. Our results indicate that a strong, subordinate-biasing context can interact during lexical access to overcome the activation from meaning frequency and reduce the SBE during reading. |
Alwine Lenzner; Wolfgang Schnotz; Andreas Müller The role of decorative pictures in learning Journal Article In: Instructional Science, vol. 41, no. 5, pp. 811–831, 2013. @article{Lenzner2013, Three experiments with students from 7th and 8th grade were performed to investigate the effects of decorative pictures in learning as compared to instructional pictures. Pictures were considered instructional when they were primarily informative, and decorative when they were primarily aesthetically appealing. The experiments investigated whether and to what extent decorative pictures affect the learner's distribution of attention, whether they have an effect on the affective and motivational state, and whether they affect learning outcomes. The first experiment indicated, using eye-tracking methodology, that decorative pictures receive only a little initial attention as part of the learner's initial orientation and are largely ignored afterwards, which suggests that they have only a minor distracting effect, if any. The second experiment showed that despite the small amount of attention they receive, decorative pictures seem to induce better mood, alertness, and calmness in learners. The third experiment indicated that decorative pictures did not intensify students' situational interest, but reduced perceived difficulty of the learning material. Regarding outcomes of learning, decorative pictures were altogether neither harmful nor beneficial for learning. However, they moderated the beneficial effect of instructional pictures (in essence, the multimedia effect). The moderating effect was especially pronounced when learners had lower prior knowledge. The findings are discussed from the perspective of cognitive, affective, and motivational psychology. Perspectives for further research are pointed out. |
Carly J. Leonard; Benjamin M. Robinson; Samuel T. Kaiser; Britta Hahn; Clara McClenon; Alexander N. Harvey; Steven J. Luck; James M. Gold Testing sensory and cognitive explanations of the antisaccade deficit in schizophrenia Journal Article In: Journal of Abnormal Psychology, vol. 122, no. 4, pp. 1111–1120, 2013. @article{Leonard2013, Recent research has suggested that people with schizophrenia (PSZ) have sensory deficits, especially in the magnocellular pathway, and this has led to the proposal that dysfunctional sensory processing may underlie higher-order cognitive deficits. Here we test the hypothesis that the antisaccade deficit in PSZ reflects dysfunctional magnocellular processing rather than impaired cognitive processing, as indexed by working memory capacity. This is a plausible hypothesis because oculomotor regions have direct magnocellular inputs, and the stimuli used in most antisaccade tasks strongly activate the magnocellular visual pathway. In the current study, we examined both prosaccade and antisaccade performance in PSZ (N = 22) and matched healthy control subjects (HCS; N = 22) with Gabor stimuli designed to preferentially activate the magnocellular pathway, the parvocellular pathway, or both pathways. We also measured working memory capacity. PSZ exhibited impaired antisaccade performance relative to HCS across stimulus types, with impairment even for stimuli that minimized magnocellular activation. Although both sensory thresholds and working memory capacity were impaired in PSZ, only working memory capacity was correlated with antisaccade accuracy, consistent with a cognitive rather than sensory origin for the antisaccade deficit. |
Ute Leonards; Samantha Stone; Christine Mohr Line bisection by eye and by hand reveal opposite biases Journal Article In: Experimental Brain Research, vol. 228, no. 4, pp. 513–525, 2013. @article{Leonards2013, The vision-for-action literature favours the idea that the motor output of an action, whether manual or oculomotor, leads to similar results regarding object handling. Findings on line bisection performance challenge this idea: healthy individuals bisect lines manually to the left of centre and to the right of centre when using eye fixation. If these opposite biases for manual and oculomotor action reflect more universal compensatory mechanisms that cancel each other out to enhance overall accuracy, one would expect to observe comparable opposite biases for other material. In the present study, we report on three independent experiments in which we tested line bisection (by hand, by eye fixation) not only for solid lines, but also for letter lines; the latter, when bisected manually, are known to result in a rightward bias. Accordingly, we expected a leftward bias for letter lines when bisected via eye fixation. Analysis of bisection biases provided evidence for this idea: manual bisection was more rightward for letter as compared to solid lines, while bisection by eye fixation was more leftward for letter as compared to solid lines. Support for the eye fixation observation was particularly obvious in two of the three studies, for which comparability between eye and hand action was increasingly adjusted (paper-pencil versus touch screen for manual action). These findings question the assumption that ocular motor and manual output are always interchangeable, and instead suggest that at least for some situations ocular motor and manual output biases are orthogonal to each other, possibly balancing each other out. |
Benjamin D. Lester; Paul Dassonville Shifts of visuospatial attention do not cause the spatial distortions of the Roelofs effect Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 4–4, 2013. @article{Lester2013, When a visible frame is offset left or right of an observer's objective midline, subjective midline is pulled toward the frame's center, resulting in an illusion of perceived space known as the Roelofs effect. However, a large frame is not necessary to generate the effect-even a small peripheral stimulus is sufficient, raising the possibility that the effect would be brought about by any stimulus that draws attention away from the midline. To assess the relationship between attention and distortions of perceived space, we adopted a paradigm that included a spatial cue that attracted the participant's attention, and an occasional probe whose location was to be reported. If shifts of attention cause the Roelofs effect, the probe's perceived location should vary with the locus of attention. Exogenous attentional cues caused a Roelofs-like effect, but these cues created an asymmetry in the visual display that may have driven the effect directly. In contrast, there was no mislocation after endogenous cues that contained no asymmetry in the visual display. A final experiment used color-contingent attentional cues to eliminate the confound between cue location and asymmetry in the visual display, and provided a clear demonstration that the Roelofs effect is caused by an asymmetric visual display, independent of any shift of attention. |
Joshua Levy; Tom Foulsham; Alan Kingstone Monsters are people too Journal Article In: Biology Letters, vol. 9, pp. 1–4, 2013. @article{Levy2013, Animals, including dogs, dolphins, monkeys and man, follow gaze. What mediates this bias towards the eyes? One hypothesis is that primates possess a distinct neural module that is uniquely tuned for the eyes of others. An alternative explanation is that configural face processing drives fixations to the middle of peoples' faces, which is where the eyes happen to be located. We distinguish between these two accounts. Observers were presented with images of people, non-human creatures with eyes in the middle of their faces (`humanoids') or creatures with eyes positioned elsewhere (`monsters'). There was a profound and significant bias towards looking early and often at the eyes of humans and humanoids and also, critically, at the eyes of monsters. These findings demonstrate that the eyes, and not the middle of the head, are being targeted by the oculomotor system. |
Matthew W. Lowder; Wonil Choi; Peter C. Gordon Word recognition during reading: The interaction between lexical repetition and frequency Journal Article In: Memory & Cognition, vol. 41, no. 5, pp. 738–751, 2013. @article{Lowder2013a, Memory studies utilizing long-term repetition priming have generally demonstrated that priming is greater for low-frequency than for high-frequency words and that this effect persists if words intervene between the prime and the target. In contrast, word-recognition studies utilizing masked short-term repetition priming have typically shown that the magnitude of repetition priming does not differ as a function of word frequency and does not persist across intervening words. We conducted an eyetracking-while-reading experiment to determine which of these patterns more closely resembles the relationship between frequency and repetition during the natural reading of a text. Frequency was manipulated using proper names that were either high-frequency (e.g., Stephen) or low-frequency (e.g., Dominic). The critical name was later repeated in the sentence, or a new name was introduced. First-pass reading times and skipping rates on the critical name revealed robust repetition-by-frequency interactions, such that the magnitude of the repetition-priming effect was greater for low-frequency than for high-frequency names. In contrast, measures of later processing showed effects of repetition that did not depend on lexical frequency. These results are interpreted within a framework that conceptualizes eye-movement control as being influenced in different ways by lexical- and discourse-level factors. |
Matthew W. Lowder; Peter C. Gordon It's hard to offend the college: Effects of sentence structure on figurative-language processing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 4, pp. 993–1011, 2013. @article{Lowder2013, Previous research has given inconsistent evidence about whether familiar metonyms are more difficult to process than literal expressions. In 2 eye-tracking-while-reading experiments, we tested the hypothesis that the difficulty associated with processing metonyms would depend on sentence structure. Experiment 1 examined comprehension of familiar place-for-institution metonyms (e.g., college) when they were an argument of the main verb and showed that they are more difficult to process in a figurative context (e.g., offended the college) than in a literal context (e.g., photographed the college). Experiment 2 demonstrated that when they are arguments of the main verb, familiar metonyms are more difficult to process than frequency-and-length-matched nouns that refer to people (e.g., offended the leader), but that this difficulty was reduced when the metonym appeared as part of an adjunct phrase (e.g., offended the honor of the college). The results support the view that figurative-language processing is moderated by sentence structure. When the metonym was an argument of the verb, the results were consistent with the pattern predicted by the indirect-access model of figurative-language comprehension. In contrast, when the metonym was part of an adjunct phrase, the results were consistent with the pattern predicted by the direct-access model. |
Sara Lucke; Harald Lachnit; Stephan Koenig; Metin Uengoer The informational value of contexts affects context-dependent learning Journal Article In: Learning and Behavior, vol. 41, no. 3, pp. 285–297, 2013. @article{Lucke2013, In two predictive-learning experiments, we investigated the role of the informational value of contexts for the formation of context-dependent behavior. During Phase 1 of each experiment, participants received either a conditional discrimination in which contexts were relevant (Group Relevant) or a simple discrimination in which contexts were irrelevant (Group Irrelevant). Each experiment also included an ABA renewal procedure. Participants received Z+ in context A during Phase 1, extinction of Z in context B during Phase 2, and were tested with Z in context A during a test phase. In each experiment, extinction of Z proceeded faster and was followed by stronger response recovery in Group Relevant than in Group Irrelevant. In Experiment 2, which included recording of eye-gaze behavior, dwell times on contexts were longer in Group Relevant than in Group Irrelevant. Our results support the idea that relevant contexts receive more attention, leading to stronger context specificity of learning. |
Steven G. Luke; Kiel Christianson The influence of frequency across the time course of morphological processing: Evidence from the transposed-letter effect Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 7, pp. 781–799, 2013. @article{Luke2013, The role that morphology plays in lexical access has been the subject of much debate, as has the influence of word frequency on morphological processing. The effect of frequency on morphological processing across the time course of lexical access was investigated using the transposed-letter effect. The results of two experiments (one masked-priming experiment and one eye-tracking experiment) outline a process in which morphological structure can be detected quickly and independently of frequency. The present study is also the first to show that transpositions that cross morpheme boundaries can be as disruptive as letter substitutions in inflected words, replicating earlier results with derived and compound words. |
Steven G. Luke; John M. Henderson Oculomotor and cognitive control of eye movements in reading: Evidence from mindless reading Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 6, pp. 1230–1242, 2013. @article{Luke2013a, In the present study, we investigated the influence of cognitive factors on eye-movement behaviors in reading. Participants performed two tasks: a normal-reading task, as well as a mindless-reading task in which letters were replaced with unreadable block shapes. This mindless-reading task served as an oculomotor control condition, simulating the visual aspects of reading but removing higher-level linguistic processing. Fixation durations, word skipping, and some regressions were influenced by cognitive factors, whereas eye movements within words appeared to be less open to cognitive control. Implications for models of eye-movement control in reading are discussed. |
Steven G. Luke; Antje Nuthmann; John M. Henderson Eye movement control in scene viewing and reading: Evidence from the stimulus onset delay paradigm Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 1, pp. 10–15, 2013. @article{Luke2013b, The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased linearly with delay duration, and the effect was equivalent for both tasks. Although fixations were longer in scene viewing, the effects of onset delay were highly consistent across tasks. These results suggest that reading and scene viewing share a common mechanism for saccade planning and control. |
Steven G. Luke; Joseph Schmidt; John M. Henderson Temporal oculomotor inhibition of return and spatial facilitation of return in a visual encoding task Journal Article In: Frontiers in Psychology, vol. 4, pp. 400, 2013. @article{Luke2013c, Oculomotor inhibition of return (O-IOR) is an increase in saccade latency prior to an eye movement to a recently fixated location compared to other locations. It has been proposed that this temporal O-IOR may have spatial consequences, facilitating foraging by inhibiting return to previously attended regions. In order to test this possibility, participants viewed arrays of objects and of words while their eye movements were recorded. Temporal O-IOR was observed, with equivalent effects for object and word arrays, indicating that temporal O-IOR is an oculomotor phenomenon independent of array content. There was no evidence for spatial inhibition of return (IOR). Instead, spatial facilitation of return was observed: participants were significantly more likely than chance to make return saccades and to re-fixate just-visited locations. Further, the likelihood of making a return saccade to an object or word was contingent on the amount of time spent viewing that object or word before leaving it. This suggests that, unlike temporal O-IOR, return probability is influenced by cognitive processing. Taken together, these results are inconsistent with the hypothesis that IOR functions as a foraging facilitator. The results also provide strong evidence for a different oculomotor bias that could serve as a foraging facilitator: saccadic momentum, a tendency to repeat the most recently executed saccade program. We suggest that models of visual attention could incorporate saccadic momentum in place of IOR. |
J. R. Lukos; J. Snider; M. E. Hernandez; E. Tunik; S. Hillyard; Howard Poizner Parkinson's disease patients show impaired corrective grasp control and eye-hand coupling when reaching to grasp virtual objects Journal Article In: Neuroscience, vol. 254, pp. 205–221, 2013. @article{Lukos2013, The effect of Parkinson's disease (PD) on hand-eye coordination and corrective response control during reach-to-grasp tasks remains unclear. Moderately impaired PD patients (n = 9) and age-matched controls (n = 12) reached to and grasped a virtual rectangular object, with haptic feedback provided to the thumb and index fingertip by two 3-degree-of-freedom manipulanda. The object rotated unexpectedly on a minority of trials, requiring subjects to adjust their grasp aperture. On half the trials, visual feedback of finger positions disappeared during the initial phase of the reach, when feedforward mechanisms are known to guide movement. PD patients were tested without (OFF) and with (ON) medication to investigate the effects of dopamine depletion and repletion on eye-hand coordination and online corrective response control. We quantified eye-hand coordination by monitoring hand kinematics and eye position during the reach. We hypothesized that if the basal ganglia are important for eye-hand coordination and online corrections to object perturbations, then PD patients tested OFF medication would show reduced eye-hand spans and impoverished arm-hand coordination responses to the perturbation, which would be further exacerbated when visual feedback of the hand was removed. Strikingly, PD patients tracked their hands with their gaze, and their movements became destabilized when having to make online corrective responses to object perturbations, exhibiting pauses and changes in movement direction. These impairments largely remained even when tested in the ON state, despite significant improvement on the Unified Parkinson's Disease Rating Scale.
Our findings suggest that basal ganglia-cortical loops are essential for mediating eye-hand coordination and adaptive online responses for reach-to-grasp movements, and that restoration of tonic levels of dopamine may not be adequate to remediate this coordinative nature of basal ganglia-modulated function. |
Yingyi Luo; Ming Yan; Xiaolin Zhou Prosodic boundaries delay the processing of upcoming lexical information during silent sentence reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 3, pp. 915–930, 2013. @article{Luo2013, Prosodic boundaries can be used to guide syntactic parsing in both spoken and written sentence comprehension, but it is unknown whether the processing of prosodic boundaries affects the processing of upcoming lexical information. In 3 eye-tracking experiments, participants silently read sentences that allow for 2 possible syntactic interpretations when there is no comma or other cue specifying which interpretation should be taken. In Experiments 1 and 2, participants heard a low-pass filtered auditory version of the sentence, which provided a prosodic boundary cue prior to each sentence. In Experiment 1, we found that the boundary cue helped syntactic disambiguation after the cue and led to longer fixation durations on regions right before the cue than on identical regions without prosodic boundary information. In Experiments 2 and 3, we used a gaze-contingent display-change paradigm to manipulate the parafoveal visibility of the first constituent character of the target word after the disambiguating position. Results of Experiment 2 showed that previewing the first character significantly reduced the reading time of the target word, but this preview benefit was greatly reduced when the prosodic boundary cue was introduced at this position. In Experiment 3, instead of the acoustic cues, a visually presented comma was inserted at the disambiguating position in each sentence. Results showed that the comma effect on lexical processing was essentially the same as the effect of prosodic boundary cue. These findings demonstrate that processing a prosodic boundary could impair the processing of parafoveal information during sentence reading. |
Victor Kuperman; Julie A. Van Dyke Reassessing word frequency as a determinant of word recognition for skilled and unskilled readers Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 3, pp. 802–823, 2013. @article{Kuperman2013, The importance of vocabulary in reading comprehension emphasizes the need to accurately assess an individual's familiarity with words. The present article highlights problems with using occurrence counts in corpora as an index of word familiarity, especially when studying individuals varying in reading experience. We demonstrate via computational simulations and norming studies that corpus-based word frequencies systematically overestimate strengths of word representations, especially in the low-frequency range and in smaller-size vocabularies. Experience-driven differences in word familiarity prove to be faithfully captured by the subjective frequency ratings collected from responders at different experience levels. When matched on those levels, this lexical measure explains more variance than corpus-based frequencies in eye-movement and lexical decision latencies to English words, attested in populations with varied reading experience and skill. Furthermore, the use of subjective frequencies removes the widely reported (corpus) Frequency × Skill interaction, showing that more skilled readers are equally faster in processing any word than the less skilled readers, not disproportionally faster in processing lower frequency words. This finding challenges the view that the more skilled an individual is in generic mechanisms of word processing, the less reliant he or she will be on the actual lexical characteristics of that word. |
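Kuperman and Van Dyke's point that raw corpus counts overestimate familiarity for low-frequency words, especially for readers with limited exposure, can be illustrated with a toy simulation. This is a sketch of the general idea only, not the authors' actual simulation; the Zipf-shaped distribution, the 1,000-word vocabulary, and the exposure count are arbitrary assumptions:

```python
import random

def zipf_probs(vocab_size):
    # Zipf-like token probabilities over a frequency-ranked vocabulary
    weights = [1.0 / (rank + 1) for rank in range(vocab_size)]
    total = sum(weights)
    return [w / total for w in weights]

def encounter_counts(probs, exposure, seed=0):
    """Simulate one reader's experience as `exposure` token draws from
    the corpus distribution; return how often each word was met."""
    rng = random.Random(seed)
    counts = [0] * len(probs)
    for _ in range(exposure):
        x, acc = rng.random(), 0.0
        for i, p in enumerate(probs):  # inverse-CDF sampling
            acc += p
            if x < acc:
                counts[i] += 1
                break
    return counts

probs = zipf_probs(1000)
reader = encounter_counts(probs, 2000, seed=1)
# Fraction of the low-frequency half of the vocabulary this reader
# never encountered, despite every word having a nonzero corpus count:
unseen = sum(1 for c in reader[500:] if c == 0) / 500
```

In a run like this, much of the low-frequency half of the vocabulary goes entirely unencountered, so a corpus count assigns those words a representation strength the simulated reader cannot have; subjective familiarity ratings, collected per reader, avoid that mismatch.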
Miyoung Kwon; Anirvan S. Nandy; Bosco S. Tjan Rapid and persistent adaptability of human oculomotor control in response to simulated central vision loss Journal Article In: Current Biology, vol. 23, no. 17, pp. 1663–1669, 2013. @article{Kwon2013, The central region of the human retina, the fovea, provides high-acuity vision. The oculomotor system continually brings targets of interest into the fovea via ballistic eye movements (saccades). Thus, the fovea serves both as the locus for fixations and as the oculomotor reference for saccades. This highly automated process of foveation is functionally critical to vision and is observed from infancy [1, 2]. How would the oculomotor system adjust to a loss of foveal vision (central scotoma)? Clinical observations of patients with central vision loss [3, 4] suggest a lengthy adjustment period [5], but the nature and dynamics of this adjustment remain unclear. Here, we demonstrate that the oculomotor system can spontaneously and rapidly adopt a peripheral locus for fixation and can rereference saccades to this locus in normally sighted individuals whose central vision is blocked by an artificial scotoma. Once developed, the fixation locus is retained over weeks in the absence of the simulated scotoma. Our data reveal a basic guiding principle of the oculomotor system that prefers control simplicity over optimality. We demonstrate the importance of a visible scotoma on the speed of the adjustment and suggest a possible rehabilitation regimen for patients with central vision loss. |
Evelyne Lagrou; Robert J. Hartsuiker; Wouter Duyck Interlingual lexical competition in a spoken sentence context: Evidence from the visual world paradigm Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 5, pp. 963–972, 2013. @article{Lagrou2013, We used the visual world paradigm to examine interlingual lexical competition when Dutch-English bilinguals listened to low-constraining sentences in their nonnative (L2; Experiment 1) and native (L1; Experiment 2) languages. Additionally, we investigated the influence of the degree of cross-lingual phonological similarity. When listening in L2, participants fixated more on competitor pictures of which the onset of the name was phonologically related to the onset of the name of the target in the nontarget language (e.g., fles, "bottle", given target flower) than on phonologically unrelated distractor pictures. Even when they listened in L1, this effect was also observed when the onsets of the names of the target picture (in L1) and the competitor picture (in L2) were phonologically very similar. These findings provide evidence for interlingual competition during the comprehension of spoken sentences, both in L2 and in L1. |
Alexandre Lang; Marine Vernet; Qing Yang; Christophe Orssaud; Alain Londero; Zoï Kapoula Differential auditory-oculomotor interactions in patients with right vs. left sided subjective tinnitus: A saccade study Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 47, 2013. @article{Lang2013, Subjective tinnitus (ST) is a frequent but poorly understood medical condition. Recent studies demonstrated abnormalities in several types of eye movements (smooth pursuit, optokinetic nystagmus, fixation, and vergence) in ST patients. The present study investigates horizontal and vertical saccades in patients with tinnitus lateralized predominantly to the left or to the right side. Compared to left sided ST, tinnitus perceived on the right side impaired almost all the parameters of saccades (latency, amplitude, velocity, etc.) and noticeably the upward saccades. Relative to controls, saccades from both groups were more dysmetric and were characterized by increased saccade disconjugacy (i.e., poor binocular coordination). Although the precise mechanisms linking ST and saccadic control remain unexplained, these data suggest that ST can lead to detrimental auditory, visuomotor, and perhaps vestibular interactions. |
Elke B. Lange; Ralf Engbert Differentiating between verbal and spatial encoding using eye-movement recordings Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 9, pp. 1840–1857, 2013. @article{Lange2013, Visual information processing is guided by an active mechanism generating saccadic eye movements to salient stimuli. Here we investigate the specific contribution of saccades to memory encoding of verbal and spatial properties in a serial recall task. In the first experiment, participants moved their eyes freely without specific instruction. We demonstrate the existence of qualitative differences in eye-movement strategies during verbal and spatial memory encoding. While verbal memory encoding was characterized by shifting the gaze to the to-be-encoded stimuli, saccadic activity was suppressed during spatial encoding. In the second experiment, participants were required to suppress saccades by fixating centrally during encoding or to make precise saccades onto the memory items. Active suppression of saccades had no effect on memory performance, but tracking the upcoming stimuli decreased memory performance dramatically in both tasks, indicating a resource bottleneck between display-controlled saccadic control and memory encoding. We conclude that optimized encoding strategies for verbal and spatial features are underlying memory performance in serial recall, but such strategies work on an involuntary level only and do not support memory encoding when they are explicitly required by the task. |
Junpeng Lao; Luca Vizioli; Roberto Caldara Culture modulates the temporal dynamics of global/local processing Journal Article In: Culture and Brain, vol. 1, no. 2-4, pp. 158–174, 2013. @article{Lao2013, Cultural differences in the way individuals from Western Caucasian (WC) and East Asian (EA) societies perceive and attend to visual information have been consistently reported in recent years. WC observers favor and perceive most efficiently the salient, local visual information by directing attention to focal objects. In contrast, EA observers show a bias towards global information, by preferentially attending elements in the background. However, the underlying neural mechanisms and the temporal dynamics of this striking cultural contrast have yet to be clarified. The combination of Navon figures, which contain both global and local features, and the measurement of neural adaptation constitute an ideal way to probe this issue. We recorded the electrophysiological signals of WC and EA observers while they actively matched culturally neutral geometric Navon shapes. In each trial, participants sequentially viewed and categorized an adapter shape followed by a target shape, as being either: identical; global congruent; local congruent; and different. We quantified the repetition suppression, a reduction in neural activity in stimulus sensitive regions following stimulus repetition, using a single-trial approach. A robust data-driven spatio-temporal analysis revealed at 80 ms a significant interaction between the culture of the observers and shape adaptation. EA observers showed sensitivity to global congruency on the attentional P1 component, whereas WC observers showed discrimination for global shapes at later stages. Our data revealed an early sensitivity to global and local shape categorization, which is modulated by culture. This neural tuning could underlie more complex behavioral differences observed across human populations. |
Jochen Laubrock; Anke Cajar; Ralf Engbert Control of fixation duration during scene viewing by interaction of foveal and peripheral processing Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 1–20, 2013. @article{Laubrock2013, Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by varying only a few parameters that were affected by the specific filtering conditions. |
C. Hemptinne; Adrian Ivanoiu; Philippe Lefèvre; Marcus Missal How does Parkinson's disease and aging affect temporal expectation and the implicit timing of eye movements? Journal Article In: Neuropsychologia, vol. 51, no. 2, pp. 340–348, 2013. @article{Hemptinne2013, Anticipatory eye movements are often evoked by the temporal expectation of an upcoming event. Temporal expectation is based on implicit timing about when a future event could occur. Implicit timing emerges from observed temporal regularities in a changing stimulus without any voluntary estimate of elapsed time, unlike explicit timing. The neural bases of explicit and implicit timing are likely different. It has been shown that the basal ganglia (BG) play a central role in explicit timing. In order to determine the influence of BG in implicit timing, we investigated the influence of early Parkinson's disease (PD) and aging on the latency of anticipatory eye movements. We hypothesized that a deficit of implicit timing should yield inadequate temporal expectations, and consequently abnormally timed anticipatory eye movements compared with age-matched controls. To test this hypothesis, we used an oculomotor paradigm where anticipation of a salient target event plays a central role. Participants pursued a visual target that moved along a circular path at a constant velocity. After a randomly short (1200 ms) or long (2400 ms) forward path, the target reversed direction, returned to its starting position and stopped. Target motion reversal caused an abrupt 'slip' of the pursued target image on the retina and was a particularly salient event evoking anticipatory eye movements. Anticipatory eye movements were less frequent in PD patients. However, the timing of anticipation of target motion reversal was statistically similar in PD patients and control subjects. Other eye movements showed statistically significant differences between PD and controls, but these differences could be attributed to other factors. We conclude that not all anticipatory eye movements are similarly impaired in PD and that implicit timing of salient events seems largely unaffected by this disease. The results support the hypothesis that implicit and explicit timing are differently affected by BG dysfunction. |
Maria De Luca; Maria Pontillo; Silvia Primativo; Donatella Spinelli; Pierluigi Zoccolotti The eye-voice lead during oral reading in developmental dyslexia Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 696, 2013. @article{DeLuca2013, In reading aloud, the eye typically leads over voice position. In the present study, eye movements and voice utterances were simultaneously recorded and tracked during the reading of a meaningful text to evaluate the eye-voice lead in 16 dyslexic and 16 same-age control readers. Dyslexic children were slower than control peers in reading texts. Their slowness was characterized by a great number of silent pauses and sounding-out behaviors and a small lengthening of word articulation times. Regarding eye movements, dyslexic readers made many more eye fixations (and generally smaller rightward saccades) than controls. Eye movements and voice (which were shifted in time because of the eye-voice lead) were synchronized in dyslexic readers as well as controls. As expected, the eye-voice lead was significantly smaller in dyslexic than control readers, confirming early observations by Buswell (1921) and Fairbanks (1937). The eye-voice lead was significantly correlated with several eye movements and voice parameters, particularly number of fixations and silent pauses. The difference in performance between dyslexic and control readers across several eye and voice parameters was expressed by a ratio of about 2. We propose that referring to proportional differences allows for a parsimonious interpretation of the reading deficit in terms of a single deficit in word decoding. The possible source of this deficit may call for visual or phonological mechanisms, including Goswami's temporal sampling framework. |
Jelmer P. De Vries; Ignace T. C. Hooge; Alexander H. Wertheim; Frans A. J. Verstraten Background, an important factor in visual search Journal Article In: Vision Research, vol. 86, pp. 128–138, 2013. @article{DeVries2013, The ability to detect an object depends on the contrast between the object and its background. Despite this, many models of visual search rely solely on the properties of target and distractors, and do not take the background into account. Yet, both target and distractors have their individual contrasts with the background. These contrasts generally differ, because the target and distractors are different in at least one feature. Therefore, background is likely to play an important role in visual search. In three experiments we manipulated the properties of the background (luminance, orientation and spatial frequency, respectively) while keeping the target and distractors constant. In the first experiment, in which target and distractors had a different luminance, changing the background luminance had an extensive effect on search times. When background luminance was in between that of the target and distractors, search times were always short. Interestingly, when the background was darker than both the target and the distractors, search times were much longer than when the background was lighter. Manipulating orientation and spatial frequency of the background, on the other hand, resulted in search times that were longest for small target-background differences. Thus, background plays an important role in search. This role depends on the individual contrast of both target and distractors with the background and the type of feature contrast (luminance, orientation or spatial frequency). |
Thomas Deffieux; Youliana Younan; Nicolas Wattiez; Mickael Tanter; Pierre Pouget; Jean-François Aubry Low-intensity focused ultrasound modulates monkey visuomotor behavior Journal Article In: Current Biology, vol. 23, pp. 2430–2433, 2013. @article{Deffieux2013, In vivo feasibility of using low-intensity focused ultrasound (FUS) to transiently modulate the function of regional brain tissue has been recently tested in anesthetized lagomorphs [1] and rodents [2-4]. Hypothetically, ultrasonic stimulation of the brain possesses several advantages [5]: it does not necessitate surgery or genetic alteration but could ultimately confer spatial resolutions superior to other noninvasive methods. Here, we gauged the ability of noninvasive FUS to causally modulate high-level cognitive behavior. Therefore, we examined how FUS might interfere with prefrontal activity in two awake rhesus macaque monkeys that had been trained to perform an antisaccade (AS) task. We show that ultrasound significantly modulated AS latencies. Such effects proved to be dependent on the FUS hemifield of stimulation (relative latency increases were greatest for ipsilateral AS). These results are interpreted in terms of a modulation of saccade inhibition to the contralateral visual field due to the disruption of processing across the frontal eye fields. Our study demonstrates for the first time the feasibility of using FUS stimulation to causally modulate behavior in the awake nonhuman primate brain. This result supports the use of this approach to study brain function. Neurostimulation with ultrasound could be used for exploratory and therapeutic purposes noninvasively, with potentially unprecedented spatial resolution. |
Louis F. Dell'Osso; Jonathan B. Jacobs Normal pursuit-system limitations — first discovered in infantile nystagmus syndrome Journal Article In: Journal of Eye Movement Research, vol. 6, no. 1, pp. 1–24, 2013. @article{DellOsso2013, Infantile nystagmus syndrome (INS) patients occasionally have impaired pursuit. Model and patient data identified relative timing between target motion initiation and INS-waveform saccades as the cause. We used a new stimulus, the “step-pause-ramp” (SPR), to induce saccades proximal to target-velocity onset and test their effect on normal pursuit. Our OMS model predicted that proximal saccades impaired normal ramp responses, as in INS. Eye movements of subjects were calibrated monocularly and recorded binocularly; data were analyzed using OMtools software. Proximal saccades caused lengthened target acquisition times and steady-state position errors, confirming the model's predictions. Spontaneous pursuit oscillation supported the hypothesis that INS is caused by loss of smooth-pursuit damping. Smooth pursuit may be impaired by saccades overlapping target-motion onset. |
Alixia Demichelis; Gérard Olivier; Alain Berthoz Motor transfer from map ocular exploration to locomotion during spatial navigation from memory Journal Article In: Experimental Brain Research, vol. 224, no. 4, pp. 605–611, 2013. @article{Demichelis2013, Spatial navigation from memory can rely on two different strategies: a mental simulation of a kinesthetic spatial navigation (egocentric route strategy) or visual-spatial memory using a mental map (allocentric survey strategy). We hypothesized that a previously performed "oculomotor navigation" on a map could be used by the brain to perform a locomotor memory task. Participants were instructed to (1) learn a path on a map through a sequence of vertical and horizontal eye movements and (2) walk on the slabs of a "magic carpet" to recall this path. The main results showed that the anisotropy of ocular movements (horizontal ones being more efficient than vertical ones) influenced participants' performance when they changed direction on the central slab of the magic carpet. These data suggest that, to find their way through locomotor space, subjects mentally repeated their past ocular exploration of the map, and this visuo-motor memory was used as a template for the locomotor performance. |
Virginie Desestret; Nathalie Streichenberger; Muriel T. N. Panouillères; Denis Pélisson; B. Plus; Charles Duyckaerts; Dennis K. Burns; Christian Scheiber; Alain Vighetto; Caroline Tilikete An elderly woman with difficulty reading and abnormal eye movements Journal Article In: Journal of Neuro-Ophthalmology, vol. 33, no. 3, pp. 296–301, 2013. @article{Desestret2013, A 73-year-old woman was evaluated in our neuro-ophthalmology clinic with a 1-year history of progressive difficulty reading. The patient's visual acuity, pupillary reactions to light and near stimulation, visual fields, and fundi were normal. Examination of her eye movements revealed a supranuclear vertical gaze abnormality, characterized by lack of upward saccades but intact downward saccades. The patient also had difficulty initiating voluntary horizontal saccades on command, especially leftward ones, but reactive horizontal saccades were relatively well preserved. She was able to follow a pencil light moved by the examiner using small saccades (saccadic smooth pursuit) and her vestibulo-ocular reflex (VOR) was intact. She had apraxia of lid closure. The patient had no cognitive deficit, behavioral or social disturbance, aphasia, alexia, limb apraxia, postural ataxia, pyramidal signs, or parkinsonism. |
Joost C. Dessing; Michael Vesia; J. Douglas Crawford The role of areas MT+/V5 and SPOC in spatial and temporal control of manual interception: An rTMS study Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 15, 2013. @article{Dessing2013, Manual interception, such as catching or hitting an approaching ball, requires the hand to contact a moving object at the right location and at the right time. Many studies have examined the neural mechanisms underlying the spatial aspects of goal-directed reaching, but the neural basis of the spatial and temporal aspects of manual interception are largely unknown. Here, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the role of the human middle temporal visual motion area (MT+/V5) and superior parieto-occipital cortex (SPOC) in the spatial and temporal control of manual interception. Participants were required to reach-to-intercept a downward moving visual target that followed an unpredictably curved trajectory, presented on a screen in the vertical plane. We found that rTMS to MT+/V5 influenced interceptive timing and positioning, whereas rTMS to SPOC only tended to increase the spatial variance in reach end points for selected target trajectories. These findings are consistent with theories arguing that distinct neural mechanisms contribute to spatial, temporal, and spatiotemporal control of manual interception. |
Saurabh Dhawan; Heiner Deubel; Donatas Jonikaitis Inhibition of saccades elicits attentional suppression Journal Article In: Journal of Vision, vol. 13, no. 6, pp. 1–12, 2013. @article{Dhawan2013, Visuospatial attention has been shown to have a central role in planning and generation of saccades, but what role, if any, it plays in inhibition of saccades remains unclear. In this study, we used an oculomotor delayed match- or nonmatch-to-sample task in which a cued location has to be encoded and memorized for one of two very different goals: to plan a saccade to it or to avoid making a saccade to it. We measured the spatial allocation of attention during the delay and found that while marking a location as a future saccade target resulted in an attentional benefit at that location, marking it as forbidden to saccades led to an attentional cost. Additionally, saccade trajectories were found to deviate away more from the "don't look" location than from a saccade-irrelevant distractor, confirming greater inhibition of an actively forbidden location in oculomotor programming. Our finding that attention is suppressed at locations forbidden to saccades confirms and complements the claim of a selective and obligatory coupling between saccades and attention: saccades at the memorized location could neither be planned nor suppressed independent of a corresponding effect on attentional performance. |
L. L. Di Stasi; M. Marchitto; A. Antolí; J. J. Cañas Saccadic peak velocity as an alternative index of operator attention: A short review Journal Article In: European Review of Applied Psychology, vol. 63, no. 6, pp. 335–343, 2013. @article{DiStasi2013, Introduction: Automation research has identified the need to monitor operator attentional states in real time as a basis for determining the most appropriate type and level of automated assistance for operators doing complex tasks. Objective: The development of a methodology able to detect on-line variations in operator attentional state could represent a good starting point for solving this critical issue. Results: We present a short review of the literature on different indices of attentional state and discuss a series of experiments that demonstrate the validity and sensitivity of a specific eye movement index: saccadic peak velocity (PV). PV was able to detect variations in mental state during complex and ecological tasks, ranging from simulated air traffic control tasks to driving simulator sessions. Conclusion: This research could provide several guidelines for designing adaptive systems (able to allocate tasks between operators and machines dynamically) and early fatigue-and-distraction warning systems to reduce accident risk. |
Leandro Luigi Di Stasi; Adoración Antolí; José J. Cañas Evaluating mental workload while interacting with computer-generated artificial environments Journal Article In: Entertainment Computing, vol. 4, no. 1, pp. 63–69, 2013. @article{DiStasi2013a, The need to evaluate user behaviour and cognitive efforts when interacting with complex simulations plays a crucial role in many information and communications technologies. The aim of this paper is to propose the use of eye-related measures as indices of mental workload in complex tasks. An experiment was conducted using the FireChief® microworld in which user mental workload was manipulated by changing the interaction strategy required to perform a common task. There were significant effects of the attentional state of users on visual scanning behavior. Longer fixations were found for the more demanding strategy, slower saccades were found as the time-on-task increased, and pupil diameter decreased when an environmental change was introduced. Questionnaire and performance data converged with the psychophysiological ones. These results provide additional empirical support for the ability of some eye-related indices to discriminate variations in the attentional state of the user in visual-dynamic complex tasks and show their potential diagnostic capacity in the field of applied ergonomics. |
Leandro Luigi Di Stasi; Andrés Catena; José J. Cañas; Stephen L. Macknik; Susana Martinez-Conde Saccadic velocity as an arousal index in naturalistic tasks Journal Article In: Neuroscience and Biobehavioral Reviews, vol. 37, no. 5, pp. 968–975, 2013. @article{DiStasi2013b, Experimental evidence indicates that saccadic metrics vary with task difficulty and time-on-task in naturalistic scenarios. We explore historical and recent findings on the correlation of saccadic velocity with task parameters in clinical, military, and everyday situations, and its potential role in ergonomics. We moreover discuss the hypothesis that changes in saccadic velocity indicate variations in sympathetic nervous system activation; that is, variations in arousal. |
Leandro Luigi Di Stasi; Michael B. Mccamy; Andrés Catena; Stephen L. Macknik; José J. Cañas; Susana Martinez-Conde Microsaccade and drift dynamics reflect mental fatigue Journal Article In: European Journal of Neuroscience, vol. 38, no. 3, pp. 2389–2398, 2013. @article{DiStasi2013c, Our eyes are always in motion. Even during periods of relative fixation we produce so-called 'fixational eye movements', which include microsaccades, drift and tremor. Mental fatigue can modulate saccade dynamics, but its effects on microsaccades and drift are unknown. Here we asked human subjects to perform a prolonged and demanding visual search task (a simplified air traffic control task), with two difficulty levels, under both free-viewing and fixation conditions. Saccadic and microsaccadic velocity decreased with time-on-task whereas drift velocity increased, suggesting that ocular instability increases with mental fatigue. Task difficulty did not influence eye movements despite affecting reaction times, performance errors and subjective complexity ratings. We propose that variations in eye movement dynamics with time-on-task are consistent with the activation of the brain's sleep centers in correlation with mental fatigue. Covariation of saccadic and microsaccadic parameters moreover supports the hypothesis of a common generator for microsaccades and saccades. We conclude that changes in fixational and saccadic dynamics can indicate mental fatigue due to time-on-task, irrespective of task complexity. These findings suggest that fixational eye movement dynamics have the potential to signal the nervous system's activation state. |
Joao C. Dias; Paul Sajda; J. P. Dmochowski; Lucas C. Parra EEG precursors of detected and missed targets during free-viewing search Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 1–19, 2013. @article{Dias2013, When scanning a scene, the target of our search may be in plain sight and yet remain unperceived. Conversely, at other times the target may be perceived in the periphery prior to fixation. There is ample behavioral and neurophysiological evidence to suggest that in some constrained visual-search tasks, targets are detected prior to fixational eye movements. However, limited human data are available during unconstrained search to determine the time course of detection, the brain areas involved, and the neural correlates of failures to detect a foveated target. Here, we recorded and analyzed electroencephalographic (EEG) activity during free-viewing visual search, varying the task difficulty to compare neural signatures for detected and unreported ("missed") targets. When carefully controlled to remove eye-movement-related potentials, saccade-locked EEG shows that: (a) "Easy" targets may be detected as early as 150 ms prior to foveation, as indicated by a premotor potential associated with a button response; (b) object-discriminating occipital activity emerges during the saccade to target; and (c) success and failures to detect a target are accompanied by a modulation in alpha-band power over fronto-central areas as well as altered saccade dynamics. Taken together, these data suggest that target detection during free viewing can begin prior to and continue during a saccade, with failure or success in reporting a target possibly resulting from inhibition or activation of fronto-central processing areas associated with saccade control. |
Christopher A. Dickinson; Gregory J. Zelinsky New evidence for strategic differences between static and dynamic search tasks: An individual observer analysis of eye movements Journal Article In: Frontiers in Psychology, vol. 4, pp. 8, 2013. @article{Dickinson2013, Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers' oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d' values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. |
Brian W. Dillon; Alan Mishler; Shayne Sloggett; Colin Phillips Contrasting interference profiles for agreement and anaphora: Experimental and modeling evidence Journal Article In: Journal of Memory and Language, vol. 69, no. 2, pp. 85–103, 2013. @article{Dillon2013, We investigated the relationship between linguistic representation and memory access by comparing the processing of two linguistic dependencies that require comprehenders to check that the subject of the current clause has the correct morphological features: subject–verb agreement and reflexive anaphors in English. In two eye-tracking experiments we examined the impact of structurally illicit noun phrases on the computation of reflexive and subject–verb agreement. Experiment 1 directly compared the two dependencies within participants. Results show a clear difference in the intrusion profile associated with each dependency: agreement resolution displays clear intrusion effects in comprehension (as found by Pearlmutter et al., 1999, and Wagers et al., 2009), but reflexives show no such intrusion effect from illicit antecedents (Sturt, 2003; Xiang et al., 2009). Experiment 2 replicated the lack of intrusion for reflexives, confirming the reliability of the pattern and examining a wider range of feature combinations. In addition, we present modeling evidence suggesting that the reflexive results are best captured by a memory retrieval mechanism that uses primarily syntactic information to guide retrieval of the anaphor's antecedent, in contrast to the mixed morphological and syntactic cues used to resolve subject–verb agreement dependencies. Despite the fact that agreement and reflexive dependencies are subject to a similar morphological agreement constraint, in online processing comprehenders appear to implement this constraint in distinct ways for the two dependencies. |
Steve DiPaola; Caitlin Riebe; James T. Enns Following the masters: Portrait viewing and appreciation is guided by selective detail Journal Article In: Perception, vol. 42, no. 6, pp. 608–630, 2013. @article{Dipaola2013, A painted portrait differs from a photo in that selected regions are often rendered in much sharper detail than other regions. Artists believe these choices guide viewer gaze and influence their appreciation of the portrait, but these claims are difficult to test because increased portrait detail is typically associated with greater meaning, stronger lighting, and a more central location in the composition. In three experiments we monitored viewer gaze and recorded viewer preferences for portraits rendered with a parameterised non-photorealistic technique to mimic the style of Rembrandt (DiPaola, 2009, International Journal of Art and Technology, 2, 82–93). Results showed that viewer gaze was attracted to and held longer by regions of relatively finer detail (experiment 1), and also by textural highlighting (experiment 2), and that artistic appreciation increased when portraits strongly biased gaze (experiment 3). These findings have implications for understanding both human vision science and visual art. |
Michael Dorr; Peter J. Bex Peri-saccadic natural vision Journal Article In: Journal of Neuroscience, vol. 33, no. 3, pp. 1211–1217, 2013. @article{Dorr2013, The fundamental role of the visual system is to guide behavior in natural environments. To optimize information transmission, many animals have evolved a non-homogeneous retina and serially sample visual scenes by saccadic eye movements. Such eye movements, however, introduce high-speed retinal motion and decouple external and internal reference frames. Until now, these processes have only been studied with unnatural stimuli, eye movement behavior, and tasks. These experiments confound retinotopic and geotopic coordinate systems and may probe a non-representative functional range. Here we develop a real-time, gaze-contingent display with precise spatiotemporal control over high-definition natural movies. In an active condition, human observers freely watched nature documentaries and indicated the location of periodic narrow-band contrast increments relative to their gaze position. In a passive condition under central fixation, the same retinal input was replayed to each observer by updating the video's screen position. Comparison of visual sensitivity between conditions revealed three mechanisms that the visual system has adapted to compensate for peri-saccadic vision changes. Under natural conditions we show that reduced visual sensitivity during eye movements can be explained simply by the high retinal speed during a saccade without recourse to an extra-retinal mechanism of active suppression; we give evidence for enhanced sensitivity immediately after an eye movement indicative of visual receptive fields remapping in anticipation of forthcoming spatial structure; and we demonstrate that perceptual decisions can be made in world rather than retinal coordinates. |
Trafton Drew; Melissa L.-H. Võ; Alex Olwal; Francine Jacobson; Steven E. Seltzer; Jeremy M. Wolfe Scanners and drillers: Characterizing expert visual search through volumetric images Journal Article In: Journal of Vision, vol. 13, no. 10, pp. 1–13, 2013. @article{Drew2013, Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a "stack" of 2-D chest CT "slices." At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: "drilling" and "scanning." Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. |
Feng Du; Yue Qi; Xingshan Li; Kan Zhang Dual processes of oculomotor capture by abrupt onset: Rapid involuntary capture and sluggish voluntary prioritization Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e80678, 2013. @article{Du2013, The present study showed that there are two distinctive processes underlying oculomotor capture by abrupt onset. When a visual mask between the cue and the target eliminates the unique luminance transient of an onset, the onset still attracts attention in a top-down fashion. This memory-based prioritization of onset is voluntarily controlled by the knowledge of target location. But when there is no visual mask between the cue and the target, the onset captures attention mainly in a bottom-up manner. This transient-driven capture of onset is involuntary because it occurs even when the onset is completely irrelevant to the target location. In addition, the present study demonstrated distinctive temporal characteristics for these two processes. The involuntary capture driven by luminance transients is rapid and brief, whereas the memory- based voluntary prioritization of onset is more sluggish and long-lived. |
Stéphanie Ducrot; Joël Pynte; Alain Ghio; Bernard Lété Visual and linguistic determinants of the eyes' initial fixation position in reading development Journal Article In: Acta Psychologica, vol. 142, no. 3, pp. 287–298, 2013. @article{Ducrot2013, Two eye-movement experiments with one hundred and seven first- through fifth-grade children were conducted to examine the effects of visuomotor and linguistic factors on the recognition of words and pseudowords presented in central vision (using a variable-viewing-position technique) and in parafoveal vision (shifted to the left or right of a central fixation point). For all groups of children, we found a strong effect of stimulus location, in both central and parafoveal vision. This effect corresponds to the children's apparent tendency, for peripherally located targets, to reach a position located halfway between the middle and the left edge of the stimulus (preferred viewing location, PVL), whether saccading to the right or left. For centrally presented targets, refixation probability and lexical-decision time were the lowest near the word's center, suggesting an optimal viewing position (OVP). The viewing-position effects found here were modulated (1) by print exposure, both in central and parafoveal vision; and (2) by the intrinsic qualities of the stimulus (lexicality and word frequency) for targets in central vision but not for parafoveally presented targets. |
Carolin Dudschig; Jan L. Souman; Martin Lachmair; Irmgard de la Vega; Barbara Kaup Reading "sun" and looking up: The influence of language on saccadic eye movements in the vertical dimension Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e56872, 2013. @article{Dudschig2013, Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even if the words do not directly transfer directional information. |
Magda L. Dumitru; Gitte H. Joergensen; Alice G. Cruickshank; Gerry T. M. Altmann Language-guided visual processing affects reasoning: The role of referential and spatial anchoring Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 562–571, 2013. @article{Dumitru2013, Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process. |
Jon Andoni Duñabeitia; María Dimitropoulou; Adelina Estévez; Manuel Carreiras The influence of reading expertise in mirror-letter perception: Evidence from beginning and expert readers Journal Article In: Mind, Brain, and Education, vol. 7, no. 2, pp. 124–135, 2013. @article{Dunabeitia2013, The visual word recognition system recruits neuronal systems originally developed for object perception which are characterized by orientation insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to "unlearn" this natural tolerance to mirror reversals in order to efficiently discriminate letters and words. Therefore, it is supposed that this unlearning process takes place in a gradual way and that reading expertise modulates mirror-letter discrimination. However, to date no supporting evidence for this has been obtained. We present data from an eye-movement study that investigated the degree of sensitivity to mirror-letters in a group of beginning readers and a group of expert readers. Participants had to decide which of the two strings presented on a screen corresponded to an auditorily presented word. Visual displays always included the correct target word and one distractor word. Results showed that those distractors that were the same as the target word except for the mirror lateralization of two internal letters attracted participants' attention more than distractors created by replacement of two internal letters. Interestingly, the time course of the effects was found to be different for the two groups, with beginning readers showing a greater tolerance (decreased sensitivity) to mirror-letters than expert readers. Implications of these findings are discussed within the framework of preceding evidence showing how reading expertise modulates letter identification. |
Paola E. Dussias; Jorge R. Valdés Kroff; Rosa E. Guzzardo Tamargo; Chip Gerfen When gender and looking go hand in hand Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 353–387, 2013. @article{Dussias2013, In a recent study, Lew-Williams and Fernald (2007) showed that native Spanish speakers use grammatical gender information encoded in Spanish articles to facilitate the processing of upcoming nouns. In this article, we report the results of a study investigating whether grammatical gender facilitates noun recognition during second language (L2) processing. Sixteen monolingual Spanish participants (control group) and 18 English-speaking learners of Spanish (evenly divided into high and low Spanish proficiency) saw two-picture visual scenes in which items matched or did not match in gender. Participants' eye movements were recorded while they listened to 28 sentences in which masculine and feminine target items were preceded by an article that agreed in gender with the two pictures or agreed only with one of the pictures. An additional group of 15 Italian learners of Spanish was tested to examine whether the presence of gender in the first language (L1) modulates the degree to which gender is used during L2 processing. Data were analyzed by comparing the proportion of eye fixations on the objects in each condition. Monolingual Spanish speakers looked sooner at the referent on different-gender trials than on same-gender trials, replicating results reported in past literature. Italian-Spanish bilinguals exhibited a gender anticipatory effect, but only for the feminine condition. For the masculine condition, participants waited to hear the noun before identifying the referent. Like the Spanish monolinguals, the highly proficient English-Spanish speakers showed evidence of using gender information during online processing, whereas the less proficient learners did not. The results suggest that both proficiency in the L2 and similarities between the L1 and the L2 modulate the usefulness of morphosyntactic information during speech processing. |
R. Becket Ebitz; Karli K. Watson; Michael L. Platt Oxytocin blunts social vigilance in the rhesus macaque Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 28, pp. 11630–11635, 2013. @article{Ebitz2013, Exogenous application of the neuromodulatory hormone oxytocin (OT) promotes prosocial behavior and can improve social function. It is unclear, however, whether OT promotes prosocial behavior per se, or whether it facilitates social interaction by reducing a state of vigilance toward potential social threats. To disambiguate these two possibilities, we exogenously delivered OT to male rhesus macaques, which have a characteristic pattern of species-typical social vigilance, and examined their performance in three social attention tasks. We first determined that, in the absence of competing task demands or goals, OT increased attention to faces and eyes, as in humans. By contrast, OT reduced species-typical social vigilance for unfamiliar, dominant, and emotional faces in two additional tasks. OT eliminated the emergence of a typical state of vigilance when dominant face images were available during a social image choice task. Moreover, OT improved performance on a reward-guided saccade task, despite salient social distractors: OT reduced the interference of unfamiliar faces, particularly emotional ones, when these faces were task irrelevant. Together, these results demonstrate that OT suppresses vigilance toward potential social threats in the rhesus macaque. We hypothesize that a basic role for OT in regulating social vigilance may have facilitated the evolution of prosocial behaviors in humans. |
Miguel P. Eckstein; Stephen C. Mack; Dorion B. Liston; Lisa Bogush; Randolf Menzel; Richard J. Krauzlis Rethinking human visual attention: Spatial cueing effects and optimality of decisions by honeybees, monkeys and humans Journal Article In: Vision Research, vol. 85, pp. 5–9, 2013. @article{Eckstein2013, Visual attention is commonly studied by using visuo-spatial cues indicating probable locations of a target and assessing the effect of the validity of the cue on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more real world conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller in size than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys and bees show the smallest effect. The cueing effect and overall performance of the honeybees allows rejection of the models in which the bees are ignoring the cue, following the cue and disregarding stimuli to be discriminated, or adopting a probability matching strategy. 
Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength of an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer for the case of stimulus strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings for the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans. |
Mary Ann Evans; Jean Saint-Aubin Vocabulary acquisition without adult explanations in repeated shared book reading: An eye movement study Journal Article In: Journal of Educational Psychology, vol. 105, no. 3, pp. 596–608, 2013. @article{Evans2013, When preschoolers listen to storybooks, are their eye movements related to their vocabulary acquisition in this context? This study addressed this question with 36 four-year-old French-speaking participants by assessing their general receptive vocabulary knowledge and knowledge of low-frequency words in 3 storybooks. These books were read verbatim to them 7 times over a 2-week interval. At the first and seventh reading, children's eye movements were tracked. Results revealed considerable stability in eye movements, with children spending the vast majority of their viewing time on the illustrations at both time points. Children made modest vocabulary gains on the words in the books, and as expected, these gains were related to their general receptive vocabulary. Most importantly, viewing time during the first reading on depictions of corresponding nouns in the story partially mediated the advantage that overall receptive vocabulary held. As such, this study points to active matching of picture with text during shared book reading and children's processing style when listening to stories as a mechanism for vocabulary acquisition. |
Ashley Farris-Trimble; Bob McMurray Test–retest reliability of eye tracking in the visual world paradigm for the study of real-time spoken word recognition Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 56, no. 4, pp. 1328, 2013. @article{FarrisTrimble2013, Purpose: Researchers have begun to use eye tracking in the visual world paradigm (VWP) to study clinical differences in language processing, but the reliability of such laboratory tests has rarely been assessed. In this article, the authors assess test-retest reliability of the VWP for spoken word recognition. Methods: Participants performed an auditory VWP task in repeated sessions and a visual-only VWP task in a third session. The authors performed correlation and regression analyses on several parameters to determine which reflect reliable behavior and which are predictive of behavior in later sessions. Results: Results showed that the fixation parameters most closely related to timing and degree of fixations were moderately-to-strongly correlated across days, whereas the parameters related to rate of increase or decrease of fixations to particular items were less strongly correlated. Moreover, when including factors derived from the visual-only task, the performance of the regression model was at least moderately correlated with Day 2 performance on all parameters (R > .30). Conclusion: The VWP is stable enough (with some caveats) to serve as an individual measure. These findings suggest guidelines for future use of the paradigm and for areas of improvement in both methodology and analysis. |
Joost Felius; Zainab A. Muhanna Visual deprivation and foveation characteristics both underlie visual acuity deficits in idiopathic infantile nystagmus Journal Article In: Investigative Ophthalmology & Visual Science, vol. 54, no. 5, pp. 3520–3525, 2013. @article{Felius2013, PURPOSE. Children with idiopathic infantile nystagmus (IIN) exhibit visual acuity deficits that have been modeled in terms of foveation characteristics of the nystagmus waveform. Here we present evidence for an additional component of acuity loss associated with the deprivation experienced during the sensitive period of visual development. METHODS. Binocular grating visual acuity and eye movement recordings were obtained from 56 children with IIN (age 4.8 ± 3.2 years) and documented waveform history from longitudinal visits. Visual acuity was modeled in terms of foveation characteristics (Nystagmus Optimal Fixation Function, NOFF) and of each child's time course of pendular nystagmus during the sensitive period. RESULTS. Mean visual acuity was 0.25 ± 0.19 logMAR below age norms, and the mean foveation fraction was 0.28 (NOFF = −0.9 ± 2.3 logits). Nystagmus had a median onset at age 3 months and transitioned to waveforms with extended foveation at age 35 months. The best fit of the model showed the following: Poor foveation (0.01 foveation fraction) was associated with 0.60 logMAR acuity deficit; this deficit gradually reduced to zero for increasingly better foveation; pendular nystagmus during each decile of the sensitive period was associated with an additional 0.022 logMAR deficit. The model accounted for 57% of the variance in visual acuity and provided a better fit than either component alone. CONCLUSIONS. Visual acuity in IIN is explained better if, besides the child's foveation characteristics, an additional component is taken into account representing the nystagmus-induced visual deprivation during the sensitive period. 
These findings may have implications for the timing of treatment decisions in children with IIN. |
Gerardo Fernández; Pablo Mandolesi; Nora P. Rotstein; Oscar Colombo; Osvaldo Agamennoni; Luis E. Politi Eye movement alterations during reading in patients with early Alzheimer disease Journal Article In: Investigative Ophthalmology & Visual Science, vol. 54, no. 13, pp. 8345–8352, 2013. @article{Fernandez2013, Purpose: Eye movements follow a reproducible pattern during normal reading. Each eye movement ends up in a fixation point, which allows the brain to process the incoming information and to program the following saccade. Alzheimer disease (AD) produces eye movement abnormalities and disturbances in reading. In this work we investigated whether eye movement alterations during reading might be already present at very early stages of the disease. Methods: Twenty female and male adult patients with the diagnosis of probable AD and 20 age-matching individuals with no evidence of cognitive decline participated in the study. Participants were seated in front of a 20-inch LCD monitor and single sentences were presented on it. Eye movements were recorded with an eyetracker, with a sampling rate of 1000 Hz and an eye position resolution of 20 seconds of arc. Results: Analysis of eye movements during reading revealed that, compared with healthy individuals (controls), patients with early AD made fewer single fixations, more first- and second-pass fixations, and more saccade regressions, and skipped more words. They also reduced the size of outgoing saccades, simultaneously increasing fixation duration. Conclusions: The present study shows that patients with mild AD evidenced marked alterations in eye movement behavior during reading, even at early stages of the disease. Hence, evaluation of eye movement behavior during reading might provide a useful tool for a more precise early diagnosis of AD and for dynamical monitoring of the pathology. |
Andrés Fernández-Martín; Aida Gutiérrez-García; Manuel G. Calvo A smile radiates outwards and biases the eye expression Journal Article In: Spanish Journal of Psychology, vol. 16, pp. 1–11, 2013. @article{FernandezMartin2013, This study investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent (angry, fearful, neutral, etc.) eyes, thus producing blended expressions, or they appeared in intact faces with genuine expressions. Overt attention to the eye region was spatially cued, foveal vision of the mouth was blocked by gaze-contingent masking, and the distance between the eyes and the mouth was varied. Participants evaluated whether the eyes were happy or not. Results indicated that the same non-happy eyes were more likely to be judged as happy, and more slowly to be judged as not happy, in presence more than in absence of a smile. As (a) the smiling mouth was highly salient regardless of type of eyes, (b) the influence on the eyes increased gradually as a function of eye-mouth proximity, and (c) the effect occurred in the absence of fixations on the mouth, we conclude that a salient smile radiates outwards to other face regions through a projection mechanism, thus making the eye expression look happy. |
Fernanda Ferreira; Alice Foucart; Paul E. Engelhardt Language processing in the visual world: Effects of preview, visual complexity, and prediction Journal Article In: Journal of Memory and Language, vol. 69, no. 3, pp. 165–182, 2013. @article{Ferreira2013, This study investigates how people interpret spoken sentences in the context of a relevant visual world by focusing on garden-path sentences, such as Put the book on the chair in the bucket, in which the prepositional phrase on the chair is temporarily ambiguous between a goal and modifier interpretation. In three comprehension experiments, listeners heard these types of sentences (along with disambiguated controls) while viewing arrays of objects. These experiments demonstrate that a classic garden-path effect is obtained only when listeners have a preview of the display and when the visual context contains relatively few objects. Results from a production experiment suggest that listeners accrue knowledge that may allow them to have certain expectations of the upcoming utterance based on visual information. Taken together, these findings have theoretical implications for both the role of prediction as an adaptive comprehension strategy, and for how comprehension tendencies change under variable visual and temporal processing demands. |
Jamie Ferri; Joseph Schmidt; Greg Hajcak; Turhan Canli Neural correlates of attentional deployment within unpleasant pictures Journal Article In: NeuroImage, vol. 70, pp. 268–277, 2013. @article{Ferri2013, Attentional deployment is an emotion regulation strategy that involves shifting attentional focus towards or away from particular aspects of emotional stimuli. Previous studies have highlighted the prevalence of attentional deployment and demonstrated that it can have a significant impact on brain activity and behavior. However, little is known about the neural correlates of this strategy. The goal of the present studies was to examine the effect of attentional deployment on neural activity by directing attention to more or less arousing portions of unpleasant images. In Studies 1 and 2, participants passively viewed counterbalanced blocks of unpleasant images without a focus, unpleasant images with an arousing focus, unpleasant images with a non-arousing focus, neutral images without a focus, and neutral images with a non-arousing focus for 4000 ms each. In Study 2, eye-tracking data were collected on all participants during image acquisition. In both studies, affect ratings following each block indicated that participants felt significantly less negative affect after viewing unpleasant images with a non-arousing focus compared to unpleasant images with an arousing focus. In both studies, the unpleasant non-arousing focus condition compared to the unpleasant arousing focus condition was associated with increased activity in frontal and parietal regions implicated in inhibitory control and visual attention. In Study 2, the unpleasant non-arousing focus condition compared to the unpleasant arousing focus condition was associated with reduced activity in the amygdala and visual cortex. 
Collectively these data suggest that attending to a non-arousing portion of an unpleasant image successfully reduces subjective negative affect and recruits fronto-parietal networks implicated in inhibitory control. Moreover, when ensuring task compliance by monitoring eye movements, attentional deployment modulates amygdala activity. |
Yariv Festman; Jos J. Adam; Jay Pratt; Martin H. Fischer Continuous hand movement induces a far-hand bias in attentional priority Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 4, pp. 644–649, 2013. @article{Festman2013, Previous research on the interaction between manual action and visual perception has focused on discrete movements or static postures and discovered better performance near the hands (the near-hand effect). However, in everyday behaviors, the hands are usually moving continuously between possible targets. Therefore, the current study explored the effects of continuous hand motion on the allocation of visual attention. Eleven healthy adults performed a visual discrimination task during cyclical concealed hand movements underneath a display. Both the current hand position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the hand was distant from but moving toward the visual probe location (a far-hand effect). Implications of this novel observation are discussed. |
Ian C. Fiebelkorn; Adam C. Snyder; Manuel R. Mercier; John S. Butler; S. Molholm; John J. Foxe Cortical cross-frequency coupling predicts perceptual outcomes Journal Article In: NeuroImage, vol. 69, pp. 126–137, 2013. @article{Fiebelkorn2013, Functional networks are comprised of neuronal ensembles bound through synchronization across multiple intrinsic oscillatory frequencies. Various coupled interactions between brain oscillators have been described (e.g., phase-amplitude coupling), but with little evidence that these interactions actually influence perceptual sensitivity. Here, electroencephalographic (EEG) recordings were made during a sustained-attention task to demonstrate that cross-frequency coupling has significant consequences for perceptual outcomes (i.e., whether participants detect a near-threshold visual target). The data reveal that phase-detection relationships at higher frequencies are dependent on the phase of lower frequencies, such that higher frequencies alternate between periods when their phase is either strongly or weakly predictive of visual-target detection. Moreover, the specific higher frequencies and scalp topographies linked to visual-target detection also alternate as a function of lower-frequency phase. Cross-frequency coupling between lower (i.e., delta and theta) and higher frequencies (e.g., low- and high-beta) thus results in dramatic fluctuations of visual-target detection. |
Ruth Filik; Hartmut Leuthold The role of character-based knowledge in online narrative comprehension: Evidence from eye movements and ERPs Journal Article In: Brain Research, vol. 1506, pp. 94–104, 2013. @article{Filik2013, Little is known about the on-line evaluation of information relating to well-known story characters during text comprehension. For example, it is not clear in how much detail readers represent character-based information, and the time course over which this information is utilized during on-line language comprehension. We describe an event-related potential (ERP) study (Experiment 1) and an eye-tracking study (Experiment 2) investigating whether, and when, readers utilize their prior knowledge of a character in processing event information. Participants read materials in which an event was described that either did or did not fit with the character's typical behavior. ERPs elicited by the critical word revealed an N400 effect when the action described did not fit with the character's typical behavior. Results from early eye movement measures supported these findings, and later measures suggested that such violations were more easily accommodated for well-known fictional characters than real-world characters. |
Christopher D. Fiorillo; Minryung R. Song; Sora R. Yun Multiphasic temporal dynamics in responses of midbrain dopamine neurons to appetitive and aversive stimuli Journal Article In: Journal of Neuroscience, vol. 33, no. 11, pp. 4710–4725, 2013. @article{Fiorillo2013, The transient response of dopamine neurons has been described as reward prediction error (RPE), with activation or suppression by events that are better or worse than expected, respectively. However, at least a minority of neurons are activated by aversive or high-intensity stimuli, casting doubt on the generality of RPE in describing the dopamine signal. To overcome limitations of previous studies, we studied neuronal responses to a wider variety of high-intensity and aversive stimuli, and we quantified and controlled aversiveness through a choice task in which macaques sacrificed juice to avoid aversive stimuli. Whereas most previous work has portrayed the RPE as a single impulse or "phase," here we demonstrate its multiphasic temporal dynamics. Aversive or high-intensity stimuli evoked a triphasic sequence of activation-suppression-activation extending over a period of 40-700 ms. The initial activation at short latencies (40-120 ms) reflected sensory intensity. The influence of motivational value became dominant between 150 and 250 ms, with activation in the case of appetitive stimuli, and suppression in the case of aversive and neutral stimuli. The previously unreported late activation appeared to be a modest "rebound" after strong suppression. Similarly, strong activation by reward was often followed by suppression. We suggest that these "rebounds" may result from overcompensation by homeostatic mechanisms in some cells. Our results are consistent with a realistic RPE, which evolves over time through a dynamic balance of excitation and inhibition. |
Christopher D. Fiorillo; Sora R. Yun; Minryung R. Song Diversity and homogeneity in responses of midbrain dopamine neurons Journal Article In: Journal of Neuroscience, vol. 33, no. 11, pp. 4693–4709, 2013. @article{Fiorillo2013a, Dopamine neurons of the ventral midbrain have been found to signal a reward prediction error that can mediate positive reinforcement. Despite the demonstration of modest diversity at the cellular and molecular levels, there has been little analysis of response diversity in behaving animals. Here we examine response diversity in rhesus macaques to appetitive, aversive, and neutral stimuli having relative motivational values that were measured and controlled through a choice task. First, consistent with previous studies, we observed a continuum of response variability and an apparent absence of distinct clusters in scatter plots, suggesting a lack of statistically discrete subpopulations of neurons. Second, we found that a group of "sensitive" neurons tend to be more strongly suppressed by a variety of stimuli and to be more strongly activated by juice. Third, neurons in the "ventral tier" of substantia nigra were found to have greater suppression, and a subset of these had higher baseline firing rates and late "rebound" activation after suppression. These neurons could belong to a previously identified subgroup of dopamine neurons that express high levels of H-type cation channels but lack calbindin. Fourth, neurons further rostral exhibited greater suppression. Fifth, although we observed weak activation of some neurons by aversive stimuli, this was not associated with their aversiveness. In conclusion, we find a diversity of response properties, distributed along a continuum, within what may be a single functional population of neurons signaling reward prediction error. |
Thomas Fischer; Sven-Thomas Graupner; Boris M. Velichkovsky; Sebastian Pannasch Attentional dynamics during free picture viewing: Evidence from oculomotor behavior and electrocortical activity Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 17, 2013. @article{Fischer2013, Most empirical evidence on attentional control is based on brief presentations of rather abstract stimuli. Results revealed indications for a dynamic interplay between bottom-up and top-down attentional mechanisms. Here we used a more naturalistic task to examine temporal signatures of attentional mechanisms on fine and coarse time scales. Subjects had to inspect digitized copies of 60 paintings, each shown for 40 s. We simultaneously measured oculomotor behavior and electrophysiological correlates of brain activity to compare early and late intervals (1) of inspection time of each picture (picture viewing) and (2) of the full experiment (time on task). For picture viewing, we found an increase in fixation duration and a decrease of saccadic amplitude while these parameters did not change with time on task. Furthermore, early in picture viewing we observed higher spatial and temporal similarity of gaze behavior. Analyzing electrical brain activity revealed changes in three components (C1, N1 and P2) of the eye fixation-related potential (EFRP) during picture viewing; no variation was obtained for power in the frontal beta and theta activity. Time on task analyses demonstrated no effects on the EFRP amplitudes but an increase of power in the frontal theta and beta band activity. Thus, behavioral and electrophysiological measures similarly show characteristic changes during picture viewing, indicating a shifting balance of its underlying (bottom-up and top-down) attentional mechanisms. Time on task also modulated top-down attention but probably represents a different attentional mechanism. |
Gemma Fitzsimmons How fast can predictability influence word skipping during reading? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 4, pp. 1054–1063, 2013. @article{Fitzsimmons2013, Participants' eye movements were tracked when reading sentences in which target word predictability was manipulated to be unpredictable from the preceding context, predictable from the sentence preceding the one in which the target word was embedded, or predictable from the adjective directly preceding the target word. Results show that there was no difference in skipping rates between the 2 predictability conditions, which were skipped more often than the neutral condition. This suggests that predictability can impact the decision of whether to skip a word to a similar degree irrespective of whether the predictability originated from the prior word or the entire preceding sentence context. This finding can only be explained by models of eye-movement control during reading that assume that word n is processed up to a high level before the decision to skip word n + 1 is made. |
Heather Flowe; Lorraine Hope; Anne P. Hillstrom Oculomotor examination of the weapon focus effect: Does a gun automatically engage visual attention? Journal Article In: PLoS ONE, vol. 8, no. 12, pp. e81011, 2013. @article{Flowe2013, BACKGROUND: A person is less likely to be accurately remembered if they appear in a visual scene with a gun, a result that has been termed the weapon focus effect (WFE). Explanations of the WFE argue that weapons engage attention because they are unusual and/or threatening, which causes encoding deficits for the other items in the visual scene. Previous WFE research has always embedded the weapon and nonweapon objects within a larger context that provides information about an actor's intention to use the object. As such, it is currently unknown whether a gun automatically engages attention to a greater extent than other objects independent of the context in which it is presented. METHOD: Reflexive responding to a gun compared to other objects was examined in two experiments. Experiment 1 employed a prosaccade gap-overlap paradigm, whereby participants looked toward a peripheral target, and Experiment 2 employed an antisaccade gap-overlap paradigm, whereby participants looked away from a peripheral target. In both experiments, the peripheral target was a gun or a nonthreatening object (i.e., a tomato or pocket watch). We also controlled how unexpected the targets were and compared saccadic reaction times across types of objects. RESULTS: A gun was not found to differentially engage attention compared to the unexpected object (i.e., a pocket watch). Some evidence was found (Experiment 2) that both the gun and the unexpected object engaged attention to a greater extent compared to the expected object (i.e., a tomato). CONCLUSION: An image of a gun did not engage attention to a larger extent than images of other types of objects (i.e., a pocket watch or tomato). The results suggest that context may be an important determinant of the WFE. The extent to which an object is threatening may depend on the larger context in which it is presented. |
Tori E. Foster; Scott P. Ardoin; Katherine S. Binder Underlying changes in repeated reading: An eye movement study Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 140–156, 2013. @article{Foster2013, Past research supports the use of repeated reading but does not provide conclusive evidence as to the mechanisms through which RR takes effect. Eye movement studies allow for precise examination of intervention effects. The current study examined underlying changes in elementary students' (N = 43) reading behavior across four consecutive readings of the same passage. Passage-level analyses revealed that rereading yielded significant decreases in measures thought to reflect early processing (i.e., first fixation duration, gaze duration) and higher level processing (i.e., total fixation time, number of regressions, average number of fixations per word). Analyses based on embedded high- and low-frequency target words suggested that repeated reading mainly facilitates reading of low-frequency words, but that children remain sensitive to word frequency after rereading. Finally, results indicated that children who have completed repeated reading continue to focus on word-level (vs. passage-level) reading but devote less overall attention to individual words with repeated practice. |
Tom Foulsham; James Farley; Alan Kingstone Mind wandering in sentence reading: Decoupling the link between mind and eye Journal Article In: Canadian Journal of Experimental Psychology, vol. 67, no. 1, pp. 51–59, 2013. @article{Foulsham2013, When people read, their thoughts sometimes drift away from the task at hand: They are "mind wandering." Recent research suggests that this change in task focus is reflected in eye movements and this was tested in an experiment using controlled stimuli. Participants were presented with a series of sentences containing high- and low-frequency words, which they read while being eye-tracked, and they were sometimes probed to indicate whether they were on task or mind wandering. The results showed multiple differences between reading prior to a mind-wandering response and reading when on task: Mind wandering led to slower reading times, longer average fixation duration, and an absence of the word frequency effect on gaze duration. Collectively, these findings confirm that task focus could be inferred from eye movements, and they indicate that the link between word identification and eye scanning is decoupled when the mind wanders. |
Tom Foulsham; Alexander Gray; Eleni Nasiopoulos; Alan Kingstone Leftward biases in picture scanning and line bisection: A gaze-contingent window study Journal Article In: Vision Research, vol. 78, pp. 14–25, 2013. @article{Foulsham2013d, A bias for humans to attend to the left side of space has been reported in a variety of experiments. While patients with hemispatial neglect mistakenly bisect horizontal lines to the right of centre, neurologically healthy individuals show a mean leftward error. Here, two experiments demonstrated a robust tendency for participants to saccade to the left when viewing photographs. We were able to manipulate this bias by using an asymmetrical gaze-contingent window, which revealed more of the scene on one side of fixation, causing participants to saccade more often in that direction. A second experiment demonstrated the same change in eye movements occurring rapidly from trial to trial, and investigated whether it would carry over and affect attention during a line bisection task. There was some carry-over from gaze-contingent scene viewing to the eye movements during line bisection. However, despite frequent initial eye movements and many errors to the left, manual responses were not affected by this change in orienting. We conclude that the mechanisms underlying asymmetrical attention in picture scanning and line bisection are flexible and can be separated, with saccades in scene perception driven more by a skewed perceptual span. |
Tom Foulsham; Alan Kingstone Fixation-dependent memory for natural scenes: An experimental test of scanpath theory Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 41–56, 2013. @article{Foulsham2013a, Many modern theories propose that perceptual information is represented by the sensorimotor activity elicited by the original stimulus. Scanpath theory (Noton & Stark, 1971) predicts that reinstating a sequence of eye fixations will help an observer recognize a previously seen image. However, the only studies to investigate this are correlational ones based on calculating scanpath similarity. We therefore describe a series of 5 experiments that constrain the fixations during encoding or recognition of images in order to manipulate scanpath similarity. Participants encoded a set of images and later had to recognize those that they had seen. They spontaneously selected regions that they had fixated during encoding (Experiment 1), and this was a predictor of recognition accuracy. Yoking the parts of the image available at recognition to the encoded scanpath led to better memory performance than randomly selected image regions (Experiment 2), and this could not be explained by the spatial distribution of locations (Experiment 3). However, there was no recognition advantage for re-viewing one's own fixations versus someone else's (Experiment 4) or for retaining their serial order (Experiment 5). Therefore, although it is beneficial to look at encoded regions, there is no evidence that scanpaths are stored or that scanpath recapitulation is functional in scene memory. This paradigm provides a controlled way of studying the integration of scene content, spatial structure, and oculomotor signals, with consequences for the perception, representation, and retrieval of visual information. |
Tom Foulsham; Alan Kingstone Optimal and preferred eye landing positions in objects and scenes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 9, pp. 1707–1728, 2013. @article{Foulsham2013b, Viewing position effects are commonly observed in reading, but they have only rarely been investigated in object perception or in the realistic context of a natural scene. In two experiments, we explored where people fixate within photorealistic objects and the effects of this landing position on recognition and subsequent eye movements. The results demonstrate an optimal viewing position: objects are processed more quickly when fixation is in the centre of the object. Viewers also prefer to saccade to the centre of objects within a natural scene, even when making a large saccade. A central landing position is associated with an increased likelihood of making a refixation, a result that differs from previous reports and suggests that multiple fixations within objects, within scenes, occur for a range of reasons. These results suggest that eye movements within scenes are systematic and are made with reference to an early parsing of the scene into constituent objects. |
Tom Foulsham; Lucy Anne Sanderson Look who's talking? Sound changes gaze behaviour in a dynamic social scene Journal Article In: Visual Cognition, vol. 21, no. 7, pp. 922–944, 2013. @article{Foulsham2013c, Humans often look at other people in natural scenes, and previous research has shown that these looks follow the conversation and that they are sensitive to sound in audiovisual speech perception. In the present experiment, participants viewed video clips of four people involved in a discussion. By removing the sound, we asked whether auditory information would affect when speakers were fixated, how fixations between different observers were synchronized, and whether the eyes or mouth were looked at most often. The results showed that sound changed the timing of looks, by alerting observers to changes in conversation and attracting attention to the speaker. Clips with sound also led to greater attentional synchrony, with more observers fixating the same regions at the same time. However, looks towards the eyes of the people continued to dominate and were unaffected by removing the sound. These findings provide a rich example of multimodal social attention. |
Stefan L. Frank; Irene Fernandez Monsalve; Robin L. Thompson; Gabriella Vigliocco Reading time data for evaluating broad-coverage models of English sentence processing Journal Article In: Behavior Research Methods, vol. 45, no. 4, pp. 1182–1190, 2013. @article{Frank2013, We make available word-by-word self-paced reading times and eye-tracking data over a sample of English sentences from narrative sources. These data are intended to form a gold standard for the evaluation of computational psycholinguistic models of sentence comprehension in English. We describe stimuli selection and data collection and present descriptive statistics, as well as comparisons between the two sets of reading times. |
Jeremy Freeman; David J. Heeger; Elisha P. Merriam Coarse-scale biases for spirals and orientation in human visual cortex Journal Article In: Journal of Neuroscience, vol. 33, no. 50, pp. 19695–19703, 2013. @article{Freeman2013, Multivariate decoding analyses are widely applied to functional magnetic resonance imaging (fMRI) data, but there is controversy over their interpretation. Orientation decoding in primary visual cortex (V1) reflects coarse-scale biases, including an over-representation of radial orientations. But fMRI responses to clockwise and counter-clockwise spirals can also be decoded. Because these stimuli are matched for radial orientation, while differing in local orientation, it has been argued that fine-scale columnar selectivity for orientation contributes to orientation decoding. We measured fMRI responses in human V1 to both oriented gratings and spirals. Responses to oriented gratings exhibited a complex topography, including a radial bias that was most pronounced in the peripheral representation, and a near-vertical bias that was most pronounced near the foveal representation. Responses to clockwise and counter-clockwise spirals also exhibited coarse-scale organization, at the scale of entire visual quadrants. The preference of each voxel for clockwise or counter-clockwise spirals was predicted from the preferences of that voxel for orientation and spatial position (i.e., within the retinotopic map). Our results demonstrate a bias for local stimulus orientation that has a coarse spatial scale, is robust across stimulus classes (spirals and gratings), and suffices to explain decoding from fMRI responses in V1. |
Megan Freeth; Tom Foulsham; Alan Kingstone What affects social attention? Social presence, eye contact and autistic traits Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e53286, 2013. @article{Freeth2013, Social understanding is facilitated by effectively attending to other people and the subtle social cues they generate. In order to more fully appreciate the nature of social attention and what drives people to attend to social aspects of the world, one must investigate the factors that influence social attention. This is especially important when attempting to create models of disordered social attention, e.g. a model of social attention in autism. Here we analysed participants' viewing behaviour during one-to-one social interactions with an experimenter. Interactions were conducted either live or via video (social presence manipulation). The participant was asked and then required to answer questions. Experimenter eye-contact was either direct or averted. Additionally, the influence of participant self-reported autistic traits was also investigated. We found that regardless of whether the interaction was conducted live or via a video, participants frequently looked at the experimenter's face, and they did this more often when being asked a question than when answering. Critical differences in social attention between the live and video interactions were also observed. Modifications of experimenter eye contact influenced participants' eye movements in the live interaction only; and increased autistic traits were associated with less looking at the experimenter for video interactions only. We conclude that analysing patterns of eye-movements in response to strictly controlled video stimuli and natural real-world stimuli furthers the field's understanding of the factors that influence social attention. |
Aline Frey; Gelu Ionescu; Benoît Lemaire; Francisco López-Orozco; Thierry Baccino; Anne Guérin-Dugué Decision-making in information seeking on texts: an eye-fixation-related potentials investigation Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 39, 2013. @article{Frey2013, Reading on a web page is known to be nonlinear, and people need to make fast decisions about whether or not to stop reading. In such a context, reading and decision-making processes are intertwined, and this experiment attempts to separate them through electrophysiological patterns provided by the Eye-Fixation-Related Potentials technique (EFRPs). We conducted an experiment in which EFRPs were recorded while participants read blocks of text that were semantically highly related, moderately related, and unrelated to a given goal. Participants had to decide as fast as possible whether the text was related or not to the semantic goal given at a prior stage. Decision making (stopping information search) may occur when the paragraph is highly related to the goal (positive decision) or when it is unrelated to the goal (negative decision). EFRPs were analyzed on and around typical eye fixations: either on words belonging to the goal (target), subjected to a high rate of positive decisions, or on low frequency unrelated words (incongruent), subjected to a high rate of negative decisions. In both cases, we found EFRP-specific patterns (amplitude peaking between 51 and 120 ms after fixation onset) spreading out on the next words following the goal word and the second fixation after an incongruent word, in parietal and occipital areas. We interpreted these results as delayed late components (P3b and N400), reflecting the decision to stop information searching. Indeed, we show a clear spill-over effect: the effect on word N spread out onto words N + 1 and N + 2. |
Hans Peter Frey; Sophie Molholm; Edmund C. Lalor; Natalie N. Russo; John J. Foxe Atypical cortical representation of peripheral visual space in children with an autism spectrum disorder Journal Article In: European Journal of Neuroscience, vol. 38, no. 1, pp. 2125–2138, 2013. @article{Frey2013a, A key feature of early visual cortical regions is that they contain discretely organized retinotopic maps. Titration of these maps must occur through experience, and the fidelity of their spatial tuning will depend on the consistency and accuracy of the eye movement system. Anomalies in fixation patterns and the ballistics of eye movements are well documented in autism spectrum disorder (ASD), with off-center fixations a hallmark of the phenotype. We hypothesized that these atypicalities might affect the development of visuo-spatial maps and specifically that peripheral inputs might receive altered processing in ASD. Using high-density recordings of visual evoked potentials (VEPs) and a novel system-identification approach known as VESPA (visual evoked spread spectrum analysis), we assessed sensory responses to centrally and peripherally presented stimuli. Additionally, input luminance was varied to bias responsiveness to the magnocellular system, given previous suggestions of magnocellular-specific deficits in ASD. Participants were 22 ASD children (7-17 years of age) and 31 age- and performance-IQ-matched neurotypical controls. Both VEP and VESPA responses to central presentations were indistinguishable between groups. In contrast, peripheral presentations resulted in significantly greater early VEP and VESPA amplitudes in the ASD cohort. We found no evidence that anomalous enhancement was restricted to magnocellular-biased responses. The extent of peripheral response enhancement was related to the severity of stereotyped behaviors and restricted interests, cardinal symptoms of ASD. The current results point to differential visuo-spatial cortical mapping in ASD, shedding light on the consequences of peculiarities in gaze and stereotyped visual behaviors often reported by clinicians working with this population. |
Teresa C. Frohman; Scott L. Davis; Shin Beh; Benjamin M. Greenberg; Gina Remington; Elliot M. Frohman Uhthoff's phenomena in MS - clinical features and pathophysiology Journal Article In: Nature Reviews Neurology, vol. 9, no. 9, pp. 535–540, 2013. @article{Frohman2013, In the late 19th century, Wilhelm Uhthoff reported on a series of patients with acute optic neuritis who manifested similar recurrent, stereotyped visual symptoms that were of paroxysmal onset, short in duration, and reversible. These 'Uhthoff's phenomena', which are a feature of multiple sclerosis (MS) and other demyelinating diseases, can be triggered by factors including the perimenstrual period, exercise, infection, fever, exposure to high ambient temperatures, and psychological stress. Here, we characterize the clinical, pathophysiological and neurotherapeutic challenges associated with Uhthoff's phenomena, and discuss the differentiation of these events from other paroxysmal, acute or subacute changes in functional capabilities and neurological symptoms in MS. For instance, whereas MS exacerbations are contingent on immune dysregulation, Uhthoff's phenomena are predicated on ion channel modifications, in conjunction with thermoregulatory derangements that transiently alter the conduction properties of demyelinated axons. An understanding of these pathophysiological underpinnings of Uhthoff's phenomena is germane to their recognition and timely treatment. |
Isabella Fuchs; Jan Theeuwes; Ulrich Ansorge Exogenous attentional capture by subliminal abrupt-onset cues: Evidence from contrast-polarity independent cueing effects Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 974–988, 2013. @article{Fuchs2013, In the present study, we tested whether subliminal abrupt-onset cues capture attention in a bottom-up or top-down controlled manner. For our tests, we varied the searched-for target-contrast polarity (i.e., dark or light targets against a gray background) over four experiments. In line with the bottom-up hypothesis, our results indicate that subliminal-onset cues capture attention independently of the searched-for target-contrast polarity (Experiment 1), and this effect is not stronger for targets that matched the searched-for target-contrast polarity (Experiment 2). In fact, even to-be-ignored cues associated with a no-go response captured attention in a salience-driven way (Experiment 3). For supraliminal cues, we found attentional capture only by cues with a matching contrast polarity, reflecting contingent capture (Experiment 4). The results point toward a specific role of subliminal abrupt onsets for attentional capture. |
Shai Gabay; Yoni Pertzov; Noga Cohen; Galia Avidan; Avishai Henik Remapping of the environment without corollary discharges: Evidence from scene-based IOR Journal Article In: Journal of Vision, vol. 13, pp. 1–10, 2013. @article{Gabay2013, Previous studies suggested that in order to perceive a stable image of the visual world despite constant eye movements, an efference copy of the oculomotor command is used to remap the representation of the environment in the brain. In two experiments, an inhibitory attentional component (inhibition of return; IOR) was used to examine whether remapping can also occur in the absence of eye movements. Participants were asked to maintain fixation while an unpredictive, attention-grabbing cue appeared and was then followed by a movement of the background image, which was artificial (random dots, Experiment 1) or composed of natural scenes (Experiment 2). The participants were then required to respond to a target stimulus that was presented either at the same location as the cue relative to fixation (retinotopic), or at a matching location relative to the background (scene based). In both experiments, an IOR effect was found in scene-based locations immediately after the movement of the background. We suggest that remapping of the inhibitory tagging, which might be a proxy for remapping of the visual scene, could be accomplished rapidly even without the use of an efference copy; the inhibitory tag seems to be anchored to the background image and to move together with it. |
Jose Ignacio Egaña; Christ Devia; Rocío Mayol; Javiera Parrini; Gricel Orellana; Aida Ruiz; Pedro E. Maldonado Small saccades and image complexity during free viewing of natural images in schizophrenia Journal Article In: Frontiers in Psychiatry, vol. 4, pp. 37, 2013. @article{Egana2013, In schizophrenia, patients display dysfunctions during the execution of simple visual tasks such as antisaccade or smooth pursuit. In more ecological scenarios, such as free viewing of natural images, patients appear to make fewer and longer visual fixations and display shorter scanpaths. It is not clear whether these measurements reflect alterations in their proficiency to perform basic eye movements, such as saccades and fixations, or are related to high-level mechanisms, such as exploration or attention. We utilized free exploration of natural images of different complexities as a model of an ecological context where normally operative mechanisms of visual control can be accurately measured. We quantified visual exploration as Euclidean distance, scanpaths, saccades, and visual fixation, using the standard SR-Research eye tracker algorithm (SR). We then compared this result with a computation that includes microsaccades (EM). We evaluated eight schizophrenia patients and corresponding healthy controls (HC). Next, we tested whether the decrement in the number of saccades and fixations, as well as their increment in duration reported previously in schizophrenia patients, resulted from the increasing occurrence of undetected microsaccades. We found that when utilizing the standard SR algorithm, patients displayed shorter scanpaths as well as fewer and shorter saccades and fixations. When we employed the EM algorithm, the differences in these parameters between patients and HC were no longer significant. On the other hand, we found that image complexity plays an important role in exploratory behaviors, demonstrating that this factor explains most of the differences between eye-movement behaviors in schizophrenia patients. These results help elucidate the mechanisms of visual motor control that are affected in schizophrenia and contribute to the finding of adequate markers for diagnosis and treatment for this condition. |
Caroline Ego; Jean-Jacques Orban de Xivry; Marie-Cécile Nassogne; Demet Yüksel; Philippe Lefèvre The saccadic system does not compensate for the immaturity of the smooth pursuit system during visual tracking in children Journal Article In: Journal of Neurophysiology, vol. 110, no. 2, pp. 358–367, 2013. @article{Ego2013, Motor skills improve with age from childhood into adulthood, and this improvement is reflected in the performance of smooth pursuit eye movements. In contrast, the saccadic system becomes mature earlier than the smooth pursuit system. Therefore, the present study investigates whether the early mature saccadic system compensates for the lower pursuit performance during childhood. To answer this question, horizontal eye movements were recorded in 58 children (ages 5-16 yr) and 16 adults (ages 23-36 yr) in a task that required the combination of smooth pursuit and saccadic eye movements. Smooth pursuit performance improved with age. However, children had larger average position error during target tracking compared with adults, but they did not execute more saccades to compensate for their low pursuit performance despite the early maturity of their saccadic system. This absence of error correction suggests that children have a lower sensitivity to visual errors compared with adults. This reduced sensitivity might stem from poor internal models and longer processing time in young children. |
Miriam Ellert Resolving ambiguous pronouns in a second language: A visual-world eye-tracking study with Dutch learners of German Journal Article In: International Review of Applied Linguistics in Language Teaching, vol. 51, no. 2, pp. 171–197, 2013. @article{Ellert2013, This study examined whether resolving ambiguous pronouns in a second language is guided by the L1 preferences of the learners. Given the fact that the typologically closely related languages, German and Dutch, have both been found to use personal pronouns (German er, Dutch hij, 'he') to refer to topical antecedents, and d-pronouns (German der, Dutch die, 'he') for non-topical co-reference (Ellert 2010; Kaiser 2011; Kaiser and Trueswell 2004), it was asked whether Dutch L2 learners of German would exhibit similar preferences when resolving the two pronominal forms in their L2. This was examined with the visual-world eye-tracking paradigm and an off-line referent assignment task. The results showed differences in resolution patterns: the Dutch learners of German showed an overall topic preference across pronouns which became more target-like at higher proficiency levels. This suggests that L2 information organization cannot be merely explained by L1 influences, but needs to take more general L2 learner effects into account. |
James T. Enns; Sarah C. MacDonald The role of clarity and blur in guiding visual attention in photographs Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 2, pp. 568–578, 2013. @article{Enns2013, Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory and personality ratings for individual people in the photos (Experiments 1-3). The results showed that fixations occurred more rapidly and frequently to a local region of clarity than to a comparable blurred region in all tasks, independent of the content of the photo in the local region, and even under instructions to look equally at both regions. However, this bias was reversed when the content of the photos was no longer task-relevant. In Experiment 4, participants located target regions defined by either clarity or blur. Fixations and manual responses were faster for blurred than for sharp targets. These findings imply that the saliency of both image clarity and image blur depends on viewers' goals. Focusing on photo content prioritizes regions of clarity whereas focusing on photo quality prioritizes attention to regions of blur. |
Neil K. Archibald; Samuel B. Hutton; Michael P. Clarke; Urs P. Mosimann; David J. Burn Visual exploration in Parkinson's disease and Parkinson's disease dementia Journal Article In: Brain, vol. 136, no. 3, pp. 739–750, 2013. @article{Archibald2013, Parkinson's disease, typically thought of as a movement disorder, is increasingly recognized as causing cognitive impairment and dementia. Eye movement abnormalities are also described, including impairment of rapid eye movements (saccades) and the fixations interspersed between them. Such movements are under the influence of cortical and subcortical networks commonly targeted by the neurodegeneration seen in Parkinson's disease and, as such, may provide a marker for cognitive decline. This study examined the error rates and visual exploration strategies of subjects with Parkinson's disease, with and without cognitive impairment, whilst performing a battery of visuo-cognitive tasks. Error rates were significantly higher in those Parkinson's disease groups with either mild cognitive impairment (P = 0.001) or dementia (P < 0.001), than in cognitively normal subjects with Parkinson's disease. When compared with cognitively normal subjects with Parkinson's disease, exploration strategy, as measured by a number of eye tracking variables, was least efficient in the dementia group but was also affected in those subjects with Parkinson's disease with mild cognitive impairment. When compared with control subjects and cognitively normal subjects with Parkinson's disease, saccade amplitudes were significantly reduced in the groups with mild cognitive impairment or dementia. Fixation duration was longer in all Parkinson's disease groups compared with healthy control subjects but was longest for cognitively impaired Parkinson's disease groups. The strongest predictor of average fixation duration was disease severity. 
Analysing only data from the most complex task, with the highest error rates, both cognitive impairment and disease severity contributed to a predictive model for fixation duration [F(2,76) = 12.52, P < 0.001], but medication dose did not (r = 0.18). |
Scott P. Ardoin; Katherine S. Binder; Andrea M. Zawoyski; Tori E. Foster; Leslie A. Blevins Using eye-tracking procedures to evaluate generalization effects: Practicing target words during repeated readings within versus across texts Journal Article In: School Psychology Review, vol. 42, no. 4, pp. 477–495, 2013. @article{Ardoin2013, Repeated readings is a frequently studied and recommended intervention for improving reading fluency. Typically, researchers investigate generalization of repeated readings interventions by assessing students' accuracy and rate on researcher-developed high word overlap passages. Unfortunately, this methodology may mask intervention effects given that the dependent measure is reflective of time spent by students reading both practiced and unpracticed words. Eye-tracking procedures have the potential to overcome this limitation. The current study examined the eye movements of participants who were (a) not provided with any intervention (n = 28), (b) provided with repeated readings on a single passage containing a set of target words (n = 28), or (c) provided the opportunity to read four different passages each containing the same set of target words (n = 28). Students' reading of a novel passage containing the target words provides evidence to support recommendations that schools use repeated readings. |
Wael F. Asaad; Navaneethan Santhanam; Steven McClellan; David J. Freedman High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB Journal Article In: Journal of Neurophysiology, vol. 109, no. 1, pp. 249–260, 2013. @article{Asaad2013, Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. |
Jane Ashby; Heather Dix; Morgan Bontrager Phonemic awareness contributes to text reading fluency: Evidence from eye movements Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 157–170, 2013. @article{Ashby2013, Although phonemic awareness is a known predictor of early decoding and word recognition, less is known about relationships between phonemic awareness and text reading fluency. This longitudinal study is the first to investigate this relationship by measuring eye movements during picture matching tasks and during silent sentence reading. Time spent looking at the correct target during phonemic awareness and receptive spelling tasks gauged the efficiency of phonological and orthographic processes. Children's eye movements during sentence reading provided a direct measure of silent reading fluency for comprehended text. Results indicate that children who processed the phonemic awareness targets more slowly in Grade 2 tended to be slower readers in Grade 3. Processing difficulty during a receptive spelling task was related to reading fluency within Grade 2. Findings suggest that inefficient phonemic processing contributes to poor silent reading fluency after second grade. |
Senay Aydin; Niall C. Strang; Velitchko Manahilov Age-related deficits in attentional control of perceptual rivalry Journal Article In: Vision Research, vol. 77, pp. 32–40, 2013. @article{Aydin2013, Some aspects of attentional processing are known to decline with normal aging. To understand how age affects the attentional control of perceptual stability, we investigated age-related changes in voluntarily controlled perceptual rivalry. Durations of the dominant percept, produced by an ambiguous Rubin vase-faces figure, were measured in conditions that required passive viewing and attentional control: holding and switching the dominant percept. During passive viewing, mean dominance duration in the older group was significantly longer (63%) than the dominance duration found in the young group. This age-related deficit could be due to a decline in the apparent strength of the alternating percepts as a result of higher contrast gain of visual cortical activity and a reduction in the amount of attentional resources allocated to the ambiguous stimulus in older people compared to young adults. In comparison to passive viewing, holding the dominant percept did not significantly alter the dominance durations in the older group, while the dominance durations in the young group were increased (~100%). The dominance durations for both age groups in switch conditions were reduced compared to their passive viewing durations (~40%). The inability of older people to voluntarily prolong the duration of the dominant percept suggests that they may have abnormal attentional mechanisms, which are inefficient at enhancing the effective strength of the dominant percept. Results suggest that older adults have difficulty holding attended visual objects in focus, a problem that could affect their ability to carry out everyday tasks. |
Nicola C. Anderson; Walter F. Bischof; Kaitlin E. W. Laidlaw; Evan F. Risko; Alan Kingstone Recurrence quantification analysis of eye movements Journal Article In: Behavior Research Methods, vol. 45, pp. 842–856, 2013. @article{Anderson2013, Recurrence quantification analysis (RQA) has been successfully used for describing dynamic systems that are too complex to be characterized adequately by standard methods in time series analysis. More recently, RQA has been used for analyzing the coordination of gaze patterns between cooperating individuals. Here, we extend RQA to the characterization of fixation sequences, and we show that the global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation in this context. We applied RQA to the analysis of a study in which observers looked at different scenes under natural or gaze-contingent viewing conditions, and we found large differences in the RQA measures between the viewing conditions, indicating that RQA is a powerful new tool for the analysis of the temporal patterns of eye movement behavior. |
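The recurrence idea behind the method Anderson and colleagues describe can be illustrated in a few lines: two fixations are "recurrent" when they land within some radius of each other, and global RQA measures are summary statistics of the resulting binary matrix. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the function names, the toy fixation coordinates, and the 50-pixel radius are assumptions.

```python
import numpy as np

def recurrence_matrix(fixations, radius):
    """Binary matrix: entry (i, j) is 1 if fixations i and j fall within `radius`."""
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    return (d <= radius).astype(int)

def recurrence_rate(fixations, radius):
    """Percentage of ordered off-diagonal fixation pairs that are recurrent."""
    r = recurrence_matrix(fixations, radius)
    n = len(fixations)
    off_diag = r.sum() - n  # the diagonal is trivially recurrent; exclude it
    return 100.0 * off_diag / (n * (n - 1))

# Toy fixation sequence as (x, y) screen coordinates in pixels:
# the third fixation returns close to the first, so one pair is recurrent.
fix = np.array([[100, 100], [400, 300], [105, 98], [600, 500]])
rate = recurrence_rate(fix, radius=50)
```

Measures such as determinism or laminarity would then be computed from diagonal and vertical line structures in the same matrix.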
Bernhard Angele; Keith Rayner Processing the in the parafovea: Are articles skipped automatically? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 649–662, 2013. @article{Angele2013, One of the words that readers of English skip most often is the definite article the. Most accounts of reading assume that in order for a reader to skip a word, it must have received some lexical processing. The definite article is skipped so regularly, however, that the oculomotor system might have learned to skip the letter string t-h-e automatically. We tested whether skipping of articles in English is sensitive to context information or whether it is truly automatic in the sense that any occurrence of the letter string the will trigger a skip. This was done using the gaze-contingent boundary paradigm (Rayner, 1975) to provide readers with false parafoveal previews of the article the. All experimental sentences contained a short target verb, the preview of which could be correct (i.e., identical to the actual subsequent word in the sentence; e.g., ace), a nonword (tda), or an infelicitous article preview (the). Our results indicated that readers tended to skip the infelicitous the previews frequently, suggesting that, in many cases, they seemed to be unable to detect the syntactic anomaly in the preview and based their skipping decision solely on the orthographic properties of the article. However, there was some evidence that readers sometimes detected the anomaly, as they also showed increased skipping of the pretarget word in the the preview condition. |
Bernhard Angele; Keith Rayner Eye movements and parafoveal preview of compound words: Does morpheme order matter? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 505–526, 2013. @article{Angele2013a, Recently, there has been considerable debate about whether readers can identify multiple words in parallel or whether they are limited to a serial mode of word identification, processing one word at a time (see, e.g., Reichle, Liversedge, Pollatsek, & Rayner, 2009). Similar questions can be applied to bimorphemic compound words: Do readers identify all the constituents of a compound word in parallel, and does it matter which of the morphemes is identified first? We asked subjects to read compound words embedded in sentences while monitoring their eye movements. Using the boundary paradigm (Rayner, 1975), we manipulated the preview that subjects received of the compound word before they fixated it. In particular, the morpheme order of the preview was either normal (cowboy) or reversed (boycow). Additionally, we manipulated the preview availability for each of the morphemes separately. Preview was thus available for the first morpheme only (cowtxg), for the second morpheme only (enzboy), or for neither of the morphemes (enztxg). We report three major findings: First, there was an effect of morpheme order on gaze durations measured on the compound word, indicating that, as expected, readers obtained a greater preview benefit when the preview presented the morphemes in the correct order than when their order was reversed. Second, gaze durations on the compound word were influenced not only by preview availability for the first, but also by that for the second morpheme. Finally, and most importantly, the results show that readers are able to extract some morpheme information even from a reverse order preview. 
In summary, readers obtain preview benefit from both constituents of a short compound word, even when the preview does not reflect the correct morpheme order. |
Bernhard Angele; Randy Tran; Keith Rayner Parafoveal-foveal overlap can facilitate ongoing word identification during reading: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 2, pp. 526–538, 2013. @article{Angele2013b, Readers continuously receive parafoveal information about the upcoming word in addition to the foveal information about the currently fixated word. Previous research (Inhoff, Radach, Starr, & Greenberg, 2000) showed that the presence of a parafoveal word that was similar to the foveal word facilitated processing of the foveal word. We used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the parafoveal information that subjects received before or while fixating a target word (e.g., news) within a sentence. Specifically, a reader's parafovea could contain a repetition of the target (news), a correct preview of the posttarget word (once), an unrelated word (warm), random letters (cxmr), a nonword neighbor of the target (niws), a semantically related word (tale), or a nonword neighbor of that word (tule). Target fixation times were significantly lower in the parafoveal repetition condition than in all other conditions, suggesting that foveal processing can be facilitated by parafoveal repetition. We present a simple model framework that can account for these effects. |
Ulrich Ansorge; Heinz-Werner Priess; Dirk Kerzel Effects of relevant and irrelevant color singletons on inhibition of return and attentional capture Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1687–1702, 2013. @article{Ansorge2013, We tested whether color singletons lead to saccadic and manual inhibition of return (IOR; i.e., slower responses at cued locations) and whether IOR depended on the relevance of the color singletons. The target display was preceded by a nonpredictive cue display. In three experiments, half of the cues were response-relevant, because participants had to perform a discrimination task at the cued location. With the exception of Experiment 2, none of the cue colors matched the target color. We observed saccadic IOR after color singletons, which was greater for slow than for fast responses. Furthermore, when the relevant cue color matched the target color, we observed attentional capture (i.e., faster responses at cued locations) with rapid responses, but IOR with slower responses, which provides evidence for attentional deallocation. When the cue display was completely response-irrelevant in two additional experiments, we did not find evidence for IOR. Instead, we found attentional capture when the cue color matched the target color. Also, attentional capture was greater for rapid responses and with short cue-target intervals. Thus, IOR emerges when cues are relevant and do not match the target color, whereas attentional capture emerges with relevant and irrelevant cues that match the target color. |
Katharina Anton-Erxleben; Katrin Herrmann; Marisa Carrasco Independent Effects of Adaptation and Attention on Perceived Speed Journal Article In: Psychological Science, vol. 24, no. 2, pp. 150–159, 2013. @article{AntonErxleben2013, Adaptation and attention are two mechanisms by which sensory systems manage limited bioenergetic resources: Whereas adaptation decreases sensitivity to stimuli just encountered, attention increases sensitivity to behaviorally relevant stimuli. In the visual system, these changes in sensitivity are accompanied by a change in the appearance of different stimulus dimensions, such as speed. Adaptation causes an underestimation of speed, whereas attention leads to an overestimation of speed. In the two experiments reported here, we investigated whether the effects of these mechanisms interact and how they affect the appearance of stimulus features. We tested the effects of adaptation and the subsequent allocation of attention on perceived speed. A quickly moving adaptor decreased the perceived speed of subsequent stimuli, whereas a slow adaptor did not alter perceived speed. Attention increased perceived speed regardless of the adaptation effect, which indicates that adaptation and attention affect perceived speed independently. Moreover, the finding that attention can alter perceived speed after adaptation indicates that adaptation is not merely a by-product of neuronal fatigue. |
Kenn Apel; Danielle Brimo; Elizabeth B. Wilson-Fowler; Christian Vorstius; Ralph Radach Children develop initial orthographic knowledge during storybook reading Journal Article In: Scientific Studies of Reading, vol. 17, no. 4, pp. 286–302, 2013. @article{Apel2013, We examined whether young children acquire orthographic knowledge during structured adult-led storybook reading even though minimal viewing time is devoted to print. Sixty-two kindergarten children were read 12 storybook "chapters" while their eye movements were tracked. Results indicated that the children quickly acquired initial mental graphemic representations of target nonwords. This learning occurred even though they focused on the target nonwords approximately one fourth of the total time while viewing the pages. Their ability to acquire the initial orthographic representations of the target nonwords and their viewing time was affected by the linguistic statistical regularities of the words. The results provide evidence of orthographic learning during structured storybook reading and for the use of implicit linguistic statistical regularities for learning new orthographic word forms in the early stages of reading development. |
Manabu Arai; Frank Keller The use of verb-specific information for prediction in sentence processing Journal Article In: Language and Cognitive Processes, vol. 28, no. 4, pp. 525–560, 2013. @article{Arai2013, Recent research has shown that language comprehenders make predictions about upcoming linguistic information. These studies demonstrate that the processor not only analyses the input that it received but also predicts upcoming unseen elements. Two visual world experiments were conducted to examine the type of syntactic information this prediction process has access to. Experiment 1 examined whether the verb's subcategorization information is used for predicting a direct object, by comparing transitive verbs (e.g., punish) to intransitive verbs (e.g., disagree). Experiment 2 examined whether verb frequency information is used for predicting a reduced relative clause by contrasting verbs that are infrequent in the past participle form (e.g., watch) with ones that are frequent in that form (e.g., record). Both experiments showed that comprehenders used lexically specific syntactic information to predict upcoming syntactic structure; this information can be used to avoid garden paths in certain cases, as Experiment 2 demonstrated. |
T. C. Blanchard; John M. Pearson; Benjamin Y. Hayden Postreward delays and systematic biases in measures of animal temporal discounting Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 38, pp. 15491–15496, 2013. @article{Blanchard2013, Intertemporal choice tasks, which pit smaller/sooner rewards against larger/later ones, are frequently used to study time preferences and, by extension, impulsivity and self-control. When used in animals, many trials are strung together in sequence and an adjusting buffer is added after the smaller/sooner option to hold the total duration of each trial constant. Choices of the smaller/sooner option are not reward maximizing and so are taken to indicate that the animal is discounting future rewards. However, if animals fail to correctly factor in the duration of the postreward buffers, putative discounting behavior may instead reflect constrained reward maximization. Here, we report three results consistent with this discounting-free hypothesis. We find that (i) monkeys are insensitive to the association between the duration of postreward delays and their choices; (ii) they are sensitive to the length of postreward delays, although they greatly underestimate them; and (iii) increasing the salience of the postreward delay biases monkeys toward the larger/later option, reducing measured discounting rates. These results are incompatible with standard discounting-based accounts but are compatible with an alternative heuristic model. Our data suggest that measured intertemporal preferences in animals may not reflect impulsivity, or even mental discounting of future options, and that standard human and animal intertemporal choice tasks measure unrelated mental processes. |
A. R. Bogadhi; Anna Montagnini; Guillaume S. Masson Dynamic interaction between retinal and extraretinal signals in motion integration for smooth pursuit Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 1–26, 2013. @article{Bogadhi2013, Due to the aperture problem, the initial direction of tracking responses to a translating bar is biased towards the direction orthogonal to the bar. This observation offers a powerful way to explore the interactions between retinal and extraretinal signals in controlling our actions. We conducted two experiments to probe these interactions by briefly (200 and 400 ms) blanking the moving target (45° or 135° tilted bar) during steady state (Experiment 1) and at different moments during the early phase of pursuit (Experiment 2). In Experiment 1, we found a marginal but statistically significant directional bias on target reappearance for all subjects in at least one blank condition (200 or 400 ms). In Experiment 2, no systematic significant directional bias was observed at target reappearance after a blank. These results suggest that the weighting of retinal and extraretinal signals is dynamically modulated during the different phases of pursuit. Based on our previous theoretical work on motion integration, we propose a new closed-loop two-stage recurrent Bayesian model where retinal and extraretinal signals are dynamically weighted based on their respective reliabilities and combined to compute the visuomotor drive. With a single free parameter, the model reproduces many aspects of smooth pursuit observed across subjects during and immediately after target blanking. It provides a new theoretical framework to understand how different signals are dynamically combined based on their relative reliability to adaptively control our actions. Overall, the model and behavioral results suggest that human subjects rely more strongly on prediction during the early phase than in the steady state phase of pursuit. |
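The core principle this abstract invokes, weighting each signal by its reliability, is standard Bayesian cue combination: the optimal linear combination weights each estimate by its inverse variance. The sketch below illustrates that general principle only; it is not the authors' recurrent two-stage model, and the direction values and variances are made-up numbers for illustration.

```python
def reliability_weighted_estimate(retinal, retinal_var, extraretinal, extraretinal_var):
    """Combine two direction estimates (in degrees), weighting each by its
    reliability, i.e., its inverse variance. A noisier signal gets less weight."""
    w_r = 1.0 / retinal_var
    w_e = 1.0 / extraretinal_var
    return (w_r * retinal + w_e * extraretinal) / (w_r + w_e)

# A noisy retinal direction estimate (90 deg, variance 16) combined with a more
# reliable extraretinal prediction (45 deg, variance 4) is pulled towards the latter.
est = reliability_weighted_estimate(90.0, 16.0, 45.0, 4.0)
```

In the paper's closed-loop setting these variances would themselves evolve over the course of pursuit, which is what shifts the balance between retinal evidence and prediction.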
B. Bonev; Lewis L. Chuang; F. Escolano How do image complexity, task demands and looking biases influence human gaze behavior? Journal Article In: Pattern Recognition Letters, vol. 34, no. 7, pp. 723–730, 2013. @article{Bonev2013, In this paper we propose an information-theoretic approach to understand eye-movement patterns, in relation to the task performed and image complexity. We commence with the analysis of the distributions and amplitudes of eye-movement saccades, performed across two different image-viewing tasks: free viewing and visual search. Our working hypothesis is that the complexity of image information and task demands should interact. This should be reflected in the Markovian pattern of short and long saccades. We compute high-order Markovian models of performing a large saccade after many short ones and also propose a novel method for quantifying image complexity. The analysis of the interaction between high-order Markovianity, task and image complexity supports our hypothesis. |
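A first-order version of the Markovian saccade analysis described above can be sketched by classifying each saccade as short or long relative to an amplitude threshold and counting state transitions. This is a simplified illustration (the paper fits high-order models); the 4-degree threshold and the toy amplitude sequence are assumptions.

```python
import numpy as np

def saccade_transition_matrix(amplitudes, threshold):
    """First-order Markov transition matrix over saccade-amplitude classes.

    States: 0 = short (< threshold), 1 = long (>= threshold).
    Entry (a, b) is the estimated probability of a class-b saccade following a class-a one.
    """
    states = (np.asarray(amplitudes) >= threshold).astype(int)
    counts = np.zeros((2, 2))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    # Row-normalise counts into probabilities; rows with no observations stay zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy saccade amplitudes in degrees of visual angle: runs of short saccades
# punctuated by occasional long ones.
amps = [1.2, 0.8, 6.5, 1.0, 0.9, 7.2]
P = saccade_transition_matrix(amps, threshold=4.0)
```

A high-order analysis, as in the paper, would instead condition the probability of a long saccade on the preceding run of several short ones.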
Robert W. Booth; Ulrich W. Weger The function of regressions in reading: Backward eye movements allow rereading Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 82–97, 2013. @article{Booth2013, Standard text reading involves frequent eye movements that go against normal reading order. The function of these "regressions" is still largely unknown. The most obvious explanation is that regressions allow for the rereading of previously fixated words. Alternatively, physically returning the eyes to a word's location could cue the reader's memory for that word, effectively aiding the comprehension process via location priming (the "deictic pointer hypothesis"). In Experiment 1, regression frequency was reduced when readers knew that information was no longer available for rereading. In Experiment 2, readers listened to auditorily presented text while moving their eyes across visual placeholders on the screen. Here, rereading was impossible, but deictic pointers remained available, yet the readers did not make targeted regressions in this experiment. In Experiment 3, target words in normal sentences were changed after reading. Where the eyes later regressed to these words, participants generally remained unaware of the change, and their answers to comprehension questions indicated that the new meaning of the changed word was what determined their sentence representations. These results suggest that readers use regressions to reread words and not to cue their memory for previously read words. |