All EyeLink Publications
All 11,000+ peer-reviewed EyeLink research publications up to 2022 (with some early 2023 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc., or search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Eiling Yee; Sheila E. Blumstein; Julie C. Sedivy
In: Journal of Cognitive Neuroscience, vol. 20, no. 4, pp. 592–612, 2008.
Lexical processing requires both activating stored representations, and selecting among active candidates. The current work uses an eye-tracking paradigm to conduct a detailed temporal investigation of lexical processing. Patients with Broca's and Wernicke's aphasia are studied to shed light on the roles of anterior and posterior brain regions in lexical processing as well as the effects of lexical competition on such processing. Experiment 1 investigates whether objects semantically related to an uttered word are preferentially fixated, e.g., given the auditory target 'hammer', do participants fixate a picture of a nail? Results show that, like normals, both groups of patients are more likely to fixate on an object semantically related to the target than an unrelated object. Experiment 2 explores whether Broca's and Wernicke's aphasics show competition effects when words share onsets with the uttered word, e.g., given the auditory target 'hammer', do participants fixate a picture of a hammock? Experiment 3 investigates whether these patients activate words semantically related to onset competitors of the uttered word, e.g., given the auditory target 'hammock' do participants fixate a nail due to partial activation of the onset competitor hammer? Results of Experiments 2 and 3 show pathological patterns of performance for both Broca's and Wernicke's aphasics under conditions of lexical onset competition. However, the patterns of deficit differed, suggesting different functional and computational roles for anterior and posterior areas in lexical processing. Implications of the findings for the functional architecture of the lexical processing system and its potential neural substrates are considered.
Miao-Hsuan Yen; Jie-Li Tsai; Ovid J. L. Tzeng; Daisy L. Hung
Eye movements and parafoveal word processing in reading Chinese
In: Memory and Cognition, vol. 36, no. 5, pp. 1033–1045, 2008.
In two experiments, a parafoveal lexicality effect in the reading of Chinese (a script that does not physically mark word boundaries) was examined. Both experiments used the boundary paradigm (Rayner, 1975) and indicated that the lexical properties of parafoveal words influenced eye movements. In Experiment 1, the preview stimulus was either a real word or a pseudoword. Targets with word previews, even unrelated ones, were more likely to be skipped than were those with pseudowords. In Experiment 2, all of the preview stimuli had the same first character as the target. Target words with same-morpheme previews were fixated for less time than were those with pseudoword previews, suggesting that morphological processing may be involved in extracting information from parafoveal words in Chinese reading. Together, the two experiments dealing with how words are processed in Chinese may provide some constraints on current computational models of reading.
Shlomit Yuval-Greenberg; Orr Tomer; Alon S. Keren; Israel Nelken; Leon Y. Deouell
In: Neuron, vol. 58, no. 3, pp. 429–441, 2008.
The induced gamma-band EEG response (iGBR) recorded on the scalp is widely assumed to reflect synchronous neural oscillation associated with object representation, attention, memory, and consciousness. The most commonly reported EEG iGBR is a broadband transient increase in power at the gamma range ∼200-300 ms following stimulus onset. A conspicuous feature of this iGBR is the trial-to-trial poststimulus latency variability, which has been insufficiently addressed. Here, we show, using single-trial analysis of concomitant EEG and eye tracking, that this iGBR is tightly time locked to the onset of involuntary miniature eye movements and reflects a saccadic "spike potential." The time course of the iGBR is related to an increase in the rate of saccades following a period of poststimulus saccadic inhibition. Thus, whereas neuronal gamma-band oscillations were shown conclusively with other methods, the broadband transient iGBR recorded by scalp EEG reflects properties of miniature saccade dynamics rather than neuronal oscillations.
Denise H. Wu; Anne Morganti; Anjan Chatterjee
In: Neuropsychologia, vol. 46, no. 2, pp. 704–713, 2008.
Languages consistently distinguish the path and the manner of a moving event in different constituents, even if the specific constituents themselves vary across languages. Children also learn to categorize moving events according to their path and manner at different ages. Motivated by these linguistic and developmental observations, we employed fMRI to test the hypothesis that perception of and attention to path and manner of motion is segregated neurally. Moreover, we hypothesize that such segregation respects the "dorsal-where and ventral-what" organizational principle of vision. Consistent with this proposal, we found that attention to the path of a moving event was associated with greater activity within bilateral inferior/superior parietal lobules and the frontal eye-field, while attention to manner was associated with greater activity within bilateral postero-lateral inferior/middle temporal regions. Our data provide evidence that motion perception, traditionally considered as a dorsal "where" visual attribute, further segregates into dorsal path and ventral manner attributes. This neural segregation of the components of motion, which are linguistically tagged, points to a perceptual counterpart of the functional organization of concepts and language.
Lu Qi Xiao; Jun-Yun Zhang; Rui Wang; Stanley A. Klein; Dennis M. Levi; Cong Yu
In: Current Biology, vol. 18, no. 24, pp. 1922–1926, 2008.
Practice improves discrimination of many basic visual features, such as contrast, orientation, and positional offset [1-7]. Perceptual learning of many of these tasks is found to be retinal location specific, in that learning transfers little to an untrained retinal location [1, 6-8]. In most perceptual learning models, this location specificity is interpreted as a pointer to a retinotopic early visual cortical locus of learning [1, 6-11]. Alternatively, an untested hypothesis is that learning could occur in a central site, but it consists of two separate aspects: learning to discriminate a specific stimulus feature ("feature learning"), and learning to deal with stimulus-nonspecific factors like local noise at the stimulus location ("location learning"). Therefore, learning is not transferable to a new location that has never been location trained. To test this hypothesis, we developed a novel double-training paradigm that employed conventional feature training (e.g., contrast) at one location, and additional training with an irrelevant feature/task (e.g., orientation) at a second location, either simultaneously or at a different time. Our results showed that this additional location training enabled a complete transfer of feature learning (e.g., contrast) to the second location. This finding challenges location specificity and its inferred cortical retinotopy as central concepts to many perceptual-learning models and suggests that perceptual learning involves higher nonretinotopic brain areas that enable location transfer.
Gregory J. Zelinsky
A theory of eye movements during target acquisition
In: Psychological Review, vol. 115, pp. 787–835, 2008.
The gaze movements accompanying target localization were examined via human observers and a computational model (Target Acquisition Model, TAM). Search contexts ranged from fully realistic scenes, to toys in a crib, to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model always previewed the same targets and searched the identical displays. Behavioral and simulated eye movements were analyzed for acquisition accuracy, efficiency, and target guidance. TAM's behavior generally fell within the behavioral mean's 95% confidence interval for all measures in each experiment/condition. This agreement suggests that a fixed-parameter model using spatio-chromatic filters and a simulated retina, when driven by the correct visual routines, can be a good general purpose predictor of human target acquisition behavior.
Gregory J. Zelinsky; Mark B. Neider
In: Visual Cognition, vol. 16, no. 5, pp. 553–566, 2008.
To study multiple object tracking under naturalistic conditions, observers tracked 1–4 sharks (9 in total) swimming throughout an underwater scene. Accuracy was high in the Track 1–3 conditions (>92%), but declined when tracking 4 targets (78%). Gaze analyses revealed a dependency between tracking strategy and target number. Observers tracking 2 targets kept their gaze on the target centroid rather than individual objects; observers tracking 4 targets switched their gaze back-and-forth between sharks. Using an oculomotor method for identifying targets lost during tracking, we confirmed that this strategy shift was real and not an artifact of centroid definition. Moreover, we found that tracking errors increased with gaze time on targets, and decreased with time spent looking at the centroid. Depending on tracking load, both centroid and target-switching strategies are used, with accuracy improving with reliance on centroid tracking. An index juggling hypothesis is advanced to explain the suboptimal tendency to fixate tracked objects.
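The centroid strategy described above reduces to following the mean of the tracked target positions. As a minimal sketch (the positions, names, and units here are hypothetical illustrations, not data from the study), the gaze-to-centroid distance could be computed as:

```python
import math

def centroid(points):
    """Mean position of the tracked targets."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def dist(a, b):
    """Euclidean distance between two 2-D positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical positions (degrees of visual angle) of four tracked sharks
targets = [(1.0, 2.0), (3.0, 4.0), (5.0, 2.0), (3.0, 0.0)]
gaze = (3.0, 2.0)

c = centroid(targets)   # (3.0, 2.0)
print(dist(gaze, c))    # 0.0 -> gaze is exactly on the centroid
```

Comparing this distance against the distance to the nearest individual target, sample by sample, is one simple way to classify gaze samples as centroid-looking versus target-switching.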
Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor
In: Journal of Vision, vol. 8, no. 14, article 22, pp. 1–18, 2008.
Visual directions of foveal targets flashed just prior to the onset of a saccade are misperceived as shifted in the direction of the eye movement. We examined the effects of luminance level and temporal interactions on the amplitude of these perisaccadic spatial distortions (PSDs). PSDs were larger for both single and sequentially double-flashed stimuli with low than high luminance levels, and there was a reduction of PSDs for low luminance targets flashed immediately before the saccade. Significant temporal interactions were suggested by PSDs for a pair of sequentially presented flashes (ISI = 50 ms) that could not be predicted from the single-flash distortions: PSD increased for the first flash and decreased for the second compared to the single-flash distortions. We also found that when the flash pair was presented near saccade onset, the perceived distortion of the earlier flash overtook that of the later flash, even though the late flash occurred closer in time to the saccade. To explain these effects, we propose that stimulus-dependent nonlinearities (contrast gain control and saccadic suppression) influence the duration of the temporal impulse response of both single- and double-flashed stimuli.
Claudia Wilimzig; Naotsugu Tsuchiya; Manfred Fahle; Wolfgang Einhäuser; Christof Koch
In: Journal of Vision, vol. 8, no. 5, pp. 7, 2008.
Selective attention to a target yields faster and more accurate responses. Faster response times, in turn, are usually associated with increased subjective confidence. Could the decrease in reaction time in the presence of attention therefore simply reflect a shift toward more confident responses? We here addressed the extent to which attention modulates accuracy, processing speed, and confidence independently. To probe the effect of spatial attention on performance, we used two attentional manipulations of a visual orientation discrimination task. We demonstrate that spatial attention significantly increases accuracy, whereas subjective confidence measures reveal overconfidence in non-attended stimuli. At constant confidence levels, reaction times showed a significant decrease (by 15-49%, corresponding to 100-250 ms). This dissociation of objective performance and subjective confidence suggests that attention and awareness, as measured by confidence, are distinct, albeit related, phenomena.
Amanda H. Wilson; Adam Wilson; Martin W. Hove; Martin Paré; Kevin G. Munhall
Loss of central vision and audiovisual speech perception
In: Visual Impairment Research, vol. 10, no. 1, pp. 23–34, 2008.
Communication impairments pose a major threat to an individual's quality of life. However, the impact of visual impairments on communication is not well understood, despite the important role that vision plays in the perception of speech. Here we present 2 experiments examining the impact of discrete central scotomas on speech perception. In the first experiment, 4 patients with central vision loss due to unilateral macular holes identified utterances with conflicting auditory-visual information, while simultaneously having their eye movements recorded. Each eye was tested individually. Three participants showed similar speech perception with both the impaired eye and the unaffected eye. For 1 participant, speech perception was disrupted by the scotoma because the participant did not shift gaze to avoid obscuring the talker's mouth with the scotoma. In the second experiment, 12 undergraduate students with gaze-contingent artificial scotomas (10 visual degrees in diameter) identified sentences in background noise. These larger scotomas disrupted speech perception, but some participants overcame this by adopting a gaze strategy whereby they shifted gaze to prevent obscuring important regions of the face such as the mouth. Participants who did not spontaneously adopt an adaptive gaze strategy did not learn to do so over the course of 5 days; however, participants who began with adaptive gaze strategies became more consistent in their gaze location. These findings confirm that peripheral vision is sufficient for perception of most visual information in speech, and suggest that training in gaze strategy may be worthwhile for individuals with communication deficits due to visual impairments.
D. A. Wismeijer; Raymond Van Ee; Casper J. Erkelens
Depth cues, rather than perceived depth, govern vergence
In: Experimental Brain Research, vol. 184, no. 1, pp. 61–70, 2008.
We studied the influence of perceived surface orientation on vergence accompanying a saccade while viewing an ambiguous stimulus. We used the slant rivalry stimulus, in which perspective foreshortening and disparity specified opposite surface orientations. This rivalrous configuration induces alternations of perceived surface orientation, while the slant cues remain constant. Subjects were able to voluntarily control their perceptual state while viewing the ambiguous stimulus. They were asked to make a saccade across the perceived slanted surface. Our data show that vergence responses closely approximated the vergence response predicted by the disparity cue, irrespective of voluntarily controlled perceived orientation. However, comparing the data obtained while viewing the ambiguous stimulus with data from an unambiguous stimulus condition (when disparity and perspective specified similar surface orientations) revealed an effect of perspective cues on vergence. Collectively our results show that depth cues rather than perceived depth govern vergence.
M. Wittenberg; Frank Bremmer; T. Wachtler
Perceptual evidence for saccadic updating of color stimuli
In: Journal of Vision, vol. 8, no. 14, pp. 1–9, 2008.
In retinotopically organized areas of the macaque visual cortex, neurons have been found that shift their receptive fields before a saccade to their postsaccadic position. This saccadic remapping has been interpreted as a mechanism contributing to perceptual stability of space across eye movements. So far, there is only limited evidence for similar mechanisms that support perceptual stability of visual objects by remapping the representation of object features across saccades. In our present study, we investigated whether color stimuli presented before a saccade affected the perception of color stimuli at the same spatial position after the saccade. We found that the perceived hue of a postsaccadically flashed stimulus was systematically shifted toward the color of a presaccadically presented stimulus. This finding would be in accordance with a saccadic remapping process that preactivates, prior to a saccade, the neurons that represent a stimulus after the saccade at this very location. Such a remapping of visual object features could contribute to the stable perception of the visual world across saccades.
Elizabeth Wonnacott; Elissa L. Newport; Michael K. Tanenhaus
In: Cognitive Psychology, vol. 56, no. 3, pp. 165–209, 2008.
Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.
Ian Cunnings; Harald Clahsen
In: The Mental Lexicon, vol. 3, no. 2, pp. 149–175, 2008.
The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints that are responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during reading derived words containing regular and irregular plurals and uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties, but is instead morphological in nature. The eye-movement data provide detailed information on the time-course of processing derived word forms indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a subsequent later stage of processing. We argue that these results are consistent with stage-based models of language processing.
Delphine Dahan; Sarah J. Drucker; Rebecca A. Scarborough
In: Cognition, vol. 108, no. 3, pp. 710–718, 2008.
Past research has established that listeners can accommodate a wide range of talkers in understanding language. How this adjustment operates, however, is a matter of debate. Here, listeners were exposed to spoken words from a speaker of an American English dialect in which the vowel /æ/ is raised before /g/, but not before /k/. Results from two experiments showed that listeners' identification of /k/-final words like back (which are unaffected by the dialect) was facilitated by prior exposure to their dialect-affected /g/-final counterparts, e.g., bag. This facilitation occurred because the competition between interpretations, e.g., bag or back, while hearing the initial portion of the input [bæ], was mitigated by the reduced probability for the input to correspond to bag as produced by this talker. Thus, adaptation to an accent is not just a matter of adjusting the speech signal as it is being heard; adaptation involves dynamic adjustment of the representations stored in the lexicon, according to the characteristics of the speaker or the context.
Stephen V. David; Benjamin Y. Hayden; James A. Mazer; Jack L. Gallant
In: Neuron, vol. 59, no. 3, pp. 509–521, 2008.
Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.
Scott L. Davis; Teresa C. Frohman; C. J. Crandall; M. J. Brown; D. A. Mills; Phillip D. Kramer; O. Stuve; Elliot M. Frohman
In: Neurology, vol. 70, pp. 1098–1106, 2008.
Objective: The goal of this investigation was to demonstrate that internuclear ophthalmoparesis (INO) can be utilized to model the effects of body temperature-induced changes on the fidelity of axonal conduction in multiple sclerosis (Uhthoff's phenomenon). Methods: Ocular motor function was measured using infrared oculography at 10-minute intervals in patients with multiple sclerosis (MS) with INO (MS-INO; n=8), patients with MS without INO (MS-CON; n=8), and matched healthy controls (CON; n=8) at normothermic baseline, during whole-body heating (increase in core temperature 0.8°C as measured by an ingestible temperature probe and transabdominal telemetry), and after whole-body cooling. The versional disconjugacy index (velocity-VDI), the ratio of abducting/adducting eye movements for velocity, was calculated to assess changes in interocular disconjugacy. The first pass amplitude (FPA), the position of the adducting eye when the abducting eye achieves a centrifugal fixation target, was also computed. Results: Velocity-VDI and FPA in MS-INO patients were elevated (p<0.001) following whole-body heating with respect to baseline measures, confirming a compromise in axonal electrical impulse transmission properties. Velocity-VDI and FPA in MS-INO patients were then restored to baseline values following whole-body cooling, confirming the reversible and stereotyped nature of this characteristic feature of demyelination. Conclusions: We have developed a neurophysiologic model for objectively understanding temperature-related reversible changes in axonal conduction in multiple sclerosis. Our observations corroborate the hypothesis that changes in core body temperature (heating and cooling) are associated with stereotypic decay and restoration in axonal conduction mechanisms.
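The versional disconjugacy index defined in the Methods is a simple ratio of abducting to adducting eye velocity. A minimal sketch, with hypothetical peak velocities (the function name and values are illustrative, not taken from the paper):

```python
def velocity_vdi(abducting_peak_velocity, adducting_peak_velocity):
    """Versional disconjugacy index (velocity-VDI): ratio of abducting
    to adducting peak saccadic velocity. Values near 1.0 indicate
    conjugate eye movements; larger values indicate adduction slowing,
    as seen in internuclear ophthalmoparesis (INO)."""
    return abducting_peak_velocity / adducting_peak_velocity

# Hypothetical peak velocities in deg/s
print(velocity_vdi(400.0, 250.0))  # 1.6 -> disconjugate, INO-like
print(velocity_vdi(300.0, 300.0))  # 1.0 -> conjugate
```

In the study's design, an increase of this ratio after heating and its return to baseline after cooling would index the reversible conduction change.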
Jan Churan; Farhan A. Khawaja; James M. G. Tsui; Christopher C. Pack
In: Current Biology, vol. 18, no. 22, pp. 1–6, 2008.
Intuitively one might think that larger objects should be easier to see, and indeed performance on visual tasks generally improves with increasing stimulus size [1,2]. Recently, a remarkable exception to this rule was reported: when a high-contrast, moving stimulus is presented very briefly, motion perception deteriorates as stimulus size increases. This psychophysical surround suppression has been interpreted as a correlate of the neuronal surround suppression that is commonly found in the visual cortex [3-5]. However, many visual cortical neurons lack surround suppression, and so one might expect that the brain would simply use their outputs to discriminate the motion of large stimuli. Indeed previous work has generally found that observers rely on whichever neurons are most informative about the stimulus to perform similar psychophysical tasks. Here we show that the responses of neurons in the middle temporal (MT) area of macaque monkeys provide a simple resolution to this paradox. We find that surround-suppressed MT neurons integrate motion signals relatively quickly, so that by comparison non-suppressed neurons respond poorly to brief stimuli. Thus, psychophysical surround suppression for brief stimuli can be viewed as a consequence of a strategy that weights neuronal responses according to how informative they are about a given stimulus. If this interpretation is correct, then it follows that any psychophysical experiment that uses brief motion stimuli will effectively probe the responses of MT neurons that have strong surround suppression.
Meghan Clayards; Michael K. Tanenhaus; Richard N. Aslin; Robert A. Jacobs
In: Cognition, vol. 108, no. 3, pp. 804–809, 2008.
Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions.
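The posterior probability of a word given a VOT value follows from Bayes' rule over the two category likelihoods. A minimal sketch assuming equal-variance Gaussian VOT distributions (the category means, standard deviation, and priors below are illustrative choices, not the experiment's values):

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_b(vot, mean_b=0.0, mean_p=50.0, sd=8.0, prior_b=0.5):
    """P(/b/-initial word | VOT) under Bayes' rule with Gaussian
    likelihoods for the /b/ and /p/ categories (e.g., 'beach' vs. 'peach')."""
    num_b = gaussian_pdf(vot, mean_b, sd) * prior_b
    num_p = gaussian_pdf(vot, mean_p, sd) * (1 - prior_b)
    return num_b / (num_b + num_p)

print(posterior_b(25.0))  # 0.5 at the category boundary
print(posterior_b(10.0))  # near 1.0: clearly /b/-like
```

Note that widening `sd` (the wide-variance condition) flattens this posterior around the boundary, which is the graded uncertainty the fixation measures tracked.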
Thérèse Collins; Tobias Schicke; Brigitte Röder
In: Cognition, vol. 109, no. 3, pp. 363–371, 2008.
The preparation of eye or hand movements enhances visual perception at the upcoming movement end position. The spatial location of this influence of action on perception could be determined either by goal selection or by motor planning. We employed a tool use task to dissociate these two alternatives. The instructed goal location was a visual target to which participants pointed with the tip of a triangular hand-held tool. The motor endpoint was defined by the final fingertip position necessary to bring the tool tip onto the goal. We tested perceptual performance at both locations (tool tip endpoint, motor endpoint) with a visual discrimination task. Discrimination performance was enhanced in parallel at both spatial locations, but not at nearby and intermediate locations, suggesting that both action goal selection and motor planning contribute to visual perception. In addition, our results challenge the widely held view that tools extend the body schema and suggest instead that tool use enhances perception at those precise locations which are most relevant during tool action: the body part used to manipulate the tool, and the active tool tip.
Lillian Chen; Julie E. Boland
In: Memory and Cognition, vol. 36, no. 7, pp. 1306–1323, 2008.
Two eyetracking-during-listening experiments showed frequency and context effects on fixation probability for pictures representing multiple meanings of homophones. Participants heard either an imperative sentence instructing them to look at a homophone referent (Experiment 1) or a declarative sentence that was either neutral or biased toward the homophone's subordinate meaning (Experiment 2). At homophone onset in both experiments, the participants viewed four pictures: (1) a referent of one homophone meaning, (2) a shape competitor for a nonpictured homophone meaning, and (3) two unrelated filler objects. In Experiment 1, meaning dominance affected looks to both the homophone referent and the shape competitor. In Experiment 2, as compared with neutral contexts, subordinate-biased contexts lowered the fixation probability for shape competitors of dominant meanings, but shape competitors still attracted more looks than would be expected by chance. We discuss the consistencies and discrepancies of these findings with the selective access and reordered access theories of lexical ambiguity resolution.
R. Contreras; Rachel Kolster; Henning U. Voss; Jamshid Ghajar; M. Suh; S. Bahar
In: Journal of Biological Physics, vol. 34, no. 3-4, pp. 381–392, 2008.
Eye-target synchronization is critical for effective smooth pursuit of a moving visual target. We apply the nonlinear dynamical technique of stochastic-phase synchronization to human visual pursuit of a moving target, in both normal and mild traumatic brain-injured (mTBI) patients. We observe significant fatigue effects in all subject populations, in which subjects synchronize better with the target during the first half of the trial than in the second half. The fatigue effect differed, however, between the normal and the mTBI populations and between old and young subpopulations of each group. In some cases, the younger (≤40 years old) normal subjects performed better than mTBI subjects and also better than older (>40 years old) normal subjects. Our results, however, suggest that further studies will be necessary before a standard of "normal" smooth pursuit synchronization can be developed.
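Phase synchronization between eye and target can be summarized by a phase-locking value: the magnitude of the circular mean of the phase differences. A minimal sketch with synthetic phase series (this does not reproduce the study's actual analysis pipeline):

```python
import cmath
import math

def sync_index(phase_eye, phase_target):
    """Phase-locking value: |mean of exp(i * delta_phi)| over samples.
    1.0 = a fixed eye-target phase relationship (perfect locking),
    0.0 = phase differences spread uniformly (no locking)."""
    n = len(phase_eye)
    total = sum(cmath.exp(1j * (pe - pt))
                for pe, pt in zip(phase_eye, phase_target))
    return abs(total / n)

# Synthetic target phase ramp (radians); eye lags by a constant 0.3 rad
target = [2 * math.pi * 0.4 * t / 100 for t in range(100)]
eye = [p - 0.3 for p in target]
print(sync_index(eye, target))  # ~1.0: constant lag is perfect locking
```

Computing this index separately over the first and second halves of a trial is one straightforward way to quantify the fatigue effect the abstract describes.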
David D. Cox; Alexander M. Papanastassiou; Daniel Oreper; Benjamin B. Andken; James J. DiCarlo
In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2966–2976, 2008.
Much of our knowledge of brain function has been gleaned from studies using microelectrodes to characterize the response properties of individual neurons in vivo. However, because it is difficult to accurately determine the location of a microelectrode tip within the brain, it is impossible to systematically map the fine three-dimensional spatial organization of many brain areas, especially in deep structures. Here, we present a practical method based on digital stereo microfocal X-ray imaging that makes it possible to estimate the three-dimensional position of each and every microelectrode recording site in "real time" during experimental sessions. We determined the system's ex vivo localization accuracy to be better than 50 microm, and we show how we have used this method to coregister hundreds of deep-brain microelectrode recordings in monkeys to a common frame of reference with median error of <150 microm. We further show how we can coregister those sites with magnetic resonance images (MRIs), allowing for comparison with anatomy, and laying the groundwork for more detailed electrophysiology/functional MRI comparison. Minimally, this method allows one to marry the single-cell specificity of microelectrode recording with the spatial mapping abilities of imaging techniques; furthermore, it has the potential of yielding fundamentally new kinds of high-resolution maps of brain function.
Matthew T. Crawford; John J. Skowronski; Chris Stiff; Ute Leonards
In: Journal of Experimental Social Psychology, vol. 44, no. 3, pp. 840–847, 2008.
When an informant describes trait-implicative behavior of a target, the informant is often associated with the trait implied by the behavior and can be assigned heightened ratings on that trait (STT effects). Presentation of a target photo along with the description seemingly eliminates these effects. Using three different measures of visual attention, the results of two studies show the elimination of STT effects by target photo presentation cannot be attributed to associative mechanisms linked to enhanced visual attention to targets. Instead, presentation of a target's photo likely prompts perceivers to spontaneously make target inferences in much the same way they make spontaneous inferences about self-describers. As argued by Todorov and Uleman [Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality & Social Psychology, 87, 482-493], such attributional processing can preclude the formation of trait associations to informants.
Sarah C. Creel; Richard N. Aslin; Michael K. Tanenhaus
In: Cognition, vol. 106, no. 2, pp. 633–664, 2008.
Two experiments used the head-mounted eye-tracking methodology to examine the time course of lexical activation in the face of a non-phonemic cue, talker variation. We found that lexical competition was attenuated by consistent talker differences between words that would otherwise be lexical competitors. In Experiment 1, some English cohort word-pairs were consistently spoken by a single talker (male couch, male cows), while other word-pairs were spoken by different talkers (male sheep, female sheet). After repeated instances of talker-word pairings, words from different-talker pairs showed smaller proportions of competitor fixations than words from same-talker pairs. In Experiment 2, participants learned to identify black-and-white shapes from novel labels spoken by one of two talkers. All of the 16 novel labels were VCVCV word-forms atypical of, but not phonologically illegal in, English. Again, a word was consistently spoken by one talker, and its cohort or rhyme competitor was consistently spoken either by that same talker (same-talker competitor) or the other talker (different-talker competitor). Targets with different-talker cohorts received greater fixation proportions than targets with same-talker cohorts, while the reverse was true for fixations to cohort competitors; there were fewer erroneous selections of competitor referents for different-talker competitors than same-talker competitors. Overall, these results support a view of the lexicon in which entries contain extra-phonemic information. Extensions of the artificial lexicon paradigm and developmental implications are discussed. © 2007 Elsevier B.V. All rights reserved.
Michael D. Crossland; Antony B. Morland; Mary P. Feely; Elisabeth Hagen; Gary S. Rubin
In: Investigative Ophthalmology & Visual Science, vol. 49, no. 8, pp. 3734–3739, 2008.
PURPOSE: Functional magnetic resonance imaging (fMRI) experiments determining the retinotopic structure of visual cortex have commonly been performed on young adults, who are assumed to be able to maintain steady fixation throughout the trial duration. The authors quantified the effects of age and fixation stability on the quality of retinotopic maps of primary visual cortex. METHODS: With the use of a 3T fMRI scanner, the authors measured cortical activity in six older and six younger normally sighted participants observing an expanding flickering checkerboard stimulus of 30 degrees diameter. The area of flattened primary visual cortex (V1) showing any blood oxygen level-dependent (BOLD) activity to the visual stimulus and the area responding to the central 3.75 degrees of the stimulus (relating to the central ring of our target) were recorded. Fixation stability was measured while participants observed the same stimuli outside the scanner using an infrared gaze tracker. RESULTS: There were no age-related changes in the area of V1. However, the proportion of V1 active to our visual stimulus was lower for the older observers than for the younger observers (overall activity: 89.8% of V1 area for older observers, 98.6% for younger observers; P <0.05). This effect was more pronounced for the central 3.75 degrees of the target (older subjects, 26.4%; younger subjects, 40.7%; P <0.02). No significant relationship existed between fixation stability and age or the magnitude of activity in the primary visual cortex. CONCLUSIONS: Although the cortical area remains unchanged, healthy older persons show less BOLD activity in V1 than do younger persons. Normal variations in fixation stability do not have a significant effect on the accuracy of experiments to determine the retinotopic structure of the visual cortex.
Maria Nella Carminati; Roger P. G. Gompel; Christoph Scheepers; Manabu Arai
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 5, pp. 1098–1110, 2008.
Two visual-world eye-movement experiments investigated the nature of syntactic priming during comprehension--specifically, whether the priming effects in ditransitive prepositional object (PO) and double object (DO) structures (e.g., "The wizard will send the poison to the prince/the prince the poison?") are due to anticipation of structural properties following the verb (send) in the target sentence or to anticipation of animacy properties of the first postverbal noun. Shortly following the target verb onset, listeners looked at the recipient more (relative to the theme) following DO than PO primes, indicating that the structure of the prime affected listeners' eye gazes on the target scene. Crucially, this priming effect was the same irrespective of whether the postverbal nouns in the prime sentences did ("The monarch will send the painting to the president") or did not ("The monarch will send the envoy to the president") differ in animacy, suggesting that PO/DO priming in comprehension occurs because structural properties, rather than animacy features, are being primed when people process the ditransitive target verb.
Jonathan S. A. Carriere; Daniel Eaton; Michael G. Reynolds; Mike J. Dixon; Daniel Smilek
Grapheme–color synesthesia influences overt visual attention Journal Article
In: Journal of Cognitive Neuroscience, vol. 21, no. 2, pp. 246–258, 2008.
For individuals with grapheme–color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme–color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye tracking analysis showed that synesthetes exhibited a color congruity bias—a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters—in a naturalistic free-viewing task. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia.
Monica S. Castelhano; Alexander Pollatsek; Kyle R. Cave
In: Psychonomic Bulletin & Review, vol. 15, no. 4, pp. 795–801, 2008.
Participants searched for a picture of an object, and the object was either a typical or an atypical category member. The object was cued by either the picture or its basic-level category name. Of greatest interest was whether it would be easier to search for typical objects than to search for atypical objects. The answer was "yes," but only in a qualified sense: There was a large typicality effect on response time only for name cues, and almost none of the effect was found in the time to locate (i.e., first fixate) the target. Instead, typicality influenced verification time, the time to respond to the target once it was fixated. Typicality is thus apparently irrelevant when the target is well specified by a picture cue; even when the target is underspecified (as with a name cue), it does not aid attentional guidance, but only facilitates categorization.
Susan E. Brennan; Xin Chen; Christopher A. Dickinson; Mark B. Neider; Gregory J. Zelinsky
In: Cognition, vol. 106, no. 3, pp. 1465–1477, 2008.
Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one searcher seeing a gaze-cursor indicating where the other was looking, and vice versa), shared-voice (by speaking to each other), and shared-gaze-plus-voice (by using both gaze-cursors and speech). Although collaborating pairs performed better than solitary searchers, search in the shared gaze condition was best of all: twice as fast and efficient as solitary search. People can successfully communicate and coordinate their searching labor using shared gaze alone. Strikingly, shared gaze search was even faster than shared-gaze-plus-voice search; speaking incurred substantial coordination costs. We conclude that shared gaze affords a highly efficient method of coordinating parallel activity in a time-critical spatial task.
Sarah Brown-Schmidt; Christine Gunlogson; Michael K. Tanenhaus
In: Cognition, vol. 107, no. 3, pp. 1122–1134, 2008.
Two experiments examined the role of common ground in the production and on-line interpretation of wh-questions such as What's above the cow with shoes? Experiment 1 examined unscripted conversation, and found that speakers consistently use wh-questions to inquire about information known only to the addressee. Addressees were sensitive to this tendency, and quickly directed attention toward private entities when interpreting these questions. A second experiment replicated the interpretation findings in a more constrained setting. These results add to previous evidence that the common ground influences initial language processes, and suggest that the strength and polarity of common ground effects may depend on contributions of sentence type as well as the interactivity of the situation.
Julie N. Buchan; Martin Paré; Kevin G. Munhall
In: Brain Research, vol. 1242, pp. 162–171, 2008.
During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.
Antimo Buonocore; Robert D. McIntosh
Saccadic inhibition underlies the remote distractor effect Journal Article
In: Experimental Brain Research, vol. 191, no. 1, pp. 117–122, 2008.
The remote distractor effect is a robust finding whereby a saccade to a lateralised visual target is delayed by the simultaneous, or near simultaneous, onset of a distractor in the opposite hemifield. Saccadic inhibition is a more recently discovered phenomenon whereby a transient change to the scene during a visual task induces a depression in saccadic frequency beginning within 70 ms, and maximal around 90-100 ms. We assessed whether saccadic inhibition is responsible for the increase in saccadic latency induced by remote distractors. Participants performed a simple saccadic task in which the delay between target and distractor was varied between 0, 25, 50, 100 and 150 ms. Examination of the distributions of saccadic latencies showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. We conclude that saccadic inhibition underlies the remote distractor effect.
Manuel G. Calvo; Pedro Avero
In: Cognitive, Affective and Behavioral Neuroscience, vol. 8, no. 1, pp. 41–53, 2008.
This study investigated whether stimulus affective content can be extracted from visual scenes when these appear in parafoveal locations of the visual field and are foveally masked, and whether there is lateralization involved. Parafoveal prime pleasant or unpleasant scenes were presented for 150 msec 2.5° away from fixation and were followed by a foveal probe scene that was either congruent or incongruent in emotional valence with the prime. Participants responded whether the probe was emotionally positive or negative. Affective priming was demonstrated by shorter response latencies for congruent than for incongruent prime-probe pairs. This effect occurred when the prime was presented in the left visual field at a 300-msec prime-probe stimulus onset asynchrony, even when the prime and the probe were different in physical appearance and semantic category. This result reveals that the affective significance of emotional stimuli can be assessed early through covert attention mechanisms, in the absence of overt eye fixations on the stimuli, and suggests that right-hemisphere dominance is involved. Copyright 2008 Psychonomic Society, Inc.
Manuel G. Calvo; Michael W. Eysenck
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1669–1686, 2008.
To investigate the processing of emotional words by covert attention, threat-related, positive, and neutral word primes were presented parafoveally (2.2 degrees away from fixation) for 150 ms, under gaze-contingent foveal masking, to prevent eye fixations. The primes were followed by a probe word in a lexical-decision task. In Experiment 1, results showed a parafoveal threat-anxiety superiority: Parafoveal prime threat words facilitated responses to probe threat words for high-anxiety individuals, in comparison with neutral and positive words, and relative to low-anxiety individuals. This reveals an advantage in threat processing by covert attention, without differences in overt attention. However, anxiety was also associated with greater familiarity with threat words, and the parafoveal priming effects were significantly reduced when familiarity was covaried out. To further examine the role of word knowledge, in Experiment 2, vocabulary and word familiarity were equated for low- and high-anxiety groups. In these conditions, the parafoveal threat-anxiety advantage disappeared. This suggests that the enhanced covert-attention effect depends on familiarity with words.
Manuel G. Calvo; Lauri Nummenmaa
In: Journal of Experimental Psychology: General, vol. 137, no. 3, pp. 471–494, 2008.
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection.
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero
In: Experimental Psychology, vol. 55, no. 6, pp. 359–370, 2008.
In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after having fixated the target, which implies that detection of all facial expressions is post- rather than preattentional.
Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä
In: Emotion, vol. 8, no. 1, pp. 68–80, 2008.
Emotional-neutral pairs of visual scenes were presented peripherally (with their inner edges 5.2 degrees away from fixation) as primes for 150 to 900 ms, followed by a centrally presented recognition probe scene, which was either identical in specific content to one of the primes or related in general content and affective valence. Results indicated that (a) if no foveal fixations on the primes were allowed, the false alarm rate for emotional probes was increased; (b) hit rate and sensitivity (A') were higher for emotional than for neutral probes only when a fixation was possible on only one prime; and (c) emotional scenes were more likely to attract the first fixation than neutral scenes. It is concluded that the specific content of emotional or neutral scenes is not processed in peripheral vision. Nevertheless, a coarse impression of emotional scenes may be extracted, which then leads to selective attentional orienting or--in the absence of overt attention--causes false alarms for related probes.
Gideon P. Caplovitz; Nora A. Paymer; Peter U. Tse
In: Vision Research, vol. 48, no. 22, pp. 2403–2414, 2008.
We describe the Drifting Edge Illusion (DEI), in which a stationary edge appears to move when it abuts a drifting grating. Although a single edge is sufficient to perceive DEI, a particularly compelling version of DEI occurs when a drifting grating is viewed through an oriented and stationary aperture. The magnitude of the illusion depends crucially on the orientations of the grating and aperture. Using psychophysics, we describe the relationship between the magnitude of DEI and the relative angle between the grating and aperture. Results are discussed in the context of the roles of occlusion, component-motion, and contour relationships in the interpretation of motion information. In particular, we suggest that the visual system is posed with solving an ambiguity other than the traditionally acknowledged aperture problem of determining the direction of motion of the drifting grating. In this 'second aperture problem' or 'edge problem', a motion signal may belong to either the occluded or occluding contour. That is, the motion along the contour can arise either because the grating is drifting or because the edge is drifting over a stationary grating. DEI appears to result from a misattribution of motion information generated by the drifting grating to the stationary contours of the aperture, as if the edges are interpreted to travel over the grating, although they are in fact stationary.
Eric Matheron; Qing Yang; Thanh Thuan Lê; Zoï Kapoula
In: Neuroscience Letters, vol. 444, no. 2, pp. 176–180, 2008.
This study examined the eye movement responses to vertical disparity induced by a 2-diopter vertical prism, base down, while participants were in a standing position. Vertical vergence movements are known to be small, requiring accurate measurement with the head stabilized, which was done with the EyeLink 2. The 2-diopter vertical prism, base down, was inserted in front of either the non-dominant eye (NDE) or dominant eye (DE) at 40 and 200 cm. The results showed that vertical vergence was stronger and excessive relative to the required value (i.e., 1.14°) when the prism was on the NDE for both distances, but more appropriate when the prism was on the DE. The results suggest that sensory disparity processing and vertical vergence responses are modulated by eye dominance.
Jason S. McCarley; Christopher Grant
In: Psychonomic Bulletin & Review, vol. 15, no. 5, pp. 1008–1014, 2008.
Visual illusions often appear to have a larger influence on subjective judgments than on visuomotor behavior. Although this effect has been taken as evidence for multiple estimates of stimulus size in the visual brain, dissociations between subjective judgments and visuomotor measures can frequently be reconciled with a single-estimate model. To circumvent this difficulty, we used state-trace analysis in a pair of experiments to examine the effects of the Müller-Lyer illusion on subjective length estimates, voluntary saccade amplitudes, and reflexive saccade amplitudes. All dependent measures were affected by the illusion. However, state-trace analyses revealed nonmonotonic relationships among all three variables, a pattern inconsistent with the possibility of a single underlying estimate of stimulus size.
K. -M. Lee; Edward L. Keller
In: Journal of Neuroscience, vol. 28, no. 9, pp. 2242–2251, 2008.
Selection of identical responses may not use the same neural mechanisms when the number of alternatives (NA) for the selection changes, as suggested by Hick's law. For elucidating the choice mechanisms, frontal eye field (FEF) neurons were monitored during a color-to-location choice saccade task as the number of potential targets was varied. Visual responses to alternative targets decreased as NA increased, whereas perisaccade activities increased with NA. These modulations of FEF activities seem closely related to the choice process because the activity enhancements coincided with the timing of target selection, and the neural modulation was greater as NA increased, features expected of neural correlates for a choice process from the perspective of Hick's law. Our current observations suggest two novel notions of FEF neuronal behavior that have not been reported previously: (1) cells called "phasic visual" that do not discharge in the perisaccade interval in a delayed-saccade paradigm show such activity in a choice response task at the time of the saccade; and (2) the activity in FEF visuomotor cells displays an inverse relationship between perisaccadic activity and the time of saccade triggering, with higher levels of activity leading to longer saccade reaction times. These findings support the area's involvement in sensory-motor translation for target selection through coactivation and competitive interaction of neural populations that code for alternative action sets.
Vaia Lestou; Frank E. Pollick; Zoe Kourtzi
In: Journal of Cognitive Neuroscience, vol. 20, no. 2, pp. 324–341, 2008.
Understanding complex movements and abstract action goals is an important skill for our social interactions. Successful social interactions entail understanding of actions at different levels of action description, ranging from detailed movement trajectories that support learning of complex motor skills through imitation to distinct features of actions that allow us to discriminate between action goals and different action styles. Previous studies have implicated premotor, parietal, and superior temporal areas in action understanding. However, the role of these different cortical areas in action understanding at different levels of action description remains largely unknown. We addressed this question using advanced animation and stimulus generation techniques in combination with sensitive functional magnetic resonance imaging adaptation or repetition suppression methods. We tested the neural sensitivity of fronto-parietal and visual areas to differences in the kinematics and goals of actions using kinematic morphs of arm movements. Our findings provide novel evidence for differential involvement of ventral premotor, parietal, and temporal regions in action understanding. We show that the ventral premotor cortex encodes the physical similarity between movement trajectories and action goals that are important for exact copying of actions and the acquisition of complex motor skills. In contrast, parietal regions and the superior temporal sulcus process the perceptual similarity between movements and may support the perception and imitation of abstract action goals and movement styles. Thus, our findings propose that fronto-parietal and visual areas involved in action understanding mediate a cascade of visual-motor processes at different levels of action description, from exact movement copies to abstract action goals achieved with different movement styles.
James S. Magnuson; Michael K. Tanenhaus; Richard N. Aslin
In: Cognition, vol. 108, no. 3, pp. 866–873, 2008.
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar "nouns" and "adjectives" did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration.
George L. Malcolm; Linda J. Lanyon; Andrew J. B. Fugard; Jason J. S. Barton
In: Journal of Vision, vol. 8, no. 8, pp. 1–9, 2008.
Perceptual studies suggest that processing facial identity emphasizes upper-face information, whereas processing expressions of anger or happiness emphasizes the lower-face. The two goals of the present study were to determine (a) if the distributions of eye fixations reflect these upper/lower-face biases, and (b) whether this bias is task- or stimulus-driven. We presented a target face followed by a probe pair of morphed faces, neither of which was identical to the target. Subjects judged which of the pair was more similar to the target face while eye movements were recorded. In Experiment 1 the probe pair always differed from each other in both identity and expression on each trial. In one block subjects judged which probe face was more similar to the target face in identity, and in a second block subjects judged which probe face was more similar to the target face in expression. In Experiment 2 the two probe faces differed in either expression or identity, but not both. Subjects were not informed which dimension differed, but simply asked to judge which probe face was more similar to the target face. We found that subjects scanned the upper-face more than the lower-face during the identity task but the lower-face more than the upper-face during the expression task in Experiment 1 (task-driven effects), with significantly less variation in bias in Experiment 2 (stimulus-driven effects). We conclude that fixations correlate with regional variations of diagnostic information in different processing tasks, but that these reflect top-down task-driven guidance of information acquisition more than stimulus-driven effects.
Andrea E. Martin; Brian McElree
In: Journal of Memory and Language, vol. 58, no. 3, pp. 879–906, 2008.
Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed-accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3-5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.
Xingshan Li; Gordon D. Logan
In: Psychonomic Bulletin & Review, vol. 15, no. 5, pp. 945–949, 2008.
Most object-based attention studies use objects defined bottom-up by Gestalt principles. In the present study, we defined objects top-down, using Chinese words that were seen as objects by skilled readers of Chinese. Using a spatial cuing paradigm, we found that a target character was detected faster if it was in the same word as the cued character than if it was in a different word. Because there were no bottom-up factors that distinguished the words, these results showed that objects defined by subjects' knowledge--in this case, lexical information--can also constrain the deployment of attention.
Angelika Lingnau; Jens Schwarzbach; Dirk Vorberg
Adaptive strategies for reading with a forced retinal location Journal Article
In: Journal of Vision, vol. 8, no. 5, pp. 1–18, 2008.
Forcing normal-sighted participants to use a distinct parafoveal retinal location for reading, we studied which part of the visual field is best suited to take over functions of the fovea during early stages of macular degeneration (MD). A region to the right of fixation led to the best reading performance and most natural gaze behavior, whereas reading performance was severely impaired when a region to the left or below fixation had to be used. An analysis of the underlying oculomotor behavior revealed that practice effects were accompanied by a larger number of saccades in text direction and decreased fixation durations, whereas no adjustment of saccade amplitudes was observed. We provide an explanation for the observed performance differences at different retinal locations based on the interplay of attention and eye movements. Our findings have important implications for the development of training methods for MD patients targeted at reading, suggesting that it would be beneficial for MD patients to use a region to the right of their central scotoma.
Victor Kuperman; Raymond Bertram; R. Harald Baayen
Morphological dynamics in compound processing Journal Article
In: Language and Cognitive Processes, vol. 23, no. 7-8, pp. 1089–1132, 2008.
This paper explores the time-course of morphological processing of trimorphemic Finnish compounds. We find evidence for the parallel access to full forms and morphological constituents diagnosed by the early effects of compound frequency, as well as early effects of left constituent frequency and family size. We also observe an interaction between compound frequency and both the left and the right constituent family sizes. Furthermore, our data show that suffixes embedded in the derived left constituent of a compound are efficiently used for establishing the boundary between compounds' constituents. The success of segmentation of a compound is demonstrably modulated by the affixal salience of the embedded suffixes. We discuss implications of these findings for current models of morphological processing and propose a new model that views morphemes, combinations of morphemes and morphological paradigms as probabilistic sources of information that are interactively used in recognition of complex words.
Jochen Laubrock; Ralf Engbert; Reinhold Kliegl
In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008.
Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency, precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information.
Casimir J. H. Ludwig; Adam Ranson; Iain D. Gilchrist
In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008.
Attentional and oculomotor capture by some salient visual event gives insight into what types of dynamic signals the human orienting system is sensitive to. We examined the sensitivity of the saccadic eye movement system to 4 types of dynamic, but task-irrelevant, visual events: abrupt onset, abrupt offset, motion onset and flicker onset. We varied (1) the primary task (contrast vs. motion discrimination) and (2) the amount of prior knowledge of the location of the dynamic event. Interference from the irrelevant events was quantified using a discrimination threshold metric. When the primary task involved contrast discrimination, all four events disrupted performance approximately equally, including the sudden disappearance of an old object. However, when motion was the task-relevant dimension, abrupt onsets and offsets did not disrupt performance at all, but motion onset had a strong effect. Providing more spatial certainty to observers decreased the amount of direct oculomotor capture but nevertheless impaired performance. We conclude that oculomotor capture is predominantly contingent upon the channel the observer monitors in order to perform the primary visual task.
Amy D. Lykins; Marta Meana; Gregory P. Strauss
In: Archives of Sexual Behavior, vol. 37, no. 2, pp. 219–228, 2008.
It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by both men and women. Men looked at opposite sex figures significantly longer than did women, and women looked at same sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite sex figures as compared to same sex figures, whereas women appeared to disperse their attention evenly between opposite and same sex figures. These differences, however, were not limited to erotic images but evidenced in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence.
Antonio F. Macedo; Michael D. Crossland; Gary S. Rubin
The effect of retinal image slip on peripheral visual acuity Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–11, 2008.
Retinal image slip promoted by fixational eye movements prevents image fading in central vision. However, in the periphery a higher amount of movement is necessary to prevent this fading. We assessed the effect of different levels of retinal image slip in peripheral vision by measuring peripheral visual acuity (VA), with and without crowding, while modulating retinal eccentricity. Gaze position was monitored throughout using an infrared eyetracker. The target was presented for up to 500 msec, either with no retinal image slip, with reduced retinal slip, or with increased retinal image slip. Without crowding, peripheral visual acuity improved with increased retinal image slip compared with the other two conditions. In contrast, under crowded conditions, peripheral visual acuity decreased markedly with increased retinal image slip. Therefore, the effects of increased retinal image slip are different for simple (noncrowded) and more complex (crowded) visual tasks. These results provide further evidence for the importance of fixation stability on complex visual tasks when using the peripheral retina.
Wieske van Zoest; Mieke Donk
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 10, pp. 1553–1572, 2008.
Four experiments were performed to investigate the contribution of goal-driven modulation in saccadic target selection as a function of time. Observers were required to make an eye movement to a prespecified target that was concurrently presented with multiple nontargets and possibly one distractor. Target and distractor were defined in different dimensions (orientation dimension and colour dimension in Experiment 1), or were both defined in the same dimension (i.e., both defined in the orientation dimension in Experiment 2, or both defined in the colour dimension in Experiments 3 and 4). The identities of target and distractor were switched over conditions. Speed-accuracy functions were computed to examine the full time course of selection in each condition. There were three major results. First, the ability to exert goal-driven control increased as a function of response latency. Second, this ability depended on the specific target-distractor combination, yet was not a function of whether target and distractor were defined within or across dimensions. Third, goal-driven control was available earlier when target and distractor were dissimilar than when they were similar. It was concluded that the influence of goal-driven control in visual selection is not all or none, but is of a continuous nature.
Wieske van Zoest; Stefan Van der Stigchel; Jason J. S. Barton
In: Experimental Brain Research, vol. 186, no. 3, pp. 431–442, 2008.
The present study investigated the contribution of the presence of a visual signal at the saccade goal on saccade trajectory deviations and measured distractor-related inhibition as indicated by deviation away from an irrelevant distractor. Performance in a prosaccade task where a visual target was present at the saccade goal was compared to performance in an anti- and memory-guided saccade task. In the latter two tasks no visual signal is present at the location of the saccade goal. It was hypothesized that if saccade deviation can be ultimately explained in terms of relative activation levels between the saccade goal location and distractor locations, the absence of a visual stimulus at the goal location will increase the competition evoked by the distractor and affect saccade deviations. The results of Experiment 1 showed that saccade deviation away from a distractor varied significantly depending on whether a visual target was presented at the saccade goal or not: when no visual target was presented, saccade deviation away from a distractor was increased compared to when the visual target was present. The results of Experiments 2-4 showed that saccade deviation did not systematically change as a function of time since the offset of the target. Moreover, Experiments 3 and 4 revealed that the disappearance of the target immediately increased the effect of a distractor on saccade deviations, suggesting that activation at the target location decays very rapidly once the visual signal has disappeared from the display.
André Vandierendonck; Maud Deschuyteneer; Ann Depoorter; Denis Drieghe
In: Psychological Research, vol. 72, no. 1, pp. 1–11, 2008.
Several studies have shown that anti-saccades, more than pro-saccades, are executed under executive control. It is argued that executive control subsumes a variety of controlled processes. The present study tested whether some of these underlying processes are involved in the execution of anti-saccades. An experiment is reported in which two such processes were parametrically varied, namely input monitoring and response selection. This resulted in four selective interference conditions obtained by factorially combining the degree of input monitoring and the presence of response selection in the interference task. The four tasks were combined with a primary task which required the participants to perform either pro-saccades or anti-saccades. By comparison of performance in these dual-task conditions and performance in single-task conditions, it was shown that anti-saccades, but not pro-saccades, were delayed when the secondary task required input monitoring or response selection. The results are discussed with respect to theoretical attempts to deconstruct the concept of executive control.
Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Thomas Eggert; Zoï Kapoula
In: Investigative Ophthalmology & Visual Science, vol. 49, no. 1, pp. 230–237, 2008.
PURPOSE: Human ocular saccades are not perfectly yoked; the origin of this disconjugacy (muscular versus central) remains controversial. The purpose of this study was to test a cortical influence on the binocular coordination of saccades. METHODS: The authors used a gap paradigm to elicit vertical or horizontal saccades of 10 degrees , randomly interleaved; transcranial magnetic stimulation (TMS) was applied on the posterior parietal cortex (PPC) 100 ms after the target onset. RESULTS: TMS of the left or right PPC increased (i) the misalignment of the eyes during the presaccadic fixation period; (ii) the size difference between the saccades of the eyes, called disconjugacy; the increase of disconjugacy was significant for rightward and downward saccades after TMS of the right PPC and for downward saccades after TMS of the left PPC. CONCLUSIONS: The authors conclude that the PPC is actively involved in maintaining eye alignment during fixation and in the control of binocular coordination of saccades.
Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Zoï Kapoula
In: Brain Research Bulletin, vol. 76, no. 1-2, pp. 50–56, 2008.
This study tests the influence of transcranial magnetic stimulation (TMS) of the posterior parietal cortex (PPC) on the initiation of horizontal and vertical saccades, alone or combined with a predictable divergence. A gap paradigm was used; TMS was applied 100 ms after target onset. TMS of the left PPC increased the latency of unpredictable rightward saccades, while TMS of the right PPC increased the latency of unpredictable downward saccades. Yet, when unpredictable saccades were combined with predictable divergence, neither component was affected. We suggest that in the latter case, the initiation of both components was taken in charge by another area, e.g. frontal. Thus, even when one component was predictable, a common mechanism controls the initiation of both components. The results confirm that TMS only modifies the latency when the cortical area stimulated is involved in the triggering of the eye movement.
Marine Vernet; Qing Yang; Gintautas Daunys; Christophe Orssaud; Zoï Kapoula
In: Optometry and Vision Science, vol. 85, no. 3, pp. 187–195, 2008.
Purpose. In real life, divergence is frequently combined with vertical saccades. The purpose of this study was to examine the initiation of vertical and horizontal saccades, pure or combined with divergence. Methods. We used a gap paradigm to elicit vertical or horizontal saccades (10 degrees), pure or combined with a predictable divergence (10 degrees). Eye movements from 12 subjects were recorded with EyeLink II. Results. The major results were (i) when combined with divergence, the latency of horizontal saccades increased but not the latency of vertical saccades; (ii) for both vertical and horizontal saccades, a tight correlation between the latency of saccade and divergence was found; (iii) when the divergence was anticipated, the saccade was delayed. Conclusion. We conclude that the initiation of both components of combined movements is interdependent.
Julius Verrel; Harold Bekkering; Bert Steenbergen
In: Experimental Brain Research, vol. 187, no. 1, pp. 107–116, 2008.
In the present study we investigated eye-hand coordination in adolescents with hemiparetic cerebral palsy (CP) and neurologically healthy controls. Using an object prehension and transport task, we addressed two hypotheses, motivated by the question whether early brain damage and the ensuing limitations of motor activity lead to general and/or effector-specific effects in visuomotor control of manual actions. We hypothesized that individuals with hemiparetic CP would more closely visually monitor actions with their affected hand, compared to both their less affected hand and to control participants without a sensorimotor impairment. A second, more speculative hypothesis was that, in relation to previously established deficits in prospective action control in individuals with hemiparetic CP, gaze patterns might be less anticipatory in general, also during actions performed with the less affected hand. Analysis of the gaze and hand movement data revealed the increased visual monitoring of participants with CP when using their affected hand at the beginning as well as during object transport. In contrast, no general deficit in anticipatory gaze control in the participants with hemiparetic CP could be observed. Collectively, these findings are the first to directly show that individuals with hemiparetic CP adapt eye-hand coordination to the specific constraints of the moving limb, presumably to compensate for sensorimotor deficits.
Christian Vorstius; Ralph Radach; Alan R. Lang; Christina J. Riccardi
In: Psychopharmacology, vol. 196, no. 2, pp. 201–210, 2008.
RATIONALE: Alcohol affects a variety of human behaviors, including visual perception and motor control. Although recent research has begun to explore mechanisms that mediate these changes, their exact nature is still not well understood. OBJECTIVES: The present study used two basic oculomotor tasks to examine the effect of alcohol on different levels of visual processing within the same individuals. A theoretical framework is offered to integrate findings across multiple levels of oculomotor control. MATERIALS AND METHODS: Twenty-four healthy participants were asked to perform eye movements in reflexive (pro-) and voluntary (anti-) saccade tasks. In one of two counterbalanced sessions, performance was measured after alcohol administration (mean BrAC=69 mg%); the other served as a within-subjects no-alcohol comparison condition. RESULTS: Error rates were not influenced by alcohol intoxication in either task. However, there were significant effects of alcohol on saccade latency and peak velocity in both tasks. Critically, a specific alcohol-induced impairment (hypermetria) in saccade amplitudes was observed exclusively in the anti-saccade task. CONCLUSIONS: The saccade latency data strongly suggest that alcohol intoxication impairs temporal aspects of saccade generation, irrespective of the level of processing triggering the saccade. The absence of effects on anti-saccade errors calls for further research into the notion of alcohol-induced impairment of the ability to inhibit prepotent responses. Furthermore, the specific impairment of saccade amplitude in the anti-saccade task under alcohol suggests that higher level processes involved in the spatial remapping of target location in the absence of a visually specified saccade goal are specifically affected by alcohol intoxication.
Robin Walker; Eugene McSorley
In: Journal of Eye Movement Research, vol. 2, no. 3, pp. 1–13, 2008.
It has long been known that the path (trajectory) taken by the eye to land on a target is rarely straight (Yarbus, 1967). Furthermore, the magnitude and direction of this natural tendency for curvature can be modulated by the presence of a competing distractor stimulus presented along with the saccade target. The distractor-related modulation of saccade trajectories provides a subtle measure of the underlying competitive processes involved in saccade target selection. Here we review some of our own studies into the effects distractors have on saccade trajectories, which can be regarded as a way of probing the competitive balance between target and distractor salience.
Mark Wexler; Nizar Ouarti
Depth affects where we look Journal Article
In: Current Biology, vol. 18, no. 23, pp. 1872–1876, 2008.
Understanding how we spontaneously scan the visual world through eye movements is crucial for characterizing both the strategies and inputs of vision [1-27]. Despite the importance of the third or depth dimension for perception and action, little is known about how the specifically three-dimensional aspects of scenes affect looking behavior. Here we show that three-dimensional surface orientation has a surprisingly large effect on spontaneous exploration, and we demonstrate that a simple rule predicts eye movements given surface orientation in three dimensions: saccades tend to follow surface depth gradients. The rule proves to be quite robust: it generalizes across depth cues, holds in the presence or absence of a task, and applies to more complex three-dimensional objects. These results not only lead to a more accurate understanding of visuo-motor strategies, but also suggest a possible new oculomotor technique for studying three-dimensional vision from a variety of depth cues in subjects-such as animals or human infants-that cannot explicitly report their perceptions.
Brian J. White; Martin Stritzke; Karl R. Gegenfurtner
Saccadic facilitation in natural backgrounds Journal Article
In: Current Biology, vol. 18, no. 2, pp. 124–128, 2008.
In visual systems with a fovea, only a small portion of the visual field can be analyzed with high accuracy. Saccadic eye movements shift that center of gaze around several times a second. Saccades have been characterized in great detail and depend critically on a number of visual properties of the stimuli [1-5]. However, typical experiments have used bright spots on dark backgrounds, while our natural environment has a highly characteristic rich spatial structure [6, 7]. Here we show that the saccadic system, unlike the perceptual system, is able to compensate for the masking caused by structured backgrounds. Consequently, saccadic latencies in the context of natural backgrounds are much faster than on unstructured backgrounds at equal levels of visibility. The results suggest that whenever a structured background acts to mask the visibility of the saccade target, it simultaneously preactivates saccadic circuitry and thus ensures a fast reaction to potentially critical stimuli that are difficult to detect in our environment.
Sarah J. White; Raymond Bertram; Jukka Hyönä
Semantic processing of previews within compound words Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 988–993, 2008.
Previous studies have suggested that previews of words prior to fixation can be processed orthographically, but not semantically, during reading of sentences (K. Rayner, D. A. Balota, & A. Pollatsek, 1986). The present study tested whether semantic processing of previews can occur within words. The preview of the second constituent of 2-constituent Finnish compound nouns was manipulated. The previews were either identical to the 2nd constituent or they were incorrect in the form of a semantically related word, a semantically unrelated word, or a semantically meaningless nonword. The results indicate that previews of 2nd constituents within compound words can be semantically processed. The results have important implications for understanding the nature of preview and compound word processing. These issues are crucial to developing comprehensive models of eye-movement control and word recognition during reading.
Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde
In: Spatial Vision, vol. 22, pp. 335–348, 2008.
When corners are embedded in a luminance gradient, their perceived salience varies linearly with corner angle (Troncoso et al., 2005). Here we hypothesize that this relationship may hold true for all corners, not just corner gradients. To test this hypothesis, we developed a novel variant of the flicker-augmented contrast illusion (Anstis and Ho, 1998) that employs solid (non-gradient) corners of varying angles to modify perceived brightness. We flickered solid corners from dark to light grey (50% luminance over time) against a black or a white background. With this new stimulus, subjects compared the apparent brightness of corners, which did not vary in actual luminance, to non-illusory stimuli that varied in actual luminance. We found that the apparent brightness of corners was linearly related to the sharpness of corner angle. Thus this relationship is not solely an effect of corners embedded in gradients, but may be a general principle of corner perception. These findings may have important repercussions for brain mechanisms underlying the early visual processing of shape and brightness. A large fraction of Vasarely's art showcases the perceptual salience of corners, curvature and terminators. Several of these artworks and their implications for visual processing are discussed.
Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde
Microsaccades counteract perceptual filling-in Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–9, 2008.
Artificial scotomas positioned within peripheral dynamic noise fade perceptually during visual fixation (that is, the surrounding dynamic noise appears to fill-in the scotoma). Because the scotomas' edges are continuously refreshed by the dynamic noise background, this filling-in effect cannot be explained by low-level adaptation mechanisms (such as those that may underlie classical Troxler fading). We recently showed that microsaccades counteract Troxler fading and drive first-order visibility during fixation (S. Martinez-Conde, S. L. Macknik, X. G. Troncoso, & T. A. Dyar, 2006). Here we set out to determine whether microsaccades may counteract the perceptual filling-in of artificial scotomas and thus drive second-order visibility. If so, microsaccades may not only counteract low-level adaptation but also play a role in higher perceptual processes. We asked subjects to indicate, via button press/release, whether an artificial scotoma presented on a dynamic noise background was visible or invisible at any given time. The subjects' eye movements were simultaneously measured with a high precision video system. We found that increases in microsaccade production counteracted the perception of filling-in, driving the visibility of the artificial scotoma. Conversely, decreased microsaccades allowed perceptual filling-in to take place. Our results show that microsaccades do not solely overcome low-level adaptation mechanisms but they also contribute to maintaining second-order visibility during fixation.
Xoana G. Troncoso; Stephen L. Macknik; Jorge Otero-Millan; Susana Martinez-Conde
Microsaccades drive illusory motion in the Enigma illusion Journal Article
In: Proceedings of the National Academy of Sciences, vol. 105, no. 41, pp. 16033–16038, 2008.
Visual images consisting of repetitive patterns can elicit striking illusory motion percepts. For almost 200 years, artists, psychologists, and neuroscientists have debated whether this type of illusion originates in the eye or in the brain. For more than a decade, the controversy has centered on the powerful illusory motion perceived in the painting Enigma, created by op-artist Isia Leviant. However, no previous study has directly correlated the Enigma illusion to any specific physiological mechanism, and so the debate rages on. Here, we show that microsaccades, a type of miniature eye movement produced during visual fixation, can drive illusory motion in Enigma. We asked subjects to indicate when illusory motion sped up or slowed down during the observation of Enigma while we simultaneously recorded their eye movements with high precision. Before "faster" motion periods, the rate of microsaccades increased. Before "slower/no" motion periods, the rate of microsaccades decreased. These results reveal a direct link between microsaccade production and the perception of illusory motion in Enigma and rule out the hypothesis that the origin of the illusion is purely cortical.
Yuan-Chi Tseng; Chiang-Shan Ray Li
In: The Open Psychology Journal, vol. 1, no. 1, pp. 18–25, 2008.
The stop-signal task (SST) and anti-saccade tasks are both widely used to explore cognitive inhibitory control. Our previous work on a manual SST showed that subjects' readiness to respond to the go signal and the extent to which subjects monitor their errors need to be considered in order to attribute impaired performance to deficits in response inhibition. Here we examine whether these same task-related variables similarly influence oculomotor SST and anti-saccade performance. Thirty-six and sixty healthy, adult subjects participated in an oculomotor SST and anti-saccade task, respectively, in which the fore-period (FP) of imperative stimulus varied randomly from trial to trial. We computed a FP effect to index response readiness to the imperative stimulus and a post-error slowing (PES) effect to index error monitoring. Contrary to what we had anticipated, other than a weak but negative association between the FP effect and anti-saccade errors, these behavioral variables did not correlate with SST or anti-saccade performance.
Geoffrey Underwood; Emma Templeman; Laura Lamming; Tom Foulsham
In: Consciousness and Cognition, vol. 17, no. 1, pp. 159–170, 2008.
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.
Seppo Vainio; Jukka Hyönä; Anneli Pajunen
In: Memory and Cognition, vol. 36, no. 2, pp. 329–340, 2008.
The present study examined whether type of inflectional case (semantic or grammatical) and phonological and morphological transparency affect the processing of Finnish modifier-head agreement in reading. Readers' eye movement patterns were registered. In Experiment 1, an agreeing modifier condition (agreement was transparent) was compared with a no-modifier condition, and in Experiment 2, similar constructions with opaque agreement were used. In both experiments, agreement was found to affect the processing of the target noun with some delay. In Experiment 3, unmarked and case-marked modifiers were used. The results again demonstrated a delayed agreement effect, ruling out the possibility that the agreement effects observed in Experiments 1 and 2 reflect a mere modifier-presence effect. We concluded that agreement exerts its effect at the level of syntactic integration but not at the level of lexical access.
Suiping Wang; Hsuan-Chih Chen; Jinmian Yang; Lei Mo
In: Language and Cognitive Processes, vol. 23, no. 2, pp. 241–257, 2008.
An eye-movement study was conducted to examine whether Chinese readers immediately activate and integrate related background information during discourse comprehension. Participants were asked to read short passages, each containing a critical word that fitted well within the local context but was inconsistent or neutral with background information from the early part of the passage. This manipulation of textual consistency produced reliable effects on both first-pass reading fixations in the target region and second-pass reading times in the pre-target and target regions. These results indicate that integration processes start very rapidly in reading text in a writing system with properties that encourage delayed processing, suggesting that immediate processing is likely a universal principle in discourse comprehension.
Z. I. Wang; Louis F. Dell'Osso
In: Vision Research, vol. 48, no. 12, pp. 1409–1419, 2008.
Our purpose was to perform a systematic study of the post-four-muscle-tenotomy procedure changes in target acquisition time by comparing predictions from the behavioral ocular motor system (OMS) model and data from infantile nystagmus syndrome (INS) patients. We studied five INS patients who underwent only tenotomy at the enthesis and reattachment at the original insertion of each (previously unoperated) horizontal rectus muscle for their INS treatment. We measured their pre- and post-tenotomy target acquisition changes using data from infrared reflection and high-speed digital video. Three key aspects were calculated and analyzed: the saccadic latency (Ls), the time to target acquisition after the target jump (Lt) and the normalized stimulus time within the cycle. Analyses were performed in MATLAB environment (The MathWorks, Natick, MA) using OMLAB software (OMtools, available from http://www.omlab.org). Model simulations were performed in MATLAB Simulink environment. The model simulation suggested an Lt reduction due to an overall foveation-quality improvement. Consistent with that prediction, improvement in Lt, ranging from ∼200 ms to ∼500 ms (average ∼ 280 ms), was documented in all five patients post-tenotomy. The Lt improvement was not a result of a reduced Ls. INS patients acquired step-target stimuli faster post-tenotomy. This target acquisition improvement may be due to the elevated foveation quality resulting in less inherent variation in the input to the OMS. A refined behavioral OMS model, with "fast" and "slow" motor neuron pathways and a more physiological plant, successfully predicted this improved visual behavior and again demonstrated its utility in guiding ocular motor research.
Tessa Warren; Kerry McConnell; Keith Rayner
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 1001–1010, 2008.
Plausibility violations resulting in impossible scenarios lead to earlier and longer lasting eye movement disruption than violations resulting in highly unlikely scenarios (K. Rayner, T. Warren, B. J. Juhasz, & S. P. Liversedge, 2004; T. Warren & K. McConnell, 2007). This could reflect either differences in the timing of availability of different kinds of information (e.g., selectional restrictions, world knowledge, and context) or differences in their relative power to guide semantic interpretation. The authors investigated eye movements to possible and impossible events in real-world and fantasy contexts to determine when contextual information influences detection of impossibility cued by a semantic mismatch between a verb and an argument. Gaze durations on a target word were longer to impossible events independent of context. However, a measure of the time elapsed from first fixating the target word to moving past it showed disruption only in the real-world context. These results suggest that contextual information did not eliminate initial disruption but moderated it quickly thereafter.
Katsumi Watanabe; Kenji Yokoi
In: Journal of Vision, vol. 8, no. 3, pp. 1–11, 2008.
The relative visual positions of briefly flashed stimuli are systematically modified in the presence of motion signals (R. Nijhawan, 2002; D. Whitney, 2002). Previously, we investigated the two-dimensional distortion of relative-position representations between moving and flashed stimuli. The results showed that the perceived position of a flash is not uniformly displaced but shifted toward a single convergent point back along the trajectory of a moving object (K. Watanabe & K. Yokoi, 2006, 2007). In the present study, we examined the temporal dynamics of the anisotropic distortion of visual position representation. While observers fixated on a stationary cross, a black disk appeared, moved along a horizontal trajectory, and disappeared. A white dot was briefly flashed at various positions relative to the moving disk and at various timings relative to the motion onset/offset. The temporal emerging-waning pattern of anisotropic mislocalization indicated that position representation in the space ahead of a moving object differs qualitatively from that in the space behind it. Thus, anisotropic mislocalization cannot be explained by either a spatially or a temporally homogeneous process. Instead, visual position representation is anisotropically influenced by moving objects in both space and time.
Matteo Valsecchi; Sven Saage; Brian J. White; Karl R. Gegenfurtner
In: Journal of Eye Movement Research, vol. 6, no. 5:2, pp. 1–15, 2008.
Formulaic sequences such as idioms, collocations, and lexical bundles, which may be processed as holistic units, make up a large proportion of natural language. For language learners, however, formulaic patterns are a major barrier to achieving native-like competence. The present study investigated the processing of lexical bundles by native speakers and less advanced non-native English speakers, using corpus analysis for the identification of lexical bundles and eye-tracking to measure reading times. The participants read sentences containing 4-grams and control phrases which were matched for sub-string frequency. The results for native speakers demonstrate a processing advantage for formulaic sequences over the matched control units. We do not find any processing advantage for non-native speakers, which suggests that native-like processing of lexical bundles comes only late in the acquisition process.
Ronald Berg; Frans W. Cornelissen; Jos B. T. M. Roerdink
In: ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1–21, 2008.
A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of the end-user tasks performed on these images involve a form of visual search. Often, it is simply assumed that features can be judged independently of each other in such tasks. However, there is evidence for perceptual dependencies when simultaneously presenting multiple features. Such dependencies could potentially affect information visualizations that contain combinations of features for encoding information and, thereby, bias subjects into unequally weighting the relevance of different data dimensions. We experimentally assess (1) the presence of judgment dependencies in a visualization task (searching for a target node in a node-link diagram) and (2) how feature contrast relates to salience. From a visualization point of view, our most relevant findings are that (a) to equalize saliency (and thus bottom-up weighting) of size and color, color contrasts have to become very low. Moreover, orientation is less suitable for representing information that consists of a large range of data values, because it does not show a clear relationship between contrast and salience; (b) color and size are features that can be used independently to represent information, at least as far as the range of colors that were used in our study are concerned; (c) the concept of (static) feature salience hierarchies is wrong; how salient a feature is compared to another is not fixed, but a function of feature contrasts; (d) final decisions appear to be as good an indicator of perceptual performance as indicators based on measures obtained from individual fixations. Eye tracking, therefore, does not necessarily present a benefit for user studies that aim at evaluating performance in search tasks.
Menno Van Der Schoot; Alain L. Vasbinder; Tako M. Horsley; Ernest C. D. M. Van Lieshout
In: Journal of Research in Reading, vol. 31, no. 2, pp. 203–223, 2008.
This study examined whether 10- to 12-year-old children use two reading strategies to aid their text comprehension: (1) distinguishing between important and unimportant words; and (2) resolving anaphoric references. Of interest was the extent to which use of these reading strategies was predictive of reading comprehension skill over and above decoding skill and vocabulary. Reading strategy use was examined by the recording of eye fixations on specific target words. In contrast to less successful comprehenders, more successful comprehenders invested more processing time in important than in unimportant words. On the other hand, they needed less time to determine the antecedent of an anaphor. The results suggest that more successful comprehenders build a more effective mental model of the text than less successful comprehenders in at least two ways. First, they allocate more attention to the incorporation of goal-relevant than goal-irrelevant information into the model. Second, they ascertain that the text model is coherent and richly connected.
Stefan Van der Stigchel; Jan Theeuwes
In: NeuroReport, vol. 19, no. 2, pp. 251–254, 2008.
The present study systematically investigated the influence of a distractor on horizontal and vertical eye movements. Results showed that both horizontal and vertical eye movements deviated away from the distractor, but these deviations were stronger for vertical than for horizontal movements. As trajectory deviations away from a distractor are generally attributed to inhibition applied to the distractor, this suggests that this deviation is not only due to differences in activity between the two collicular motor maps, but can also be evoked by local application of inhibitory processes in the same map as the target. Nonetheless, deviations were more dominant for vertical movements, which suggests that for these movements more inhibition is applied than for horizontal movements.
Stefan Van der Stigchel; Wieske Zoest; Jan Theeuwes; Jason J. S. Barton
In: Journal of Cognitive Neuroscience, vol. 20, no. 11, pp. 2025–2036, 2008.
There is evidence that some visual information in blind regions may still be processed in patients with hemifield defects after cerebral lesions ("blindsight"). We tested the hypothesis that, in the absence of retinogeniculostriate processing, residual retinotectal processing may still be detected as modifications of saccades to seen targets by irrelevant distractors in the blind hemifield. Six patients were presented with distractors in the blind and intact portions of their visual field and participants were instructed to make eye movements to targets in the intact field. Eye movements were recorded to determine if blind-field distractors caused deviation in saccadic trajectories. No deviation was found in one patient with an optic chiasm lesion, which affects both retinotectal and retinogeniculostriate pathways. In five patients with lesions of the optic radiations or the striate cortex, the results were mixed, with two of the five patients showing significant deviations of saccadic trajectory away from the "blind" distractor. In a second experiment, two of the five patients were tested with the target and the distractor more closely aligned. Both patients showed a "global effect," in that saccades deviated toward the distractor, but the effect was stronger in the patient who also showed significant trajectory deviation in the first experiment. Although our study confirms that distractor effects on saccadic trajectory can occur in patients with damage to the retinogeniculostriate visual pathway but preserved retinotectal projections, there remain questions regarding what additional factors are required for these effects to manifest themselves in a given patient.
Stan Van Pelt; W. Pieter Medendorp
Updating target distance across eye movements in depth Journal Article
In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2281–2290, 2008.
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching.
Marco Thiel; M. Carmen Romano; Jürgen Kurths; Martin Rolfs; Reinhold Kliegl
Generating surrogates from recurrences Journal Article
In: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 366, pp. 545–557, 2008.
In this paper, we present an approach to recover the dynamics from recurrences of a system and then generate (multivariate) twin surrogate (TS) trajectories. In contrast to other approaches, such as the linear-like surrogates, this technique produces surrogates which correspond to an independent copy of the underlying system, i.e. they induce a trajectory of the underlying system visiting the attractor in a different way. We show that these surrogates are well suited to test for complex synchronization, which makes it possible to systematically assess the reliability of synchronization analyses. We then apply the TS to study binocular fixational movements and find strong indications that the fixational movements of the left and right eye are phase synchronized. This result indicates that there might be only one centre in the brain that produces the fixational movements in both eyes or a close link between the two centres.
P. D. Thiem; Jessica A. Hill; K. -M. Lee; Edward L. Keller
Behavioral properties of saccades generated as a choice response Journal Article
In: Experimental Brain Research, vol. 186, no. 3, pp. 355–364, 2008.
The behavior characterizing choice response decision-making was studied in monkeys to provide background information for ongoing neurophysiological studies of the neural mechanisms underlying saccadic choice decisions. Animals were trained to associate a specific color from a set of colored visual stimuli with a specific spatial location. The visual stimuli (colored disks) appeared briefly at equal eccentricity from a central fixation position and then were masked by gray disks. The correct target association was subsequently cued by the appearance of a colored stimulus at the fixation point. The animal indicated its choice by saccading to the remembered location of the eccentric stimulus, which had matched the color of the cue. The number of alternative associations (NA) varied from 1 to 4 and remained fixed within a block of trials. After the training period, performance (percent correct responses) declined modestly as NA increased (on average 96, 93 or 84% correct for 1, 2 or 4 NA, respectively). Response latency increased logarithmically as a function of NA, thus obeying Hick's law. The spatial extent of the learned association between color and location was investigated by rotating the array of colored stimuli that had remained fixed during the learning phase to various different angles. Error rates in choice saccades increased gradually as a function of the amount of rotation. The learned association biased the direction of the saccadic response toward the quadrant associated with the cue, but saccade direction was always toward one of the actual visual stimuli. This suggests that the learned associations between stimuli and responses were not spatially exact, but instead the association between color and location was distributed with declining strength from the trained locations. 
These results demonstrate that the saccade system in monkeys also displays the characteristic dependence on NA in choice response latencies, while more basic features of the eye movements are invariant from those in other tasks. The findings also provide behavioral evidence that spatially distributed regions are established for the sensory-to-motor associations during training which are later utilized for choice decisions.
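The logarithmic growth in latency the authors report is Hick's law, RT = a + b·log2(NA + 1). A small illustrative sketch follows; the intercept a and slope b are made-up values, not fitted to the monkey data:

```python
import math

def hicks_law_rt(n_alternatives, a=0.25, b=0.15):
    """Hick's law: RT = a + b * log2(N + 1), with RT in seconds.

    a (intercept) and b (slope) are illustrative constants only.
    """
    return a + b * math.log2(n_alternatives + 1)

# Latency rises logarithmically, not linearly, with the number of alternatives
for n in (1, 2, 4):
    print(f"NA={n}: predicted RT = {hicks_law_rt(n):.3f} s")
```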
Shery Thomas; Frank A. Proudlock; Nagini Sarvananthan; Eryl O. Roberts; Musarat Awan; Rebecca J. McLean; Mylvaganam Surendran; A. S. Anil Kumar; Shegufta J. Farooq; Christopher Degg; Richard P. Gale; Robert D. Reinecke; Geoffrey Woodruff; Andrea Langmann; Susanne Lindner; Sunila Jain; Patrick Tarpey; F. Lucy Raymond; Irene Gottlob
In: Brain, vol. 131, no. 5, pp. 1259–1267, 2008.
Idiopathic infantile nystagmus (IIN) consists of involuntary oscillations of the eyes. The familial form is most commonly X-linked. We recently found mutations in a novel gene FRMD7 (Xq26.2), which provided an opportunity to investigate a genetically defined and homogeneous group of patients with nystagmus. We compared clinical features and eye movement recordings of 90 subjects with mutation in the gene (FRMD7 group) to 48 subjects without mutations but with clinical IIN (non-FRMD7 group). Fifty-eight female obligate carriers of the mutation were also investigated. The median visual acuity (VA) was 0.2 logMAR (Snellen equivalent 6/9) in both groups and most patients had good stereopsis. The prevalence of strabismus was also similar (FRMD7: 7.8%, non-FRMD7: 10%). The presence of anomalous head posture (AHP) was significantly higher in the non-FRMD7 group (P < 0.0001). The amplitude of nystagmus was more strongly dependent on the direction of gaze in the FRMD7 group, being lower at primary position (P < 0.0001), compared to the non-FRMD7 group (P = 0.83). Pendular nystagmus waveforms were also more frequent in the FRMD7 group (P = 0.003). Fifty-three percent of the obligate female carriers of an FRMD7 mutation were clinically affected. The VAs in affected females were slightly better than those of affected males (P = 0.014). Subnormal optokinetic responses were found in a subgroup of obligate unaffected carriers, which may be interpreted as a sub-clinical manifestation. FRMD7 is a major cause of X-linked IIN. Most clinical and eye movement characteristics were similar in the FRMD7 group and non-FRMD7 group, with most patients having good VA and stereopsis and a low incidence of strabismus. Fewer patients in the FRMD7 group had AHPs, their amplitude of nystagmus being lower in primary position. Our findings are helpful in the clinical identification of IIN and genetic counselling of nystagmus patients.
Aidan A. Thompson; Denise Y. P. Henriques
In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2507–2514, 2008.
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Jeremy B. Badler; Philippe Lefèvre; Marcus Missal
Anticipatory pursuit is influenced by a concurrent timing task Journal Article
In: Journal of Vision, vol. 8, no. 16, pp. 1–9, 2008.
The ability to predict upcoming events is important to compensate for relatively long sensory-motor delays. When stimuli are temporally regular, their prediction depends on a representation of elapsed time. However, it is well known that the allocation of attention to the timing of an upcoming event alters this representation. The role of attention on the temporal processing component of prediction was investigated in a visual smooth pursuit task that was performed either in isolation or concurrently with a manual response task. Subjects used smooth pursuit eye movements to accurately track a moving target after a constant-duration delay interval. In the manual response task, subjects had to estimate the instant of target motion onset by pressing a button. The onset of anticipatory pursuit eye movements was used to quantify the subject's estimate of elapsed time. We found that onset times were delayed significantly in the presence of the concurrent manual task relative to the pursuit task in isolation. There was also a correlation between the oculomotor and manual response latencies. In the framework of Scalar Timing Theory, the results are consistent with a centralized attentional gating mechanism that allocates clock resources between smooth pursuit preparation and the parallel timing task.
Xuejun Bai; Guoli Yan; Simon P. Liversedge; Chuanli Zang; Keith Rayner
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 5, pp. 1277–1287, 2008.
Native Chinese readers' eye movements were monitored as they read text that did or did not demark word boundary information. In Experiment 1, sentences had 4 types of spacing: normal unspaced text, text with spaces between words, text with spaces between characters that yielded nonwords, and finally text with spaces between every character. The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading. Global and local measures indicated that sentences with an unfamiliar word-spaced format were as easy to read as visually familiar unspaced text. Nonword spacing and a space between every character produced longer reading times. In Experiment 2, highlighting was used to create analogous conditions: normal Chinese text, highlighting that marked words, highlighting that yielded nonwords, and highlighting that marked each character. The data from both experiments clearly indicated that words, and not individual characters, are the unit of primary importance in Chinese reading.
Brian P. Bailey; Shamsi T. Iqbal
In: ACM Transactions on Computer-Human Interaction, vol. 14, no. 4, pp. 1–28, 2008.
Notifications can have reduced interruption cost if delivered at moments of lower mental workload during task execution. Cognitive theorists have speculated that these moments occur at subtask boundaries. In this article, we empirically test this speculation by examining how workload changes during execution of goal-directed tasks, focusing on regions between adjacent chunks within the tasks, that is, the subtask boundaries. In a controlled experiment, users performed several interactive tasks while their pupil dilation, a reliable measure of workload, was continuously measured using an eye tracking system. The workload data was extracted from the pupil data, precisely aligned to the corresponding task models, and analyzed. Our principal findings include (i) workload changes throughout the execution of goal-directed tasks; (ii) workload exhibits transient decreases at subtask boundaries relative to the preceding subtasks; (iii) the amount of decrease tends to be greater at boundaries corresponding to the completion of larger chunks of the task; and (iv) different types of subtasks induce different amounts of workload. We situate these findings within resource theories of attention and discuss important implications for interruption management systems.
Daniel Baldauf; Heiner Deubel
Visual attention during the preparation of bimanual movements Journal Article
In: Vision Research, vol. 48, no. 4, pp. 549–563, 2008.
We investigated the deployment of visual attention during the preparation of bimanually coordinated actions. In a dual-task paradigm participants had to execute bimanual pointing movements to different peripheral locations, and to identify target letters that had been briefly presented at various peripheral locations during the latency period before movement initialisation. The discrimination targets appeared either at the movement goal of the left or the right hand, or at other locations that were not movement-relevant in the particular trial. Performance in the letter discrimination task served as a measure for the distribution of visual attention during the action preparation. The results showed that the goal positions of both hands are selected before movement onset, revealing a superior discrimination performance at the action-relevant locations (Experiment 1). Selection-for-action in the preparation of bimanual movements involved attention being spread to both goal locations in parallel, independently of whether the targets had been cued by colour or semantically (Experiment 2). A comparison with perceptual performance in unimanual reaching suggested that the total amount of attentional resources that are distributed over the visual field depended on the demands of the primary motor task, with more attentional resources being deployed for the selection of multiple goal positions than for the selection of a single goal (Experiment 3).
M. S. Baptista; C. Bohn; Reinhold Kliegl; Ralf Engbert; Jürgen Kurths
Reconstruction of eye movements during blinks Journal Article
In: Chaos, vol. 18, no. 1, pp. 1–15, 2008.
In eye movement research in reading, the amount of data plays a crucial role for the validation of results. A methodological problem for the analysis of eye movements in reading is blinks, when readers close their eyes. Blinking rate increases with increasing reading time, resulting in high data losses, especially for older adults or reading-impaired subjects. We present a method, based on the symbolic sequence dynamics of the eye movements, that reconstructs the horizontal position of the eyes while the reader blinks. The method makes use of the observed fact that the movements of the eyes before closing or after opening contain information about the eye movements during blinks. Test results indicate that our reconstruction method is superior to methods that use simpler interpolation approaches. In addition, analyses of the reconstructed data show no significant deviation from the usual behavior observed in readers.
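For context, the "simpler interpolation approaches" that the authors' symbolic-dynamics reconstruction outperforms amount to filling the blink gap by linear interpolation between the last valid sample before the blink and the first valid sample after it. A minimal sketch of that baseline (the function name and data layout are hypothetical, not from the paper):

```python
def interpolate_gap(x, gap_start, gap_end):
    """Fill a blink gap covering indices [gap_start, gap_end) of a
    horizontal-position trace by linear interpolation between the last
    valid sample before the gap and the first valid sample after it."""
    x = list(x)
    x0, x1 = x[gap_start - 1], x[gap_end]
    n = gap_end - gap_start + 1
    for i in range(gap_start, gap_end):
        frac = (i - gap_start + 1) / n
        x[i] = x0 + frac * (x1 - x0)
    return x

trace = [1.0, 2.0, None, None, None, 5.0, 6.0]  # None marks blink samples
print(interpolate_gap(trace, 2, 5))  # gap filled with 2.75, 3.5, 4.25
```

A baseline like this ignores the oculomotor dynamics around the blink, which is precisely the information the symbolic-sequence method exploits.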
Dale J. Barr
In: Cognition, vol. 109, no. 1, pp. 18–40, 2008.
When listeners search for the referent of a speaker's expression, they experience interference from privileged knowledge, knowledge outside of their 'common ground' with the speaker. Evidence is presented that this interference reflects limitations in lexical processing. In three experiments, listeners' eye movements were monitored as they searched for the target of a speaker's referring expression in a display that also contained a phonological competitor (e.g., bucket/buckle). Listeners anticipated that the speaker would refer to something in common ground, but they did not experience less interference from a competitor in privileged ground than from a matched competitor in common ground. In contrast, interference from the competitor was eliminated when it was ruled out by a semantic constraint. These findings support a view of comprehension as relying on multiple systems with distinct access to information and present a challenge for constraint-based views of common ground.
Luke Barrington; Tim K. Marks; Janet Hui-wen Hsiao; Garrison W. Cottrell
NIMBLE: A kernel density model of saccade-based visual memory Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 17–17, 2008.
We present a Bayesian version of J. Lacroix, J. Murre, and E. Postma's (2006) Natural Input Memory (NIM) model of saccadic visual memory. Our model, which we call NIMBLE (NIM with Bayesian Likelihood Estimation), uses a cognitively plausible image sampling technique that provides a foveated representation of image patches. We conceive of these memorized image fragments as samples from image class distributions and model the memory of these fragments using kernel density estimation. Using these models, we derive class-conditional probabilities of new image fragments and combine individual fragment probabilities to classify images. Our Bayesian formulation of the model extends easily to handle multi-class problems. We validate our model by demonstrating human levels of performance on a face recognition memory task and high accuracy on multi-category face and object identification. We also use NIMBLE to examine the change in beliefs as more fixations are taken from an image. Using fixation data collected from human subjects, we directly compare the performance of NIMBLE's memory component to human performance, demonstrating that using human fixation locations allows NIMBLE to recognize familiar faces with only a single fixation.
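The core of the memory model described above — class-conditional kernel density estimation over stored image fragments, with per-fragment probabilities combined to classify an image — can be sketched in a few lines. This toy version (isotropic Gaussian kernel, 2-D "fragments", made-up class names) illustrates the general technique, not the authors' implementation:

```python
import math

def gaussian_kernel(x, xi, h):
    """Isotropic Gaussian kernel between two feature vectors with bandwidth h."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-d2 / (2 * h * h))

def class_log_likelihood(fragment, stored_fragments, h=1.0):
    """Kernel density estimate of p(fragment | class) over memorized fragments."""
    k = sum(gaussian_kernel(fragment, f, h) for f in stored_fragments)
    return math.log(k / len(stored_fragments) + 1e-12)  # small floor avoids log(0)

def classify(fragments, memory):
    """Sum per-fragment log-likelihoods per class and pick the most probable."""
    scores = {c: sum(class_log_likelihood(f, mem) for f in fragments)
              for c, mem in memory.items()}
    return max(scores, key=scores.get)

# Toy two-class memory of 2-D "fragments"
memory = {"face_A": [(0.0, 0.0), (0.1, 0.1)],
          "face_B": [(5.0, 5.0), (5.1, 4.9)]}
print(classify([(0.05, 0.0)], memory))  # fragments near face_A's samples
```

In the full model the fragments are foveated image patches sampled at fixation locations, and the number of fragments grows with the number of fixations taken from the image.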
Sarah Bate; Catherine Haslam; Jeremy J. Tree; Timothy L. Hodgson
In: Cortex, vol. 44, no. 7, pp. 806–819, 2008.
While extensive work has examined the role of covert recognition in acquired prosopagnosia, little attention has been directed to this process in the congenital form of the disorder. Indeed, evidence of covert recognition has only been demonstrated in one congenital case in which autonomic measures provided evidence of recognition (Jones and Tranel, 2001), whereas two investigations using behavioural indicators failed to demonstrate the effect (de Haan and Campbell, 1991; Bentin et al., 1999). In this paper, we use a behavioural indicator, an "eye movement-based memory effect" (Althoff and Cohen, 1999), to provide evidence of covert recognition in congenital prosopagnosia. In an initial experiment, we examined viewing strategies elicited to famous and novel faces in control participants, and found fewer fixations and reduced regional sampling for famous compared to novel faces. In a second experiment, we examined the same processes in a patient with congenital prosopagnosia (AA), and found some evidence of an eye movement-based memory effect regardless of his recognition accuracy. Finally, we examined whether a difference in scanning strategy was evident for those famous faces AA failed to explicitly recognise, and again found evidence of reduced sampling for famous faces. We use these findings to (a) provide evidence of intact structural representations in a case of congenital prosopagnosia, and (b) to suggest that covert recognition can be demonstrated using behavioural indicators in this disorder.
Elina Birmingham; Walter F. Bischof; Alan Kingstone
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 7, pp. 986–998, 2008.
The present study examined how social attention is influenced by social content and the presence of items that are available for attention. We monitored observers' eye movements while they freely viewed real-world social scenes containing either 1 or 3 people situated among a variety of objects. Building from the work of Yarbus (1965/1967) we hypothesized that observers would demonstrate a preferential bias to fixate the eyes of the people in the scene, although other items would also receive attention. In addition, we hypothesized that fixations to the eyes would increase as the social content (i.e., number of people) increased. Both hypotheses were supported by the data, and we also found that the level of activity in the scene influenced attention to eyes when social content was high. The present results provide support for the notion that the eyes are selected by others in order to extract social information. Our study also suggests a simple and surreptitious methodology for studying social attention to real-world stimuli in a range of populations, such as those with autism spectrum disorders.
Elina Birmingham; Walter Bischof; Alan Kingstone
Gaze selection in complex social scenes Journal Article
In: Visual Cognition, vol. 16, no. 2-3, pp. 341–355, 2008.
A great deal of recent research has sought to understand the factors and neural systems that mediate the orienting of spatial attention to a gazed-at location. What have rarely been examined, however, are the factors that are critical to the initial selection of gaze information from complex visual scenes. For instance, is gaze prioritized relative to other possible body parts and objects within a scene? The present study springboards from the seminal work of Yarbus (1965/1967), who had originally examined participants' scan paths while they viewed visual scenes containing one or more people. His work suggested to us that the selection of gaze information may depend on the task that is assigned to participants, the social content of the scene, and/or the activity level depicted within the scene. Our results show clearly that all of these factors can significantly modulate the selection of gaze information. Specifically, the selection of gaze was enhanced when the task was to describe the social attention within a scene, and when the social content and activity level in a scene were high. Nevertheless, it is also the case that participants always selected gaze information more than any other stimulus. Our study has broad implications for future investigations of social attention as well as resolving a number of longstanding issues that had undermined the classic original work of Yarbus.
Caroline Blais; Rachael E. Jack; Christoph Scheepers; Daniel Fiset; Roberto Caldara
Culture shapes how we look at faces Journal Article
In: PLoS ONE, vol. 3, no. 8, pp. e3022, 2008.
Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.
Lizzy Bleumers; Peter De Graef; Karl Verfaillie; Johan Wagemans
Eccentric grouping by proximity in multistable dot lattices Journal Article
In: Vision Research, vol. 48, no. 2, pp. 179–192, 2008.
The Pure Distance Law predicts grouping by proximity in dot lattices that can be organised in four ways by grouping dots along parallel lines. It specifies a quantitative relationship between the relative probability of perceiving an organisation and the relative distance between the grouped dots. The current study was set up to investigate whether this principle holds both for centrally and for eccentrically displayed dot lattices. To this end, dot lattices were displayed either in central vision, or to the right of fixation with their closest border at 3° or 15°. We found that the Pure Distance Law adequately predicted grouping of centrally displayed dot lattices but did not capture the eccentric data well, even when the eccentric dot lattices were scaled. Specifically, a better fit was obtained when we included the possibility in the model that in some trials participants could not report an organisation and consequently responded randomly. A plausible interpretation for the occurrence of random responses in the eccentric conditions is that under these circumstances an attention shift is required from the locus of fixation towards the dot lattice, which occasionally fails to take place. When grouping could be reported, scale and eccentricity appeared to interact. The effect of the relative interdot distances on the perceptual organisation of the dot lattices was estimated to be stronger in peripheral vision than in central vision at the two largest scales, but this difference disappeared when the smallest scale was applied.
Larry Allen Abel; Zhong I. Wang; Louis F. Dell'Osso
In: Investigative Ophthalmology & Visual Science, vol. 49, no. 8, pp. 3413–3423, 2008.
PURPOSE: To investigate the proper usage of wavelet analysis in infantile nystagmus syndrome (INS) and determine its limitations and abilities. METHODS: Data were analyzed from accurate eye-movement recordings of INS patients. Wavelet analysis was performed to examine the foveation characteristics, morphologic characteristics and time variation in different INS waveforms. Also compared were the wavelet analysis and the expanded nystagmus acuity function (NAFX) analysis on sections of pre- and post-tenotomy data. RESULTS: Wavelet spectra showed some sensitivity to different features of INS waveforms and reflected their variations across time. However, wavelet analysis was not effective in detecting foveation periods, especially in a complicated INS waveform. NAFX, on the other hand, was a much more direct way of evaluating waveform changes after nystagmus treatments. CONCLUSIONS: Wavelet analysis is a tool that performs, with difficulty, some things that can be done faster and better by directly operating on the nystagmus waveform itself. It appears, however, to be insensitive to the subtle but visually important improvements brought about by INS therapies. Wavelet analysis may have a role in developing automated waveform classifiers where its time-dependent characterization of the waveform can be used. The limitations of wavelet analysis outweighed its abilities in INS waveform-characteristic examination.
Joana Acha; Manuel Perea
In: Cognition, vol. 108, pp. 290–300, 2008.
Transposed-letter effects (e.g., jugde activates judge) pose serious problems for models of visual-word recognition that use position-specific coding schemes. However, even though the evidence of transposed-letter effects with nonword stimuli is strong, the evidence for word stimuli is scarce and inconclusive. The present experiment examined the effect of neighborhood frequency during normal silent reading using transposed-letter neighbors (e.g., silver, sliver). Two sets of low-frequency words were created (equated in the number of substitution neighbors, word frequency, and number of letters), which were embedded in sentences. In one set, the target word had a higher frequency transposed-letter neighbor, and in the other set, the target word had no transposed-letter neighbors. An inhibitory effect of neighborhood frequency was observed in measures that reflect late processing in words (number of regressions back to the target word, and total time). We examine the implications of these findings for models of visual-word recognition and reading.
N. Alahyane; V. Fonteille; C. Urquizar; Roméo Salemme; Norbert Nighoghossian; Denis Pelisson; C. Tilikete
In: Cerebellum, vol. 7, no. 4, pp. 595–601, 2008.
Sensory-motor adaptation processes are critically involved in maintaining accurate motor behavior throughout life. Yet their underlying neural substrates and task-dependency bases are still poorly understood. We address these issues here by studying adaptation of saccadic eye movements, a well-established model of sensory-motor plasticity. The cerebellum plays a major role in saccadic adaptation but it has not yet been investigated whether this role can account for the known specificity of adaptation to the saccade type (e.g., reactive versus voluntary). Two patients with focal lesions in different parts of the cerebellum were tested using the double-step target paradigm. Each patient was submitted to two separate sessions: one for reactive saccades (RS) triggered by the sudden appearance of a visual target and the second for scanning voluntary saccades (SVS) performed when exploring a more complex scene. We found that a medial cerebellar lesion impaired adaptation of reactive, but not of voluntary, saccades, whereas a lateral lesion affected adaptation of scanning voluntary saccades, but not of reactive saccades. These findings provide the first evidence of an involvement of the lateral cerebellum in saccadic adaptation, and extend the demonstrated role of the cerebellum in RS adaptation to adaptation of SVS. The double dissociation of adaptive abilities is also consistent with our previous hypothesis of the involvement in saccadic adaptation of partially separated cerebellar areas specific to the reactive or voluntary task (Alahyane et al. Brain Res 1135:107-121 (2007)).