All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications through 2023 (with some early 2024 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2008 |
Jens R. Helmert; Sebastian Pannasch; Boris M. Velichkovsky Influences of dwell time and cursor control on the performance in gaze driven typing Journal Article In: Journal of Eye Movement Research, vol. 2, no. 1, pp. 1–8, 2008. @article{Helmert2008, In gaze controlled computer interfaces the dwell time is often used as selection criterion. But this solution comes along with several problems, especially in the temporal domain: Eye movement studies on scene perception could demonstrate that fixations of different durations serve different purposes and should therefore be differentiated. The use of dwell time for selection implies the need to distinguish intentional selections from merely perceptual processes, described as the Midas touch problem. Moreover, the feedback of the viewer's own current eye position has not yet been systematically studied in the context of usability in gaze based computer interaction. We present research on the usability of a simple eye typing setup. Different dwell time and eye position feedback configurations were tested. Our results indicate that smoothing raw eye position and temporal delays in visual feedback enhance the system's functionality and usability. Best overall performance was obtained with a dwell time of 500 ms. |
John M. Henderson; Graham L. Pierce Eye movements during scene viewing: Evidence for mixed control of fixation durations Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 3, pp. 566–573, 2008. @article{Henderson2008, Recent behavioral and computational research on eye movement control during scene viewing has focused on where the eyes move. However, fixations also differ in their durations, and when the eyes move may be another important indicator of perceptual and cognitive activity. Here we used a scene onset delay paradigm to investigate the degree to which individual fixation durations are under direct moment-to-moment control of the viewer's current visual scene. During saccades just prior to critical fixations, the scene was removed from view so that when the eyes landed, no scene was present. Following a manipulated delay period, the scene was restored to view. We found that one population of fixations was under the direct control of the current scene, increasing in duration as delay increased. A second population of fixations was relatively constant across delay. The pattern of data did not change whether delay duration was random or blocked, suggesting that the effects were not under the strategic control of the viewer. The results support a mixed control model in which the durations of some fixations proceed regardless of scene presence, whereas others are under the direct moment-to-moment control of ongoing scene analysis. |
Jesse A. Harris; Liina Pylkkänen; Brian McElree; Steven Frisson The cost of question concealment: Eye-tracking and MEG evidence Journal Article In: Brain and Language, vol. 107, no. 1, pp. 44–61, 2008. @article{Harris2008, Although natural language appears to be largely compositional, the meanings of certain expressions cannot be straightforwardly recovered from the meanings of their parts. This study examined the online processing of one such class of expressions: concealed questions, in which the meaning of a complex noun phrase (the proof of the theorem) shifts to a covert question (what the proof of the theorem is) when mandated by a sub-class of question-selecting verbs (e.g., guess). Previous behavioral and magnetoencephalographic (MEG) studies have reported a cost associated with converting an entity denotation to an event. Our study tested whether both types of meaning-shift affect the same computational resources by examining the effects elicited by concealed questions in eye-tracking and MEG. Experiment 1 found evidence from eye-movements that verbs requiring the concealed question interpretation require more processing time than verbs that do not support a shift in meaning. Experiment 2 localized the cost of the concealed question interpretation in the left posterior temporal region, an area distinct from that affected by complement coercion. Experiment 3 presented the critical verbs in isolation and found no posterior temporal effect, confirming that the effect of Experiment 2 reflected sentential, and not lexical-level, processing. |
Benjamin Y. Hayden; Sarah R. Heilbronner; Amrita C. Nair; Michael L. Platt Cognitive influences on risk-seeking by rhesus macaques Journal Article In: Judgment and Decision Making, vol. 3, no. 5, pp. 389–395, 2008. @article{Hayden2008, Humans and other animals are idiosyncratically sensitive to risk, either preferring or avoiding options having the same value but differing in uncertainty. Many explanations for risk sensitivity rely on the non-linear shape of a hypothesized utility curve. Because such models do not place any importance on uncertainty per se, utility curve-based accounts predict indifference between risky and riskless options that offer the same distribution of rewards. Here we show that monkeys strongly prefer uncertain gambles to alternating rewards with the same payoffs, demonstrating that uncertainty itself contributes to the appeal of risky options. Based on prior observations, we hypothesized that the appeal of the risky option is enhanced by the salience of the potential jackpot. To test this, we subtly manipulated payoffs in a second gambling task. We found that monkeys are more sensitive to small changes in the size of the large reward than to equivalent changes in the size of the small reward, indicating that they attend preferentially to the jackpots. Together, these results challenge utility curve-based accounts of risk sensitivity, and suggest that psychological factors, such as outcome salience and uncertainty itself, contribute to risky decision-making. |
Benjamin Y. Hayden; Amrita C. Nair; Allison N. McCoy; Michael L. Platt Posterior cingulate cortex mediates outcome-contingent allocation of behavior Journal Article In: Neuron, vol. 60, no. 1, pp. 19–25, 2008. @article{Hayden2008a, Adaptive decision making requires selecting an action and then monitoring its consequences to improve future decisions. The neuronal mechanisms supporting action evaluation and subsequent behavioral modification, however, remain poorly understood. To investigate the contribution of posterior cingulate cortex (CGp) to these processes, we recorded activity of single neurons in monkeys performing a gambling task in which the reward outcome of each choice strongly influenced subsequent choices. We found that CGp neurons signaled reward outcomes in a nonlinear fashion and that outcome-contingent modulations in firing rate persisted into subsequent trials. Moreover, firing rate on any one trial predicted switching to the alternative option on the next trial. Finally, microstimulation in CGp following risky choices promoted a preference reversal for the safe option on the following trial. Collectively, these results demonstrate that CGp directly contributes to the evaluative processes that support dynamic changes in decision making in volatile environments. |
Robert D. Gordon; Sarah D. Vollmer; Megan L. Frankl Object continuity and the transsaccadic representation of form Journal Article In: Perception and Psychophysics, vol. 70, no. 4, pp. 667–679, 2008. @article{Gordon2008, Transsaccadic object file representations were investigated in three experiments. Subjects moved their eyes from a central fixation cross to a location between two peripheral objects. During the saccade, this preview display was replaced with a target display containing a single object to be named. On trials on which the target identity matched one of the preview objects, its orientation either matched or did not match the previewed orientation. The results of Experiments 1 and 2 revealed that orientation changes disrupt perceptual continuity for objects located near fixation, but not for objects located further from fixation. The results of Experiment 3 confirmed that orientation changes do not disrupt continuity for distant objects, while showing that subjects nevertheless maintain an object-specific representation of the orientation of such objects. Together, the results suggest that object files represent orientation but that whether or not orientation plays a role in the processes that determine continuity depends on the quality of the perceptual representation. |
Melissa J. Green; Jennifer H. Waldron; Ian Simpson; Max Coltheart Visual processing of social context during mental state perception in schizophrenia Journal Article In: Journal of Psychiatry and Neuroscience, vol. 33, no. 1, pp. 34–42, 2008. @article{Green2008, OBJECTIVE: To examine schizophrenia patients' visual attention to social contextual information during a novel mental state perception task. METHOD: Groups of healthy participants (n = 26) and schizophrenia patients (n = 24) viewed 7 image pairs depicting target characters presented context-free and context-embedded (i.e., within an emotion-congruent social context). Gaze position was recorded with the EyeLink I Gaze Tracker while participants performed a mental state inference task. Mean eye movement variables were calculated for each image series (context-embedded v. context-free) to examine group differences in social context processing. RESULTS: The schizophrenia patients demonstrated significantly fewer saccadic eye movements when viewing context-free images and significantly longer eye-fixation durations when viewing context-embedded images. Healthy individuals significantly shortened eye-fixation durations when viewing context-embedded images, compared with context-free images, to enable rapid scanning and uptake of social contextual information; however, this pattern of visual attention was not pronounced in schizophrenia patients. In association with limited scanning and reduced visual attention to contextual information, schizophrenia patients' assessment of the mental state of characters embedded in social contexts was less accurate. CONCLUSION: In people with schizophrenia, inefficient integration of social contextual information in real-world situations may negatively affect the ability to infer mental and emotional states from facial expressions. |
Harold H. Greene Distance-from-target dynamics during visual search Journal Article In: Vision Research, vol. 48, no. 23-24, pp. 2476–2484, 2008. @article{Greene2008, Tseng and Li (2004; Oculomotor correlates of context-guided learning in visual search. Perception & Psychophysics, 66, 1368–1378) noted that visual search with eye movements may be characterized by a search phase in which fixations do not move towards the target, followed by a phase in which fixations move steadily towards the target. They speculated that the phases are related to memory and recognition processes. Human visual search and Monte Carlo simulations are described towards an explanation. Distance-from-target dynamics were demonstrated to be sensitive to geometric constraints and therefore do not provide a solution to the question of memory in visual search. Finally, it is concluded that the specific distance-from-target dynamics noted by Tseng and Li (2004) are parsimoniously explained by random walks that were initialized at the centre of their stimulus displays. |
C. Ehresman; D. Saucier; Matthew Heath; G. Binsted Online corrections can produce illusory bias during closed-loop pointing Journal Article In: Experimental Brain Research, vol. 188, no. 3, pp. 371–378, 2008. @article{Ehresman2008, This experiment examined whether the impact of pictorial illusions during the execution of goal-directed reaching movements is attributable to ocular motor signaling. We analyzed eye and hand movements directed toward the vertex of the Müller-Lyer (ML) figure in a closed-loop procedure. Participants pointed to the right vertex of a visual stimulus in two conditions: a control condition wherein the figure (in-ML, neutral, out-ML) presented at response planning remained unchanged throughout the movement, and an experimental condition wherein a neutral figure presented at response planning was perturbed to an illusory figure (in-ML, out-ML) at movement onset. Consistent with previous work from our group (Heath et al. in Exp Brain Res 158:378-384, 2004; Heath et al. in J Mot Behav 37:179-185, 2005b), action bias was present in both conditions; thus, illusory bias was introduced during online control. Although primary saccades were influenced by illusory configurations (control conditions; see Binsted and Elliott in Hum Mov Sci 18:103-117, 1999a), illusory bias developed within the secondary "corrective" saccades during experimental trials (i.e., following a veridical primary saccade). These results support the position that a unitary spatial representation underlies both action and perception and that this representation is common to both the manual and oculomotor systems. |
Wolfgang Einhäuser; Ueli Rutishauser; Christof Koch Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli Journal Article In: Journal of Vision, vol. 8, no. 2, pp. 1–19, 2008. @article{Einhaeuser2008, In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven ("bottom-up") and task-dependent ("top-down") factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye-movements in human observers, while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free-viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability to either side of the stimulus. When the target always occurred in the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands do not only override sensory-driven saliency but also actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Whereas the bias was less persistent in free viewing, the overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast ("oddity") instead of the bull's-eye ("template"). In contrast to the other conditions, a slight sensory-driven free-viewing bias prevailed in this condition. In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency. |
Wolfgang Einhäuser; Merrielle Spain; Pietro Perona Objects predict fixations better than early saliency Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–26, 2008. @article{Einhaeuser2008a, Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as mere preprocessing step for object recognition, models of both need to be integrated. |
Wolfgang Einhäuser; James Stout; Christof Koch; Olivia Carter Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry Journal Article In: Proceedings of the National Academy of Sciences, vol. 105, no. 5, pp. 1704–1709, 2008. @article{Einhaeuser2008b, During sustained viewing of an ambiguous stimulus, an individual's perceptual experience will generally switch between the different possible alternatives rather than stay fixed on one interpretation (perceptual rivalry). Here, we measured pupil diameter while subjects viewed different ambiguous visual and auditory stimuli. For all stimuli tested, pupil diameter increased just before the reported perceptual switch and the relative amount of dilation before this switch was a significant predictor of the subsequent duration of perceptual stability. These results could not be explained by blink or eye-movement effects, the motor response or stimulus driven changes in retinal input. Because pupil dilation reflects levels of norepinephrine (NE) released from the locus coeruleus (LC), we interpret these results as suggestive that the LC-NE complex may play the same role in perceptual selection as in behavioral decision making. |
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold Visual selective attention among persons with schizophrenia: The distractor ratio effect Journal Article In: Schizophrenia Research, vol. 105, pp. 61–67, 2008. @article{Elahipanah2008, The current study investigated whether impaired visual attention among patients with schizophrenia can be accounted for by poor perceptual organization and impaired search selectivity. Twenty-three patients with schizophrenia and 22 healthy control participants completed a conjunctive visual search task where the relative frequency of the two types of distractors was manipulated. It has been shown that, when the total number of items in a display is fixed, search performance depends on the relative frequency of the types of distractors (i.e., as the ratio becomes more discrepant search time decreases). This modulation of search efficiency reflects participants' ability to group items by their perceptual similarity and then search only the smaller group of items that share a feature with the target. Results show that patients modulate their response time normally as a function of the distractor ratio – that is, they benefit from the presence of a smaller distractor subset in the display. This suggests that patients with schizophrenia group items according to their perceptual similarity and flexibly deploy their attention to the smaller subset of distractors on each trial. These results demonstrate that search selectivity as a function of the relative frequency of distractors is unimpaired among patients with schizophrenia. |
Ralf Engbert; Antje Nuthmann Self-consistent estimation of mislocated fixations during reading Journal Article In: PLoS ONE, vol. 3, no. 2, pp. e1534, 2008. @article{Engbert2008, During reading, we generate saccadic eye movements to move words into the center of the visual field for word processing. However, due to systematic and random errors in the oculomotor system, distributions of within-word landing positions are rather broad and show overlapping tails, which suggests that a fraction of fixations is mislocated and falls on words to the left or right of the selected target word. Here we propose a new procedure for the self-consistent estimation of the likelihood of mislocated fixations in normal reading. Our approach is based on iterative computation of the proportions of several types of oculomotor errors, the underlying probabilities for word-targeting, and corrected distributions of landing positions. We found that the average fraction of mislocated fixations ranges from about 10% to more than 30% depending on word length. These results show that fixation probabilities are strongly affected by oculomotor errors. |
Tom Foulsham; Alan Kingstone; Geoffrey Underwood Turning the world around: Patterns in saccade direction vary with picture orientation Journal Article In: Vision Research, vol. 48, pp. 1777–1790, 2008. @article{Foulsham2008a, The eye movements made by viewers of natural images often feature a predominance of horizontal saccades. Can this behaviour be explained by the distribution of saliency around the horizon, low-level oculomotor factors, top-down control or laboratory artefacts? Two experiments explored this bias by recording saccades whilst subjects viewed photographs rotated to varying extents, but within a constant square frame. The findings show that the dominant saccade direction follows the orientation of the scene, though this pattern varies in interiors and during recognition of previously seen pictures. This demonstrates that a horizon bias is robust and affected by both the distribution of features and more global representations of the scene layout. |
Tom Foulsham; Geoffrey Underwood What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition Journal Article In: Journal of Vision, vol. 8, no. 2, pp. 1–17, 2008. @article{Foulsham2008, Saliency map models account for a small but significant amount of the variance in where people fixate, but evaluating these models with natural stimuli has led to mixed results. In the present study, the eye movements of participants were recorded while they viewed color photographs of natural scenes in preparation for a memory test (encoding) and when recognizing them later. These eye movements were then compared to the predictions of a well defined saliency map model (L. Itti & C. Koch, 2000), in terms of both individual fixation locations and fixation sequences (scanpaths). The saliency model is a significantly better predictor of fixation location than random models that take into account bias toward central fixations, and this is the case at both encoding and recognition. However, similarity between scanpaths made at multiple viewings of the same stimulus suggests that repetitive scanpaths also contribute to where people look. Top-down recapitulation of scanpaths is a key prediction of scanpath theory (D. Noton & L. Stark, 1971), but it might also be explained by bottom-up guidance. The present data suggest that saliency cannot account for scanpaths and that incorporating these sequences could improve model predictions. |
Hans Peter Frey; Christian Honey; Peter König What's color got to do with it? The influence of color on visual attention in different categories Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008. @article{Frey2008, Certain locations attract human gaze in natural visual scenes. Are there measurable features, which distinguish these locations from others? While there has been extensive research on luminance-defined features, only few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention. |
Steven Frisson; Brian McElree Complement coercion is not modulated by competition: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 1, pp. 1–11, 2008. @article{Frisson2008a, An eye-movement study examined the processing of expressions requiring complement coercion (J. Pustejovsky, 1995), in which a noun phrase that does not denote an event (e.g., the book) appears as the complement of an event-selecting verb (e.g., began the book). Previous studies demonstrated that these expressions are more costly to process than are control expressions that can be processed with basic compositional operations (L. Pylkkänen & B. McElree, 2006). Complement coercion is thought to be costly because comprehenders need to construct an event sense of the complement to satisfy the semantic restrictions of the verb (e.g., began writing the book). The reported experiment tests the alternative hypotheses that the cost arises from the need to select 1 interpretation from several or from competition between alternative interpretations. Expressions with weakly constrained interpretations (no dominant interpretation and several alternative interpretations) were not more costly to process than expressions with a strongly constrained interpretation (1 dominant interpretation and few alternative interpretations). These results are consistent with the hypothesis that the cost reflects the on-line construction of an event sense for the complement. |
Steven Frisson; Elizabeth Niswander-Klement; Alexander Pollatsek The role of semantic transparency in the processing of English compound words Journal Article In: British Journal of Psychology, vol. 99, no. 1, pp. 87–107, 2008. @article{Frisson2008, Experiment 1 examined whether the semantic transparency of an English unspaced compound word affected how long it took to process it in reading. Three types of opaque words were each compared with a matched set of transparent words (i.e. matched on the length and frequency of the constituents and the frequency of the word as a whole). Two sets of the opaque words were partially opaque: either the first constituent was not related to the meaning of the compound (opaque-transparent) or the second constituent was not related to the meaning of the compound (transparent-opaque). In the third set (opaque-opaque), neither constituent was related to the meaning of the compound. For all three sets, there was no significant difference between the opaque and the transparent words on any eye-movement measure. This replicates an earlier finding with Finnish compound words (Pollatsek & Hyönä, 2005) and indicates that, although there is now abundant evidence that the component constituents play a role in the encoding of compound words, the meaning of the compound word is not constructed from the parts, at least for compound words for which a lexical entry exists. Experiment 2 used the same compounds but with a space between the constituents. This presentation resulted in a transparency effect, indicating that when an assembly route is 'forced', transparency does play a role. |
Steffen Gais; Sabine Köster; Andreas Sprenger; Judith Bethke; Wolfgang Heide; Hubert Kimmig Sleep is required for improving reaction times after training on a procedural visuo-motor task Journal Article In: Neurobiology of Learning and Memory, vol. 90, no. 4, pp. 610–615, 2008. @article{Gais2008, Sleep has been found to enhance consolidation of many different forms of memory. However, in most procedural tasks, a sleep-independent, fast learning component interacts with slow, sleep-dependent improvements. Here, we show that in humans a visuo-motor saccade learning task shows no improvements during training, but only during a delayed recall testing after a period of sleep. Subjects were trained in a prosaccade task (saccade to a visual target). Performance was tested in the prosaccade and the antisaccade task (saccade to opposite direction of the target) before training, after a night of sleep or sleep deprivation, after a night of recovery sleep, and finally in a follow-up test 4 weeks later. We found no immediate improvement in saccadic reaction time (SRT) during training, but a delayed reduction in SRT, indicating a slow-learning process. This reduction occurred only after a period of sleep, i.e. after the first night in the sleep group and after recovery sleep in the sleep deprivation group. This improvement was stable during the 4-week follow-up. Saccadic training can thus induce covert changes in the saccade generation pathway. During the following sleep period, these changes in turn bring about overt performance improvements, presuming a learning effect based on synaptic tagging. |
Paola Escudero; Rachel Hayes-Harb; Holger Mitterer Novel second-language words and asymmetric lexical access Journal Article In: Journal of Phonetics, vol. 36, no. 2, pp. 345–360, 2008. @article{Escudero2008, The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /ε/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /ε/ symmetrically, i.e., both /æ/ and /ε/ auditory tokens triggered looks to pictures containing both /æ/ and /ε/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /ε/ when presented with /ε/ target tokens, but looked at pictures of words containing both /æ/ and /ε/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words. |
Ali Ezzati; Ashkan Golzar; Arash S. R. Afraz Topography of the motion aftereffect with and without eye movements Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008. @article{Ezzati2008, Although a lot is known about various properties of the motion aftereffect (MAE), there is no systematic study of the topographic organization of MAE. In the current study, first we provided a topographic map of the MAE to investigate its spatial properties in detail. To provide a fine topographic map, we measured MAE with small test stimuli presented at different loci after adaptation to motion in a large region within the visual field. We found that strength of MAE is highest on the internal edge of the adapted area. Our results show a sharper aftereffect boundary for the shearing motion compared to compression and expansion boundaries. In the second experiment, using a similar paradigm, we investigated topographic deformation of the MAE area after a single saccadic eye movement. Surprisingly, we found that the topographic map of the MAE split into two separate regions after the saccade: one corresponds to the retinal location of the adapted stimulus and the other matches the spatial location of the adapted region on the display screen. The effect was stronger at the retinotopic location. The third experiment is basically a replication of the second experiment in a smaller zone that confirms the results of previous experiments in individual subjects. The eccentricity of the spatiotopic area differed from that of the retinotopic area in the second experiment; Experiment 3 controls for the effect of eccentricity and confirms the major results of the second experiment. |
Christopher A. Dickinson; Helene Intraub Transsaccadic representation of layout: What is the time course of boundary extension? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 3, pp. 543–555, 2008. @article{Dickinson2008, How rapidly does boundary extension occur? Across experiments, trials included a 3-scene sequence (325 ms/picture), masked interval, and repetition of 1 scene. The repetition was the same view or differed (more close-up or wide angle). Observers rated the repetition as same as, closer than, or more wide angle than the original view on a 5-point scale. Masked intervals were 100, 250, 625, or 1,000 ms in Experiment 1 and 42, 100, or 250 ms in Experiments 2 and 3. Boundary extension occurred in all cases: Identical views were rated as too "close-up," and distractor views elicited the rating asymmetry typical of boundary extension (wider angle distractors were rated as being more similar to the original than were closer up distractors). Most important, boundary extension was evident when only a 42-ms mask separated the original and test views. Experiments 1 and 3 included conditions eliciting a gaze shift prior to the rating test; this did not eliminate boundary extension. Results show that boundary extension is available soon enough and is robust enough to play an on-line role in view integration, perhaps supporting incorporation of views within a larger spatial framework. |
Adele Diederich; Hans Colonius Crossmodal interaction in saccadic reaction time: Separating multisensory from warning effects in the time window of integration model Journal Article In: Experimental Brain Research, vol. 186, no. 1, pp. 1–22, 2008. @article{Diederich2008, In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) non-target presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -500 (non-target prior to target) to 0 ms, but the effect was larger for ipsi- than for contralateral presentation within an SOA range from -200 ms to 0. The time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000, 2004) is extended here to separate the effect of a spatially unspecific warning effect of the non-target from a spatially specific and genuine multisensory integration effect. |
Adele Diederich; Hans Colonius Journal Article In: Brain Research, vol. 1242, pp. 219–230, 2008. @article{Diederich2008a, In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) nontarget presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -250 ms (nontarget prior to target) to 50 ms. This study specifically addressed the effect of varying nontarget intensity. While facilitation effects for auditory nontargets are somewhat more pronounced than for tactile ones, decreasing intensity slightly reduced facilitation for both types of nontargets. The time course of crossmodal mean SRT over SOA and the pattern of facilitation observed here suggest the existence of two distinct underlying mechanisms: (a) a spatially unspecific crossmodal warning triggered by the nontarget being detected early enough before the arrival of the target plus (b) a spatially specific multisensory integration mechanism triggered by the target processing time terminating within the time window of integration. It is shown that the time window of integration (TWIN) model introduced by the authors gives a reasonable quantitative account of the data relating observed SRT to the unobservable probability of integration and crossmodal warning for each SOA value under a high and low intensity level of the nontarget. |
Adele Diederich; Hans Colonius; Annette Schomburg Assessing age-related multisensory enhancement with the time-window-of-integration model Journal Article In: Neuropsychologia, vol. 46, no. 10, pp. 2556–2562, 2008. @article{Diederich2008b, Although from multisensory research a great deal is known about how the different senses interact, there is little knowledge as to the impact of aging on these multisensory processes. In this study, we measured saccadic reaction time (SRT) of aged and young individuals to the onset of a visual target stimulus with and without an accessory auditory stimulus occurring (focused attention task). The response time pattern for both groups was similar: mean SRT to bimodal stimuli was generally shorter than to unimodal stimuli, and mean bimodal SRT was shorter when the auditory accessory was presented ipsilaterally rather than contralaterally to the target. The elderly participants were considerably slower than the younger participants under all conditions but showed a greater multisensory enhancement, that is, they seem to benefit more from bimodal stimulus presentation. In an attempt to weigh the contributions of peripheral sensory processes relative to more central cognitive processes possibly responsible for the difference in the younger and older adults, the time-window-of-integration (TWIN) model for crossmodal interaction in saccadic eye movements developed by the authors was fitted to the data from both groups. The model parameters suggest that (i) there is a slowing of the peripheral sensory processing in the elderly, (ii) as a result of this slowing, the probability of integration is smaller in the elderly even with a wider time-window-of-integration, and (iii) multisensory integration, if it occurs, manifests itself in larger neural enhancement in the elderly; however, because of (ii), on average the integration effect is not large enough to compensate for the peripheral slowing in the elderly. |
Gregory J. Digirolamo; Jason S. McCarley; Arthur F. Kramer; Harry J. Griffin Voluntary and reflexive eye movements to illusory lengths Journal Article In: Visual Cognition, vol. 16, no. 1, pp. 68–89, 2008. @article{Digirolamo2008, Considerable debate surrounds the extent and manner that motor control is, like perception, susceptible to visual illusions. Using the Brentano version of the Müller-Lyer illusion, we measured the accuracy of voluntary and reflexive eye movements to the endpoints of equal length line segments that appeared different (Experiment 1) and different length line segments that appeared equal (Experiment 3). Voluntary and reflexive saccades were both influenced by the illusion, but the former were more strongly biased and closer to the subjective percept. Experiment 2 demonstrated that these data were the results of the illusion and not centre-of-gravity effects. The representations underlying perception and action interact and this interaction produces biases for actions, particularly voluntary actions. |
Mieke Donk; Wieske Zoest Effects of salience are short-lived Journal Article In: Psychological Science, vol. 19, no. 7, pp. 733–739, 2008. @article{Donk2008, A salient event in the visual field tends to attract attention and the eyes. To account for the effects of salience on visual selection, models generally assume that the human visual system continuously holds information concerning the relative salience of objects in the visual field. Here we show that salience in fact drives vision only during the short time interval immediately following the onset of a visual scene. In a saccadic target-selection task, human performance in making an eye movement to the most salient element in a display was accurate when response latencies were short, but was at chance when response latencies were long. In a manual discrimination task, performance in making a judgment of salience was more accurate with brief than with long display durations. These results suggest that salience is represented in the visual system only briefly after a visual image enters the brain. |
Denise D. J. Grave; Constanze Hesse; Anne-Marie Brouwer; Volker H. Franz Fixation locations when grasping partly occluded objects Journal Article In: Journal of Vision, vol. 8, no. 7, pp. 1–11, 2008. @article{Grave2008, When grasping an object, subjects tend to look at the contact positions of the digits (A. M. Brouwer, V. H. Franz, D. Kerzel, & K. R. Gegenfurtner, 2005; R. S. Johansson, G. Westling, A. Bäckström, & J. R. Flanagan, 2001). However, these contact positions are not always visible due to occlusion. Subjects might look at occluded parts to determine the location of the contact positions based on extrapolated information. On the other hand, subjects might avoid looking at occluded parts since no object information can be gathered there. To find out where subjects fixate when grasping occluded objects, we let them grasp flat shapes with the index finger and thumb at predefined contact positions. Either the contact position of the thumb or the finger or both was occluded. In a control condition, a part of the object that does not involve the contact positions was occluded. The results showed that subjects did look at occluded object parts, suggesting that they used extrapolated object information for grasping. Additionally, they preferred to look in the direction of the index finger. When the contact position of the index finger was occluded, this tendency was inhibited. Thus, an occluder does not prevent fixations on occluded object parts, but it does affect fixation locations especially in conditions where the preferred fixation location is occluded. |
Marc H. E. Lussanet; Luciano Fadiga; Lars Michels; Rüdiger J. Seitz; Raimund Kleiser; Markus Lappe Interaction of visual hemifield and body view in biological motion perception Journal Article In: European Journal of Neuroscience, vol. 27, no. 2, pp. 514–522, 2008. @article{Lussanet2008, The brain network for the recognition of biological motion includes visual areas and structures of the mirror-neuron system. The latter respond during action execution as well as during action recognition. As motor and somatosensory areas predominantly represent the contralateral side of the body and visual areas predominantly process stimuli from the contralateral hemifield, we were interested in interactions between visual hemifield and action recognition. In the present study, human participants detected the facing direction of profile views of biological motion stimuli presented in the visual periphery. They recognized a right-facing body view of human motion better in the right visual hemifield than in the left; and a left-facing body view better in the left visual hemifield than in the right. In a subsequent fMRI experiment, performed with a similar task, two cortical areas in the left and right hemispheres were significantly correlated with the behavioural facing effect: primary somatosensory cortex (BA 2) and inferior frontal gyrus (BA 44). These areas were activated specifically when point-light stimuli presented in the contralateral visual hemifield displayed the side view of their contralateral body side. Our results indicate that the hemispheric specialization of one's own body map extends to the visual representation of the bodies of others. |
Denis Drieghe Foveal processing and word skipping during reading Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 4, pp. 856–860, 2008. @article{Drieghe2008, An eyetracking experiment is reported examining the assumption that a word is skipped during sentence reading because parafoveal processing during preceding fixations has reached an advanced level in recognizing that word. Word n was presented with reduced contrast, with case alternation, or normally. Reingold and Rayner (2006) reported that, in comparison to the normal condition, reduced contrast increased viewing times on word n but not on word n+1, whereas case alternation increased viewing times on both words. These patterns were reflected in the fixation times of the present experiment, but a striking dissociation was observed in the skipping of word n+1: The reduced contrast of word n decreased skipping of word n+1, whereas case alternation did not. Apart from the amount of parafoveal processing, the decision to skip word n+1 is also influenced by the ease of processing word n: Difficulties in processing word n lead to a more conservative strategy in the decision to skip word n+1. |
Jacob Duijnhouwer; Richard J. A. Wezel; Albert V. Van den Berg The role of motion capture in an illusory transformation of optic flow fields. Journal Article In: Journal of Vision, vol. 8, no. 4, pp. 1–18, 2008. @article{Duijnhouwer2008, In the optic flow illusion, the focus of an expanding optic flow field appears shifted when uniform flow is transparently superimposed. The shift is in the direction of the uniform flow, or "inducer." Current explanations relate the transformation of the expanding optic flow field to perceptual subtraction of the inducer signal. Alternatively, the shift might result from motion capture acting on the perceived focus position. To test this alternative, we replaced expanding target flow with contracting or rotating flow. Current explanations predict focus shifts in expanding and contracting flows that are opposite but of equal magnitude and parallel to the inducer. In rotary flow, the current explanations predict shifts that are perpendicular to the inducer. In contrast, we report larger shift for expansion than for contraction and a component of shift parallel to the inducer for rotary flow. The magnitude of this novel component of shift depended on the target flow speed, the inducer flow speed, and the presentation duration. These results support the idea that motion capture contributes substantially to the optic flow illusion. |
Kristie R. Dukewich; Raymond M. Klein; John Christie The effect of gaze on gaze direction while looking at art Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1141–1147, 2008. @article{Dukewich2008, In highly controlled cuing experiments, conspecific gaze direction has powerful effects on an observer's attention. We explored the generality of this effect by using paintings in which the gaze direction of a key character had been carefully manipulated. Our observers looked at these paintings in one of three instructional states (neutral, social, or spatial) while we monitored their eye movements. Overt orienting was much less influenced by the critical gaze direction than what the cuing literature might suggest: An analysis of the direction of saccades following the first fixation of the critical gaze showed that observers were weakly biased to orient in the direction of the gaze. Over longer periods of viewing, however, this effect disappeared for all but the social condition. This restriction of gaze as an attentional cue to a social context is consistent with the idea that the evolution of gaze direction detection is rooted in social communication. The picture stimuli from this experiment can be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive. |
Jon Andoni Duñabeitia; Alberto Avilés; Manuel Carreiras Noah's ark: Influence of the number of associates in visual word recognition Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1072–1077, 2008. @article{Dunabeitia2008, The main aim of this study was to explore the extent to which the number of associates of a word (NoA) influences lexical access, in four tasks that focus on different processes of visual word recognition: lexical decision, reading aloud, progressive demasking, and online sentence reading. Results consistently showed that words with a dense associative neighborhood (high-NoA words) were processed faster than words with a sparse neighborhood (low-NoA words), extending previous findings from English lexical decision and categorization experiments. These results are interpreted in terms of the higher degree of semantic richness of high-NoA words as compared with low-NoA words. |
Frank H. Durgin; Erika Doyle; Louisa Egan Upper-left gaze bias reveals competing search strategies in a reverse Stroop task Journal Article In: Acta Psychologica, vol. 127, no. 2, pp. 428–448, 2008. @article{Durgin2008, Three experiments with a total of 87 human observers revealed an upper-left spatial bias in the initial movement of gaze during visual search. The bias was present whether or not the explicit control of gaze was required for the task. This bias may be part of a search strategy that competed with the fixed-gaze parallel search strategy hypothesized by Durgin [Durgin, F. H. (2003). Translation and competition among internal representations in a reverse Stroop effect. Perception & Psychophysics, 65, 367-378.] for this task. When the spatial probabilities of the search target were manipulated either in accord with or in opposition to the existing upper-left bias, two orthogonal factors of interference in the latency data were differentially affected. The two factors corresponded to two different forms of representation and search. Target probabilities consistent with the gaze bias encouraged opportunistic serial search (including gaze shifts), while symmetrically opposing target probabilities produced latency patterns more consistent with parallel search based on a sensory code. |
Meghan Clayards; Michael K. Tanenhaus; Richard N. Aslin; Robert A. Jacobs Perception of speech reflects optimal use of probabilistic speech cues Journal Article In: Cognition, vol. 108, no. 3, pp. 804–809, 2008. @article{Clayards2008, Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions. |
Thérèse Collins; Tobias Schicke; Brigitte Röder Action goal selection and motor planning can be dissociated by tool use Journal Article In: Cognition, vol. 109, no. 3, pp. 363–371, 2008. @article{Collins2008, The preparation of eye or hand movements enhances visual perception at the upcoming movement end position. The spatial location of this influence of action on perception could be determined either by goal selection or by motor planning. We employed a tool use task to dissociate these two alternatives. The instructed goal location was a visual target to which participants pointed with the tip of a triangular hand-held tool. The motor endpoint was defined by the final fingertip position necessary to bring the tool tip onto the goal. We tested perceptual performance at both locations (tool tip endpoint, motor endpoint) with a visual discrimination task. Discrimination performance was enhanced in parallel at both spatial locations, but not at nearby and intermediate locations, suggesting that both action goal selection and motor planning contribute to visual perception. In addition, our results challenge the widely held view that tools extend the body schema and suggest instead that tool use enhances perception at those precise locations which are most relevant during tool action: the body part used to manipulate the tool, and the active tool tip. |
R. Contreras; Rachel Kolster; Henning U. Voss; Jamshid Ghajar; M. Suh; S. Bahar Eye-target synchronization in mild traumatic brain-injured patients Journal Article In: Journal of Biological Physics, vol. 34, no. 3-4, pp. 381–392, 2008. @article{Contreras2008, Eye-target synchronization is critical for effective smooth pursuit of a moving visual target. We apply the nonlinear dynamical technique of stochastic-phase synchronization to human visual pursuit of a moving target, in both normal and mild traumatic brain-injured (mTBI) patients. We observe significant fatigue effects in all subject populations, in which subjects synchronize better with the target during the first half of the trial than in the second half. The fatigue effect differed, however, between the normal and the mTBI populations and between old and young subpopulations of each group. In some cases, the younger (≤40 years old) normal subjects performed better than mTBI subjects and also better than older (>40 years old) normal subjects. Our results, however, suggest that further studies will be necessary before a standard of "normal" smooth pursuit synchronization can be developed. |
Manuel G. Calvo; Pedro Avero Affective priming of emotional pictures in parafoveal vision: Left visual field advantage Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 8, no. 1, pp. 41–53, 2008. @article{Calvo2008, This study investigated whether stimulus affective content can be extracted from visual scenes when these appear in parafoveal locations of the visual field and are foveally masked, and whether there is lateralization involved. Parafoveal prime pleasant or unpleasant scenes were presented for 150 msec 2.5° away from fixation and were followed by a foveal probe scene that was either congruent or incongruent in emotional valence with the prime. Participants responded whether the probe was emotionally positive or negative. Affective priming was demonstrated by shorter response latencies for congruent than for incongruent prime-probe pairs. This effect occurred when the prime was presented in the left visual field at a 300-msec prime-probe stimulus onset asynchrony, even when the prime and the probe were different in physical appearance and semantic category. This result reveals that the affective significance of emotional stimuli can be assessed early through covert attention mechanisms, in the absence of overt eye fixations on the stimuli, and suggests that right-hemisphere dominance is involved. |
Manuel G. Calvo; Michael W. Eysenck Affective significance enhances covert attention: Roles of anxiety and word familiarity Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 11, pp. 1669–1686, 2008. @article{Calvo2008a, To investigate the processing of emotional words by covert attention, threat-related, positive, and neutral word primes were presented parafoveally (2.2 degrees away from fixation) for 150 ms, under gaze-contingent foveal masking, to prevent eye fixations. The primes were followed by a probe word in a lexical-decision task. In Experiment 1, results showed a parafoveal threat-anxiety superiority: Parafoveal prime threat words facilitated responses to probe threat words for high-anxiety individuals, in comparison with neutral and positive words, and relative to low-anxiety individuals. This reveals an advantage in threat processing by covert attention, without differences in overt attention. However, anxiety was also associated with greater familiarity with threat words, and the parafoveal priming effects were significantly reduced when familiarity was covaried out. To further examine the role of word knowledge, in Experiment 2, vocabulary and word familiarity were equated for low- and high-anxiety groups. In these conditions, the parafoveal threat-anxiety advantage disappeared. This suggests that the enhanced covert-attention effect depends on familiarity with words. |
Manuel G. Calvo; Lauri Nummenmaa Detection of emotional faces: Salient physical features guide effective visual search Journal Article In: Journal of Experimental Psychology: General, vol. 137, no. 3, pp. 471–494, 2008. @article{Calvo2008b, In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features–especially the smiling mouth–is responsible for facilitated initial orienting, which thus shortens detection. |
Manuel G. Calvo; Lauri Nummenmaa; Pedro Avero Visual search of emotional faces: Eye-movement assessment of component processes Journal Article In: Experimental Psychology, vol. 55, no. 6, pp. 359–370, 2008. @article{Calvo2008c, In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after having fixated the target, which implies that detection of all facial expressions is post- rather than preattentional. |
Manuel G. Calvo; Lauri Nummenmaa; Jukka Hyönä Emotional scenes in peripheral vision: Selective orienting and gist processing, but not content identification Journal Article In: Emotion, vol. 8, no. 1, pp. 68–80, 2008. @article{Calvo2008d, Emotional-neutral pairs of visual scenes were presented peripherally (with their inner edges 5.2 degrees away from fixation) as primes for 150 to 900 ms, followed by a centrally presented recognition probe scene, which was either identical in specific content to one of the primes or related in general content and affective valence. Results indicated that (a) if no foveal fixations on the primes were allowed, the false alarm rate for emotional probes was increased; (b) hit rate and sensitivity (A') were higher for emotional than for neutral probes only when a fixation was possible on only one prime; and (c) emotional scenes were more likely to attract the first fixation than neutral scenes. It is concluded that the specific content of emotional or neutral scenes is not processed in peripheral vision. Nevertheless, a coarse impression of emotional scenes may be extracted, which then leads to selective attentional orienting or–in the absence of overt attention–causes false alarms for related probes. |
Gideon P. Caplovitz; Nora A. Paymer; Peter U. Tse The drifting edge illusion: A stationary edge abutting an oriented drifting grating appears to move because of the 'other aperture problem' Journal Article In: Vision Research, vol. 48, no. 22, pp. 2403–2414, 2008. @article{Caplovitz2008, We describe the Drifting Edge Illusion (DEI), in which a stationary edge appears to move when it abuts a drifting grating. Although a single edge is sufficient to perceive DEI, a particularly compelling version of DEI occurs when a drifting grating is viewed through an oriented and stationary aperture. The magnitude of the illusion depends crucially on the orientations of the grating and aperture. Using psychophysics, we describe the relationship between the magnitude of DEI and the relative angle between the grating and aperture. Results are discussed in the context of the roles of occlusion, component-motion, and contour relationships in the interpretation of motion information. In particular, we suggest that the visual system is posed with solving an ambiguity other than the traditionally acknowledged aperture problem of determining the direction of motion of the drifting grating. In this 'second aperture problem' or 'edge problem', a motion signal may belong to either the occluded or occluding contour. That is, the motion along the contour can arise either because the grating is drifting or because the edge is drifting over a stationary grating. DEI appears to result from a misattribution of motion information generated by the drifting grating to the stationary contours of the aperture, as if the edges are interpreted to travel over the grating, although they are in fact stationary. |
Maria Nella Carminati; Roger P. G. Gompel; Christoph Scheepers; Manabu Arai Syntactic priming in comprehension: The role of argument order and animacy Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 5, pp. 1098–1110, 2008. @article{Carminati2008, Two visual-world eye-movement experiments investigated the nature of syntactic priming during comprehension–specifically, whether the priming effects in ditransitive prepositional object (PO) and double object (DO) structures (e.g., "The wizard will send the poison to the prince/the prince the poison?") are due to anticipation of structural properties following the verb (send) in the target sentence or to anticipation of animacy properties of the first postverbal noun. Shortly following the target verb onset, listeners looked at the recipient more (relative to the theme) following DO than PO primes, indicating that the structure of the prime affected listeners' eye gazes on the target scene. Crucially, this priming effect was the same irrespective of whether the postverbal nouns in the prime sentences did ("The monarch will send the painting to the president") or did not ("The monarch will send the envoy to the president") differ in animacy, suggesting that PO/DO priming in comprehension occurs because structural properties, rather than animacy features, are being primed when people process the ditransitive target verb. |
Jonathan S. A. Carriere; Daniel Eaton; Michael G. Reynolds; Mike J. Dixon; Daniel Smilek Grapheme–color synesthesia influences overt visual attention Journal Article In: Journal of Cognitive Neuroscience, vol. 21, no. 2, pp. 246–258, 2008. @article{Carriere2008, For individuals with grapheme–color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme–color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye tracking analysis showed that synesthetes exhibited a color congruity bias—a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters—in a naturalistic free-viewing task. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia. |
Monica S. Castelhano; Alexander Pollatsek; Kyle R. Cave Typicality aids search for an unspecified target, but only in identification and not in attentional guidance Journal Article In: Psychonomic Bulletin & Review, vol. 15, no. 4, pp. 795–801, 2008. @article{Castelhano2008, Participants searched for a picture of an object, and the object was either a typical or an atypical category member. The object was cued by either the picture or its basic-level category name. Of greatest interest was whether it would be easier to search for typical objects than to search for atypical objects. The answer was "yes," but only in a qualified sense: There was a large typicality effect on response time only for name cues, and almost none of the effect was found in the time to locate (i.e., first fixate) the target. Instead, typicality influenced verification time: the time to respond to the target once it was fixated. Typicality is thus apparently irrelevant when the target is well specified by a picture cue; even when the target is underspecified (as with a name cue), it does not aid attentional guidance, but only facilitates categorization. |
David D. Cox; Alexander M. Papanastassiou; Daniel Oreper; Benjamin B. Andken; James J. DiCarlo High-resolution three-dimensional microelectrode brain mapping using stereo microfocal x-ray imaging Journal Article In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2966–2976, 2008. @article{Cox2008, Much of our knowledge of brain function has been gleaned from studies using microelectrodes to characterize the response properties of individual neurons in vivo. However, because it is difficult to accurately determine the location of a microelectrode tip within the brain, it is impossible to systematically map the fine three-dimensional spatial organization of many brain areas, especially in deep structures. Here, we present a practical method based on digital stereo microfocal X-ray imaging that makes it possible to estimate the three-dimensional position of each and every microelectrode recording site in "real time" during experimental sessions. We determined the system's ex vivo localization accuracy to be better than 50 µm, and we show how we have used this method to coregister hundreds of deep-brain microelectrode recordings in monkeys to a common frame of reference with median error of <150 µm. We further show how we can coregister those sites with magnetic resonance images (MRIs), allowing for comparison with anatomy, and laying the groundwork for more detailed electrophysiology/functional MRI comparison. Minimally, this method allows one to marry the single-cell specificity of microelectrode recording with the spatial mapping abilities of imaging techniques; furthermore, it has the potential of yielding fundamentally new kinds of high-resolution maps of brain function. |
Matthew T. Crawford; John J. Skowronski; Chris Stiff; Ute Leonards Seeing, but not thinking: Limiting the spread of spontaneous trait transference II Journal Article In: Journal of Experimental Social Psychology, vol. 44, no. 3, pp. 840–847, 2008. @article{Crawford2008, When an informant describes trait-implicative behavior of a target, the informant is often associated with the trait implied by the behavior and can be assigned heightened ratings on that trait (STT effects). Presentation of a target photo along with the description seemingly eliminates these effects. Using three different measures of visual attention, the results of two studies show the elimination of STT effects by target photo presentation cannot be attributed to associative mechanisms linked to enhanced visual attention to targets. Instead, presentation of a target's photo likely prompts perceivers to spontaneously make target inferences in much the same way they make spontaneous inferences about self-describers. As argued by Todorov and Uleman [Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality & Social Psychology, 87, 482-493], such attributional processing can preclude the formation of trait associations to informants. |
Sarah C. Creel; Richard N. Aslin; Michael K. Tanenhaus Heeding the voice of experience: The role of talker variation in lexical access Journal Article In: Cognition, vol. 106, no. 2, pp. 633–664, 2008. @article{Creel2008, Two experiments used the head-mounted eye-tracking methodology to examine the time course of lexical activation in the face of a non-phonemic cue, talker variation. We found that lexical competition was attenuated by consistent talker differences between words that would otherwise be lexical competitors. In Experiment 1, some English cohort word-pairs were consistently spoken by a single talker (male couch, male cows), while other word-pairs were spoken by different talkers (male sheep, female sheet). After repeated instances of talker-word pairings, words from different-talker pairs showed smaller proportions of competitor fixations than words from same-talker pairs. In Experiment 2, participants learned to identify black-and-white shapes from novel labels spoken by one of two talkers. All of the 16 novel labels were VCVCV word-forms atypical of, but not phonologically illegal in, English. Again, a word was consistently spoken by one talker, and its cohort or rhyme competitor was consistently spoken either by that same talker (same-talker competitor) or the other talker (different-talker competitor). Targets with different-talker cohorts received greater fixation proportions than targets with same-talker cohorts, while the reverse was true for fixations to cohort competitors; there were fewer erroneous selections of competitor referents for different-talker competitors than same-talker competitors. Overall, these results support a view of the lexicon in which entries contain extra-phonemic information. Extensions of the artificial lexicon paradigm and developmental implications are discussed. |
Michael D. Crossland; Antony B. Morland; Mary P. Feely; Elisabeth Hagen; Gary S. Rubin The effect of age and fixation instability on retinotopic mapping of primary visual cortex Journal Article In: Investigative Ophthalmology & Visual Science, vol. 49, no. 8, pp. 3734–3739, 2008. @article{Crossland2008, PURPOSE: Functional magnetic resonance imaging (fMRI) experiments determining the retinotopic structure of visual cortex have commonly been performed on young adults, who are assumed to be able to maintain steady fixation throughout the trial duration. The authors quantified the effects of age and fixation stability on the quality of retinotopic maps of primary visual cortex. METHODS: With the use of a 3T fMRI scanner, the authors measured cortical activity in six older and six younger normally sighted participants observing an expanding flickering checkerboard stimulus of 30 degrees diameter. The area of flattened primary visual cortex (V1) showing any blood oxygen level-dependent (BOLD) activity to the visual stimulus and the area responding to the central 3.75 degrees of the stimulus (relating to the central ring of our target) were recorded. Fixation stability was measured while participants observed the same stimuli outside the scanner using an infrared gazetracker. RESULTS: There were no age-related changes in the area of V1. However, the proportion of V1 active to our visual stimulus was lower for the older observers than for the younger observers (overall activity: 89.8% of V1 area for older observers, 98.6% for younger observers; P <0.05). This effect was more pronounced for the central 3.75 degrees of the target (older subjects, 26.4%; younger subjects, 40.7%; P <0.02). No significant relationship existed between fixation stability and age or the magnitude of activity in the primary visual cortex. CONCLUSIONS: Although the cortical area remains unchanged, healthy older persons show less BOLD activity in V1 than do younger persons. 
Normal variations in fixation stability do not have a significant effect on the accuracy of experiments to determine the retinotopic structure of the visual cortex. |
Jan Churan; Farhan A. Khawaja; James M. G. Tsui; Christopher C. Pack Brief motion stimuli preferentially activate surround-suppressed neurons in macaque visual area MT Journal Article In: Current Biology, vol. 18, no. 22, pp. 1–6, 2008. @article{Churan2008, Intuitively one might think that larger objects should be easier to see, and indeed performance on visual tasks generally improves with increasing stimulus size [1,2]. Recently, a remarkable exception to this rule was reported [3]: when a high-contrast, moving stimulus is presented very briefly, motion perception deteriorates as stimulus size increases. This psychophysical surround suppression has been interpreted as a correlate of the neuronal surround suppression that is commonly found in the visual cortex [3-5]. However, many visual cortical neurons lack surround suppression, and so one might expect that the brain would simply use their outputs to discriminate the motion of large stimuli. Indeed previous work has generally found that observers rely on whichever neurons are most informative about the stimulus to perform similar psychophysical tasks [6]. Here we show that the responses of neurons in the middle temporal (MT) area of macaque monkeys provide a simple resolution to this paradox. We find that surround-suppressed MT neurons integrate motion signals relatively quickly, so that by comparison non-suppressed neurons respond poorly to brief stimuli. Thus, psychophysical surround suppression for brief stimuli can be viewed as a consequence of a strategy that weights neuronal responses according to how informative they are about a given stimulus. If this interpretation is correct, then it follows that any psychophysical experiment that uses brief motion stimuli will effectively probe the responses of MT neurons that have strong surround suppression. |
Lillian Chen; Julie E. Boland Dominance and context effects on activation of alternative homophone meanings Journal Article In: Memory & Cognition, vol. 36, no. 7, pp. 1306–1323, 2008. @article{Chen2008, Two eyetracking-during-listening experiments showed frequency and context effects on fixation probability for pictures representing multiple meanings of homophones. Participants heard either an imperative sentence instructing them to look at a homophone referent (Experiment 1) or a declarative sentence that was either neutral or biased toward the homophone's subordinate meaning (Experiment 2). At homophone onset in both experiments, the participants viewed four pictures: (1) a referent of one homophone meaning, (2) a shape competitor for a nonpictured homophone meaning, and (3) two unrelated filler objects. In Experiment 1, meaning dominance affected looks to both the homophone referent and the shape competitor. In Experiment 2, as compared with neutral contexts, subordinate-biased contexts lowered the fixation probability for shape competitors of dominant meanings, but shape competitors still attracted more looks than would be expected by chance. We discuss the consistencies and discrepancies of these findings with the selective access and reordered access theories of lexical ambiguity resolution. |
Sarah Brown-Schmidt; Christine Gunlogson; Michael K. Tanenhaus Addressees distinguish shared from private information when interpreting questions during interactive conversation Journal Article In: Cognition, vol. 107, no. 3, pp. 1122–1134, 2008. @article{BrownSchmidt2008, Two experiments examined the role of common ground in the production and on-line interpretation of wh-questions such as What's above the cow with shoes? Experiment 1 examined unscripted conversation, and found that speakers consistently use wh-questions to inquire about information known only to the addressee. Addressees were sensitive to this tendency, and quickly directed attention toward private entities when interpreting these questions. A second experiment replicated the interpretation findings in a more constrained setting. These results add to previous evidence that the common ground influences initial language processes, and suggest that the strength and polarity of common ground effects may depend on contributions of sentence type as well as the interactivity of the situation. |
Julie N. Buchan; Martin Paré; Kevin G. Munhall The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception Journal Article In: Brain Research, vol. 1242, pp. 162–171, 2008. @article{Buchan2008, During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces. |
Antimo Buonocore; Robert D. McIntosh Saccadic inhibition underlies the remote distractor effect Journal Article In: Experimental Brain Research, vol. 191, no. 1, pp. 117–122, 2008. @article{Buonocore2008, The remote distractor effect is a robust finding whereby a saccade to a lateralised visual target is delayed by the simultaneous, or near simultaneous, onset of a distractor in the opposite hemifield. Saccadic inhibition is a more recently discovered phenomenon whereby a transient change to the scene during a visual task induces a depression in saccadic frequency beginning within 70 ms, and maximal around 90-100 ms. We assessed whether saccadic inhibition is responsible for the increase in saccadic latency induced by remote distractors. Participants performed a simple saccadic task in which the delay between target and distractor was varied between 0, 25, 50, 100 and 150 ms. Examination of the distributions of saccadic latencies showed that each distractor produced a discrete dip in saccadic frequency, time-locked to distractor onset, conforming closely to the character of saccadic inhibition. We conclude that saccadic inhibition underlies the remote distractor effect. |
Ian Cunnings; Harald Clahsen The time-course of morphological constraints: A study of plurals inside derived words Journal Article In: The Mental Lexicon, vol. 3, no. 2, pp. 149–175, 2008. @article{Cunnings2008, The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints that are responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during the reading of derived words containing regular and irregular plurals and uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties, but is instead morphological in nature. The eye-movement data provide detailed information on the time-course of processing derived word forms indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a later stage of processing. We argue that these results are consistent with stage-based models of language processing. |
Delphine Dahan; Sarah J. Drucker; Rebecca A. Scarborough Talker adaptation in speech perception: Adjusting the signal or the representations? Journal Article In: Cognition, vol. 108, no. 3, pp. 710–718, 2008. @article{Dahan2008, Past research has established that listeners can accommodate a wide range of talkers in understanding language. How this adjustment operates, however, is a matter of debate. Here, listeners were exposed to spoken words from a speaker of an American English dialect in which the vowel /æ/ is raised before /g/, but not before /k/. Results from two experiments showed that listeners' identification of /k/-final words like back (which are unaffected by the dialect) was facilitated by prior exposure to their dialect-affected /g/-final counterparts, e.g., bag. This facilitation occurred because the competition between interpretations, e.g., bag or back, while hearing the initial portion of the input [bæ], was mitigated by the reduced probability for the input to correspond to bag as produced by this talker. Thus, adaptation to an accent is not just a matter of adjusting the speech signal as it is being heard; adaptation involves dynamic adjustment of the representations stored in the lexicon, according to the characteristics of the speaker or the context. |
Stephen V. David; Benjamin Y. Hayden; James A. Mazer; Jack L. Gallant Attention to stimulus features shifts spectral tuning of V4 neurons during natural vision Journal Article In: Neuron, vol. 59, no. 3, pp. 509–521, 2008. @article{David2008, Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search. |
Scott L. Davis; Teresa C. Frohman; C. J. Crandall; M. J. Brown; D. A. Mills; Phillip D. Kramer; O. Stuve; Elliot M. Frohman Modeling Uhthoff's phenomenon in MS patients with internuclear ophthalmoparesis Journal Article In: Neurology, vol. 70, pp. 1098–1106, 2008. @article{Davis2008, Objective: The goal of this investigation was to demonstrate that internuclear ophthalmoparesis (INO) can be utilized to model the effects of body temperature-induced changes on the fidelity of axonal conduction in multiple sclerosis (Uhthoff's phenomenon). Methods: Ocular motor function was measured using infrared oculography at 10-minute intervals in patients with multiple sclerosis (MS) with INO (MS-INO; n=8), patients with MS without INO (MS-CON; n=8), and matched healthy controls (CON; n=8) at normothermic baseline, during whole-body heating (an increase in core temperature of 0.8°C as measured by an ingestible temperature probe and transabdominal telemetry), and after whole-body cooling. The versional disconjugacy index (velocity-VDI), the ratio of abducting/adducting eye movements for velocity, was calculated to assess changes in interocular disconjugacy. The first pass amplitude (FPA), the position of the adducting eye when the abducting eye achieves a centrifugal fixation target, was also computed. Results: Velocity-VDI and FPA in MS-INO patients were elevated (p<0.001) following whole-body heating with respect to baseline measures, confirming a compromise in axonal electrical impulse transmission properties. Velocity-VDI and FPA in MS-INO patients were then restored to baseline values following whole-body cooling, confirming the reversible and stereotyped nature of this characteristic feature of demyelination. Conclusions: We have developed a neurophysiologic model for objectively understanding temperature-related reversible changes in axonal conduction in multiple sclerosis. 
Our observations corroborate the hypothesis that changes in core body temperature (heating and cooling) are associated with stereotypic decay and restoration in axonal conduction mechanisms. |
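The velocity-VDI defined in the Davis et al. abstract above is a simple ratio of abducting to adducting eye-movement velocity. As a minimal illustrative sketch (an assumption about how such a metric could be computed, not the authors' implementation; all numeric values below are hypothetical), a per-saccade ratio can be averaged across trials:

```python
# Illustrative sketch: a versional disconjugacy index (velocity-VDI)
# computed as the mean ratio of abducting to adducting peak saccadic
# velocities. Example velocities below are hypothetical, not study data.

def velocity_vdi(abducting_peaks, adducting_peaks):
    """Mean abducting/adducting peak-velocity ratio across saccades.

    Values near 1.0 indicate conjugate movements; values well above
    1.0 reflect adduction slowing, as described for INO.
    """
    ratios = [abd / add for abd, add in zip(abducting_peaks, adducting_peaks)]
    return sum(ratios) / len(ratios)

# Hypothetical peak velocities (deg/s) for three saccades
abducting = [400.0, 420.0, 390.0]
adducting = [200.0, 210.0, 195.0]
print(round(velocity_vdi(abducting, adducting), 2))  # 2.0
```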
Britt Anderson; Ryan E. B. Mruczek; Keisuke Kawasaki; David L. Sheinberg Effects of familiarity on neural activity in monkey inferior temporal lobe Journal Article In: Cerebral Cortex, vol. 18, no. 11, pp. 2540–2552, 2008. @article{Anderson2008a, Long-term familiarity facilitates recognition of visual stimuli. To better understand the neural basis for this effect, we measured the local field potential (LFP) and multiunit spiking activity (MUA) from the inferior temporal (IT) lobe of behaving monkeys in response to novel and familiar images. In general, familiar images evoked larger amplitude LFPs whereas MUA responses were greater for novel images. Familiarity effects were attenuated by image rotations in the picture plane of 45 degrees. Decreasing image contrast led to more pronounced decreases in LFP response magnitude for novel, compared with familiar images, and resulted in more selective MUA response profiles for familiar images. The shape of individual LFP traces could be used for stimulus classification, and classification performance was better for the familiar image category. Recording the visual and auditory evoked LFP at multiple depths showed significant alterations in LFP morphology with distance changes of 2 mm. In summary, IT cortex shows local processing differences for familiar and novel images at a time scale and in a manner consistent with the observed behavioral advantage for classifying familiar images and rapidly detecting novel stimuli. |
Britt Anderson; David L. Sheinberg Effects of temporal context and temporal expectancy on neural activity in inferior temporal cortex Journal Article In: Neuropsychologia, vol. 46, no. 4, pp. 947–957, 2008. @article{Anderson2008, Timing is critical. The same event can mean different things at different times and some events are more likely to occur at one time than another. We used a cued visual classification task to evaluate how changes in temporal context affect neural responses in inferior temporal cortex, an extrastriate visual area known to be involved in object processing. On each trial a first image cued a temporal delay before a second target image appeared. The animal's task was to classify the second image by pressing one of two buttons previously associated with that target. All images were used as both cues and targets. Whether an image cued a delay time or signaled a button press depended entirely upon whether it was the first or second picture in a trial. This paradigm allowed us to compare inferior temporal cortex neural activity to the same image subdivided by temporal context and expectation. Neuronal spiking was more robust and visually evoked local field potentials (LFPs) larger for target presentations than for cue presentations. On invalidly cued trials, when targets appeared unexpectedly early, the magnitude of the evoked LFP was reduced and delayed and neuronal spiking was attenuated. Spike field coherence increased in the beta-gamma frequency range for expected targets. In conclusion, different neural responses in higher order ventral visual cortex may occur for the same visual image based on manipulations of temporal attention. |
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard A role for spatial and nonspatial working memory processes in visual search Journal Article In: Experimental Psychology, vol. 55, no. 5, pp. 301–312, 2008. @article{Anderson2008b, Searching a cluttered visual scene for a specific item of interest can take several seconds to perform if the target item is difficult to discriminate from surrounding items. Whether working memory processes are utilized to guide the path of attentional selection during such searches remains under debate. Previous studies have found evidence to support a role for spatial working memory in inefficient search, but the role of nonspatial working memory remains unclear. Here, we directly compared the role of spatial and nonspatial working memory for both an efficient and inefficient search task. In Experiment 1, we used a dual-task paradigm to investigate the effect of performing visual search within the retention interval of a spatial working memory task. Importantly, by incorporating two working memory loads (low and high) we were able to make comparisons between dual-task conditions, rather than between dual-task and single-task conditions. This design allows any interference effects observed to be attributed to changes in memory load, rather than to nonspecific effects related to "dual-task" performance. We found that the efficiency of the inefficient search task declined as spatial memory load increased, but that the efficient search task remained efficient. These results suggest that spatial memory plays an important role in inefficient but not efficient search. In Experiment 2, participants performed the same visual search tasks within the retention interval of visually matched spatial and verbal working memory tasks. 
Critically, we found comparable dual-task interference between inefficient search and both the spatial and nonspatial working memory tasks, indicating that inefficient search recruits working memory processes common to both domains. |
Bernhard Angele; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner Parafoveal processing in reading: Manipulating n+1 and n+2 previews simultaneously Journal Article In: Visual Cognition, vol. 16, no. 6, pp. 697–707, 2008. @article{Angele2008, The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading. |
Sarah Bate; Catherine Haslam; Jeremy J. Tree; Timothy L. Hodgson Evidence of an eye movement-based memory effect in congenital prosopagnosia Journal Article In: Cortex, vol. 44, no. 7, pp. 806–819, 2008. @article{Bate2008, While extensive work has examined the role of covert recognition in acquired prosopagnosia, little attention has been directed to this process in the congenital form of the disorder. Indeed, evidence of covert recognition has only been demonstrated in one congenital case in which autonomic measures provided evidence of recognition (Jones and Tranel, 2001), whereas two investigations using behavioural indicators failed to demonstrate the effect (de Haan and Campbell, 1991; Bentin et al., 1999). In this paper, we use a behavioural indicator, an "eye movement-based memory effect" (Althoff and Cohen, 1999), to provide evidence of covert recognition in congenital prosopagnosia. In an initial experiment, we examined viewing strategies elicited to famous and novel faces in control participants, and found fewer fixations and reduced regional sampling for famous compared to novel faces. In a second experiment, we examined the same processes in a patient with congenital prosopagnosia (AA), and found some evidence of an eye movement-based memory effect regardless of his recognition accuracy. Finally, we examined whether a difference in scanning strategy was evident for those famous faces AA failed to explicitly recognise, and again found evidence of reduced sampling for famous faces. We use these findings to (a) provide evidence of intact structural representations in a case of congenital prosopagnosia, and (b) to suggest that covert recognition can be demonstrated using behavioural indicators in this disorder. |
Ensar Becic; Walter R. Boot; Arthur F. Kramer Training older adults to search more effectively: Scanning strategy and visual search in dynamic displays Journal Article In: Psychology and Aging, vol. 23, no. 2, pp. 461–466, 2008. @article{Becic2008, The authors examined the ability of older adults to modify their search strategies to detect changes in dynamic displays. Older adults who made few eye movements during search (i.e., covert searchers) were faster and more accurate compared with individuals who made many eye movements (i.e., overt searchers). When overt searchers were instructed to adopt a covert search strategy, target detection performance increased to the level of natural covert searchers. Similarly, covert searchers instructed to search overtly exhibited a decrease in target detection performance. These data suggest that with instructions and minimal practice, older adults can ameliorate the cost of a poor search strategy. |
Mark W. Becker; Ian P. Rasmussen Guidance of attention to objects and locations by long-term memory of natural scenes Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 6, pp. 1325–1338, 2008. @article{Becker2008, Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to insure that limited attentional capacity is allocated efficiently rather than being squandered. |
Eva Belke; Glyn W. Humphreys; Derrick G. Watson; Antje S. Meyer; Anna L. Telling Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load Journal Article In: Perception and Psychophysics, vol. 70, no. 8, pp. 1444–1458, 2008. @article{Belke2008, Moores, Laiti, and Chelazzi (2003) found semantic interference from associate competitors during visual object search, demonstrating the existence of top-down semantic influences on the deployment of attention to objects. We examined whether effects of semantically related competitors (same-category members or associates) interacted with the effects of perceptual or cognitive load. We failed to find any interaction between competitor effects and perceptual load. However, the competitor effects increased significantly when participants were asked to retain one or five digits in memory throughout the search task. Analyses of eye movements and viewing times showed that a cognitive load did not affect the initial allocation of attention but rather the time it took participants to accept or reject an object as the target. We discuss the implications of our findings for theories of conceptual short-term memory and visual attention. |
Susan E. Brennan; Xin Chen; Christopher A. Dickinson; Mark B. Neider; Gregory J. Zelinsky Coordinating cognition: The costs and benefits of shared gaze during collaborative search Journal Article In: Cognition, vol. 106, no. 3, pp. 1465–1477, 2008. @article{Brennan2008, Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one searcher seeing a gaze-cursor indicating where the other was looking, and vice versa), shared-voice (by speaking to each other), and shared-gaze-plus-voice (by using both gaze-cursors and speech). Although collaborating pairs performed better than solitary searchers, search in the shared gaze condition was best of all: twice as fast and efficient as solitary search. People can successfully communicate and coordinate their searching labor using shared gaze alone. Strikingly, shared gaze search was even faster than shared-gaze-plus-voice search; speaking incurred substantial coordination costs. We conclude that shared gaze affords a highly efficient method of coordinating parallel activity in a time-critical spatial task. |
Elina Birmingham; Walter F. Bischof; Alan Kingstone Social attention and real-world scenes: The roles of action, competition and social content Journal Article In: Quarterly Journal of Experimental Psychology, vol. 61, no. 7, pp. 986–998, 2008. @article{Birmingham2008, The present study examined how social attention is influenced by social content and the presence of items that are available for attention. We monitored observers' eye movements while they freely viewed real-world social scenes containing either 1 or 3 people situated among a variety of objects. Building from the work of Yarbus (1965/1967) we hypothesized that observers would demonstrate a preferential bias to fixate the eyes of the people in the scene, although other items would also receive attention. In addition, we hypothesized that fixations to the eyes would increase as the social content (i.e., number of people) increased. Both hypotheses were supported by the data, and we also found that the level of activity in the scene influenced attention to eyes when social content was high. The present results provide support for the notion that the eyes are selected by others in order to extract social information. Our study also suggests a simple and surreptitious methodology for studying social attention to real-world stimuli in a range of populations, such as those with autism spectrum disorders. |
Elina Birmingham; Walter Bischof; Alan Kingstone Gaze selection in complex social scenes Journal Article In: Visual Cognition, vol. 16, no. 2-3, pp. 341–355, 2008. @article{Birmingham2008a, A great deal of recent research has sought to understand the factors and neural systems that mediate the orienting of spatial attention to a gazed-at location. What have rarely been examined, however, are the factors that are critical to the initial selection of gaze information from complex visual scenes. For instance, is gaze prioritized relative to other possible body parts and objects within a scene? The present study springboards from the seminal work of Yarbus (1965/1967), who had originally examined participants' scan paths while they viewed visual scenes containing one or more people. His work suggested to us that the selection of gaze information may depend on the task that is assigned to participants, the social content of the scene, and/or the activity level depicted within the scene. Our results show clearly that all of these factors can significantly modulate the selection of gaze information. Specifically, the selection of gaze was enhanced when the task was to describe the social attention within a scene, and when the social content and activity level in a scene were high. Nevertheless, it is also the case that participants always selected gaze information more than any other stimulus. Our study has broad implications for future investigations of social attention as well as resolving a number of longstanding issues that had undermined the classic original work of Yarbus. |
Jeremy B. Badler; Philippe Lefèvre; Marcus Missal Anticipatory pursuit is influenced by a concurrent timing task Journal Article In: Journal of Vision, vol. 8, no. 16, pp. 1–9, 2008. @article{Badler2008, The ability to predict upcoming events is important to compensate for relatively long sensory-motor delays. When stimuli are temporally regular, their prediction depends on a representation of elapsed time. However, it is well known that the allocation of attention to the timing of an upcoming event alters this representation. The role of attention on the temporal processing component of prediction was investigated in a visual smooth pursuit task that was performed either in isolation or concurrently with a manual response task. Subjects used smooth pursuit eye movements to accurately track a moving target after a constant-duration delay interval. In the manual response task, subjects had to estimate the instant of target motion onset by pressing a button. The onset of anticipatory pursuit eye movements was used to quantify the subject's estimate of elapsed time. We found that onset times were delayed significantly in the presence of the concurrent manual task relative to the pursuit task in isolation. There was also a correlation between the oculomotor and manual response latencies. In the framework of Scalar Timing Theory, the results are consistent with a centralized attentional gating mechanism that allocates clock resources between smooth pursuit preparation and the parallel timing task. |
Xuejun Bai; Guoli Yan; Simon P. Liversedge; Chuanli Zang; Keith Rayner Reading spaced and unspaced Chinese text: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 5, pp. 1277–1287, 2008. @article{Bai2008, Native Chinese readers' eye movements were monitored as they read text that did or did not demark word boundary information. In Experiment 1, sentences had 4 types of spacing: normal unspaced text, text with spaces between words, text with spaces between characters that yielded nonwords, and finally text with spaces between every character. The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading. Global and local measures indicated that sentences with unfamiliar word spaced format were as easy to read as visually familiar unspaced text. Nonword spacing and a space between every character produced longer reading times. In Experiment 2, highlighting was used to create analogous conditions: normal Chinese text, highlighting that marked words, highlighting that yielded nonwords, and highlighting that marked each character. The data from both experiments clearly indicated that words, and not individual characters, are the unit of primary importance in Chinese reading. |
Brian P. Bailey; Shamsi T. Iqbal Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 14, no. 4, pp. 1–28, 2008. @article{Bailey2008, Notifications can have reduced interruption cost if delivered at moments of lower mental workload during task execution. Cognitive theorists have speculated that these moments occur at subtask boundaries. In this article, we empirically test this speculation by examining how workload changes during execution of goal-directed tasks, focusing on regions between adjacent chunks within the tasks, that is, the subtask boundaries. In a controlled experiment, users performed several interactive tasks while their pupil dilation, a reliable measure of workload, was continuously measured using an eye tracking system. The workload data was extracted from the pupil data, precisely aligned to the corresponding task models, and analyzed. Our principal findings include (i) workload changes throughout the execution of goal-directed tasks; (ii) workload exhibits transient decreases at subtask boundaries relative to the preceding subtasks; (iii) the amount of decrease tends to be greater at boundaries corresponding to the completion of larger chunks of the task; and (iv) different types of subtasks induce different amounts of workload. We situate these findings within resource theories of attention and discuss important implications for interruption management systems. |
Daniel Baldauf; Heiner Deubel Visual attention during the preparation of bimanual movements Journal Article In: Vision Research, vol. 48, no. 4, pp. 549–563, 2008. @article{Baldauf2008, We investigated the deployment of visual attention during the preparation of bimanually coordinated actions. In a dual-task paradigm participants had to execute bimanual pointing movements to different peripheral locations, and to identify target letters that had been briefly presented at various peripheral locations during the latency period before movement initialisation. The discrimination targets appeared either at the movement goal of the left or the right hand, or at other locations that were not movement-relevant in the particular trial. Performance in the letter discrimination task served as a measure for the distribution of visual attention during the action preparation. The results showed that the goal positions of both hands are selected before movement onset, revealing a superior discrimination performance at the action-relevant locations (Experiment 1). Selection-for-action in the preparation of bimanual movements involved attention being spread to both goal locations in parallel, independently of whether the targets had been cued by colour or semantically (Experiment 2). A comparison with perceptual performance in unimanual reaching suggested that the total amount of attentional resources that are distributed over the visual field depended on the demands of the primary motor task, with more attentional resources being deployed for the selection of multiple goal positions than for the selection of a single goal (Experiment 3). |
M. S. Baptista; C. Bohn; Reinhold Kliegl; Ralf Engbert; Jürgen Kurths Reconstruction of eye movements during blinks Journal Article In: Chaos, vol. 18, no. 1, pp. 1–15, 2008. @article{Baptista2008, In eye movement research in reading, the amount of data plays a crucial role for the validation of results. A methodological problem for the analysis of the eye movement in reading are blinks, when readers close their eyes. Blinking rate increases with increasing reading time, resulting in high data losses, especially for older adults or reading impaired subjects. We present a method, based on the symbolic sequence dynamics of the eye movements, that reconstructs the horizontal position of the eyes while the reader blinks. The method makes use of an observed fact that the movements of the eyes before closing or after opening contain information about the eyes movements during blinks. Test results indicate that our reconstruction method is superior to methods that use simpler interpolation approaches. In addition, analyses of the reconstructed data show no significant deviation from the usual behavior observed in readers. |
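The abstract does not spell out the symbolic-sequence reconstruction itself, but the "simpler interpolation approaches" it is benchmarked against are easy to sketch. A minimal linear-interpolation baseline for filling blink gaps in a horizontal gaze trace (the data and function name here are illustrative, not from the paper) might look like:

```python
import numpy as np

def interpolate_blinks(x, valid):
    """Fill blink gaps in a horizontal gaze trace by linear interpolation.

    x     : 1-D array of horizontal eye positions (arbitrary units)
    valid : boolean mask, False where the signal is lost during a blink
    """
    x = np.asarray(x, dtype=float)
    idx = np.arange(x.size)
    # Interpolate the missing samples from the surrounding valid samples.
    return np.interp(idx, idx[valid], x[valid])

trace = np.array([1.0, 2.0, np.nan, np.nan, 5.0, 6.0])
mask = ~np.isnan(trace)
print(interpolate_blinks(trace, mask))  # [1. 2. 3. 4. 5. 6.]
```

The paper's contribution is precisely that a dynamics-aware reconstruction, using the eye movements just before closing and after opening, beats this kind of straight-line fill.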
Dale J. Barr Pragmatic expectations and linguistic evidence: Listeners anticipate but do not integrate common ground Journal Article In: Cognition, vol. 109, no. 1, pp. 18–40, 2008. @article{Barr2008, When listeners search for the referent of a speaker's expression, they experience interference from privileged knowledge, knowledge outside of their 'common ground' with the speaker. Evidence is presented that this interference reflects limitations in lexical processing. In three experiments, listeners' eye movements were monitored as they searched for the target of a speaker's referring expression in a display that also contained a phonological competitor (e.g., bucket/buckle). Listeners anticipated that the speaker would refer to something in common ground, but they did not experience less interference from a competitor in privileged ground than from a matched competitor in common ground. In contrast, interference from the competitor was eliminated when it was ruled out by a semantic constraint. These findings support a view of comprehension as relying on multiple systems with distinct access to information and present a challenge for constraint-based views of common ground. |
Luke Barrington; Tim K. Marks; Janet Hui-wen Hsiao; Garrison W. Cottrell NIMBLE: A kernel density model of saccade-based visual memory Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 17–17, 2008. @article{Barrington2008, We present a Bayesian version of J. Lacroix, J. Murre, and E. Postma's (2006) Natural Input Memory (NIM) model of saccadic visual memory. Our model, which we call NIMBLE (NIM with Bayesian Likelihood Estimation), uses a cognitively plausible image sampling technique that provides a foveated representation of image patches. We conceive of these memorized image fragments as samples from image class distributions and model the memory of these fragments using kernel density estimation. Using these models, we derive class-conditional probabilities of new image fragments and combine individual fragment probabilities to classify images. Our Bayesian formulation of the model extends easily to handle multi-class problems. We validate our model by demonstrating human levels of performance on a face recognition memory task and high accuracy on multi-category face and object identification. We also use NIMBLE to examine the change in beliefs as more fixations are taken from an image. Using fixation data collected from human subjects, we directly compare the performance of NIMBLE's memory component to human performance, demonstrating that using human fixation locations allows NIMBLE to recognize familiar faces with only a single fixation. |
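The core of NIMBLE's memory component, kernel density estimation over stored fragments with class-conditional likelihoods combined across fixations, can be sketched as follows. This is an illustrative toy using random 2-D "fragments" and SciPy's `gaussian_kde`, not the model's actual foveated image features:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical "fragments": feature vectors sampled around fixation points.
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, size=(200, 2))  # fragments from class A
class_b = rng.normal(3.0, 1.0, size=(200, 2))  # fragments from class B

# One kernel density estimate per class, fit on that class's stored fragments.
kde_a = gaussian_kde(class_a.T)
kde_b = gaussian_kde(class_b.T)

def classify(fragments):
    """Sum per-fragment log-likelihoods and pick the more likely class."""
    log_a = np.sum(np.log(kde_a(fragments.T)))
    log_b = np.sum(np.log(kde_b(fragments.T)))
    return "A" if log_a > log_b else "B"

probe = rng.normal(0.0, 1.0, size=(5, 2))  # new fragments drawn near class A
print(classify(probe))
```

Summing log-likelihoods over fragments is what lets beliefs sharpen as more fixations are taken from an image, the behavior the paper examines.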
Jennifer E. Arnold THE BACON not the bacon: How children and adults understand accented and unaccented noun phrases Journal Article In: Cognition, vol. 108, no. 1, pp. 69–99, 2008. @article{Arnold2008, Two eye-tracking experiments examine whether adults and 4- and 5-year-old children use the presence or absence of accenting to guide their interpretation of noun phrases (e.g., the bacon) with respect to the discourse context. Unaccented nouns tend to refer to contextually accessible referents, while accented variants tend to be used for less accessible entities. Experiment 1 confirms that accenting is informative for adults, who show a bias toward previously-mentioned objects beginning 300 ms after the onset of unaccented nouns and pronouns. But contrary to findings in the literature, accented words produced no observable bias. In Experiment 2, 4- and 5-year-olds were also biased toward previously-mentioned objects with unaccented nouns and pronouns. This builds on findings of limits on children's on-line reference comprehension [Arnold, J. E., Brown-Schmidt, S., & Trueswell, J. C. (2007). Children's use of gender and order-of-mention during pronoun comprehension. Language and Cognitive Processes], showing that children's interpretation of unaccented nouns and pronouns is constrained in contexts with one single highly accessible object. |
Jennifer E. Arnold; Shin-Yi C. Lao Put in last position something previously unmentioned: Word order effects on referential expectancy and reference comprehension Journal Article In: Language and Cognitive Processes, vol. 23, no. 2, pp. 282–295, 2008. @article{Arnold2008a, Research has shown that the comprehension of definite referring expressions (e.g., "the triangle") tends to be faster for "given" (previously mentioned) referents, compared with new referents. This has been attributed to the presence of given information in the consciousness of discourse participants (e.g., Chafe, 1994) suggesting that given is always more accessible. By contrast, we find a bias toward new referents during the on-line comprehension of the direct object in heavy-NP-shifted word orders, e.g., "Put on the star the...." This order tends to be used for new direct objects; canonical unshifted orders are more common with given direct objects. Thus, word order provides probabilistic information about the givenness or newness of the direct object. Results from eyetracking and gating experiments show that the traditional given bias only occurs with unshifted orders; with heavy-NP-shifted orders, comprehenders expect the object to be new, and comprehension for new referents is facilitated. |
Hillel Aviezer; Ran R. Hassin; Jennifer D. Ryan; Cheryl L. Grady; Josh Susskind; Adam Anderson; Morris Moscovitch; Shlomo Bentin Angry, disgusted, or afraid? Studies on the malleability of emotion perception Journal Article In: Psychological Science, vol. 19, no. 7, pp. 724–732, 2008. @article{Aviezer2008, Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly "read out" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels. |
Caroline Blais; Rachael E. Jack; Christoph Scheepers; Daniel Fiset; Roberto Caldara Culture shapes how we look at faces Journal Article In: PLoS ONE, vol. 3, no. 8, pp. e3022, 2008. @article{Blais2008, Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically-determined information extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures. |
Lizzy Bleumers; Peter De Graef; Karl Verfaillie; Johan Wagemans Eccentric grouping by proximity in multistable dot lattices Journal Article In: Vision Research, vol. 48, no. 2, pp. 179–192, 2008. @article{Bleumers2008, The Pure Distance Law predicts grouping by proximity in dot lattices that can be organised in four ways by grouping dots along parallel lines. It specifies a quantitative relationship between the relative probability of perceiving an organisation and the relative distance between the grouped dots. The current study was set up to investigate whether this principle holds both for centrally and for eccentrically displayed dot lattices. To this end, dot lattices were displayed either in central vision, or to the right of fixation with their closest border at 3° or 15°. We found that the Pure Distance Law adequately predicted grouping of centrally displayed dot lattices but did not capture the eccentric data well, even when the eccentric dot lattices were scaled. Specifically, a better fit was obtained when we included the possibility in the model that in some trials participants could not report an organisation and consequently responded randomly. A plausible interpretation for the occurrence of random responses in the eccentric conditions is that under these circumstances an attention shift is required from the locus of fixation towards the dot lattice, which occasionally fails to take place. When grouping could be reported, scale and eccentricity appeared to interact. The effect of the relative interdot distances on the perceptual organisation of the dot lattices was estimated to be stronger in peripheral vision than in central vision at the two largest scales, but this difference disappeared when the smallest scale was applied. |
Gary D. Bond Deception detection expertise Journal Article In: Law and Human Behavior, vol. 32, no. 4, pp. 339–351, 2008. @article{Bond2008, A lively debate between Bond and Uysal (2007, Law and Human Behavior, 31, 109-115) and O'Sullivan (2007, Law and Human Behavior, 31, 117-123) concerns whether there are experts in deception detection. Two experiments sought to (a) identify expert(s) in detection and assess them twice with four tests, and (b) study their detection behavior using eye tracking. Paroled felons produced videotaped statements that were presented to students and law enforcement personnel. Two experts were identified, both female Native American BIA correctional officers. Experts were over 80% accurate in the first assessment, and scored at 90% accuracy in the second assessment. In Signal Detection analyses, experts showed high discrimination, and did not evidence biased responding. They exploited nonverbal cues to make fast, accurate decisions. These highly-accurate individuals can be characterized as experts in deception detection. |
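The Signal Detection indices reported here, discrimination and response bias, are standard to compute from hit and false-alarm counts. A minimal sketch (the counts below are made up for illustration, not the study's data):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection indices from a 2x2 outcome table.

    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when a hit or false-alarm rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)        # discrimination (sensitivity)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias; ~0 means unbiased
    return d_prime, criterion

# E.g., a judge scoring 90% correct on 20 deceptive and 20 truthful statements:
d, c = dprime_and_criterion(hits=18, misses=2, false_alarms=2, correct_rejections=18)
# d ≈ 2.36 indicates strong discrimination; c near 0 indicates unbiased responding
```

"High discrimination" and "no biased responding" in the abstract correspond to a large d′ and a criterion near zero in exactly this sense.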
Verena S. Bonitz; Robert D. Gordon Attention to smoking-related and incongruous objects during scene viewing Journal Article In: Acta Psychologica, vol. 129, no. 2, pp. 255–263, 2008. @article{Bonitz2008, This study examined the influences of semantic characteristics of objects in real-world scenes on allocation of attention as reflected in eye movement measures. Stimuli consisted of full-color photographic scenes, and within each scene, the semantic salience of two target objects was manipulated while the objects' perceptual salience was kept constant. One of the target objects was either inconsistent or consistent with the scene category. In addition, the second target object was either smoking-related or neutral. Two groups of college students, namely current cigarette smokers (N = 18) and non-smokers (N = 19), viewed each scene for 10 s while their eye movements were recorded. While both groups showed preferential allocation of attention to inconsistent objects, smokers also selectively attended to smoking-related objects. Theoretical implications of the results are discussed. |
Shlomit Yuval-Greenberg; Orr Tomer; Alon S. Keren; Israel Nelken; Leon Y. Deouell Transient induced gamma-band response in EEG as a manifestation of miniature saccades Journal Article In: Neuron, vol. 58, no. 3, pp. 429–441, 2008. @article{YuvalGreenberg2008, The induced gamma-band EEG response (iGBR) recorded on the scalp is widely assumed to reflect synchronous neural oscillation associated with object representation, attention, memory, and consciousness. The most commonly reported EEG iGBR is a broadband transient increase in power at the gamma range ∼200-300 ms following stimulus onset. A conspicuous feature of this iGBR is the trial-to-trial poststimulus latency variability, which has been insufficiently addressed. Here, we show, using single-trial analysis of concomitant EEG and eye tracking, that this iGBR is tightly time locked to the onset of involuntary miniature eye movements and reflects a saccadic "spike potential." The time course of the iGBR is related to an increase in the rate of saccades following a period of poststimulus saccadic inhibition. Thus, whereas neuronal gamma-band oscillations were shown conclusively with other methods, the broadband transient iGBR recorded by scalp EEG reflects properties of miniature saccade dynamics rather than neuronal oscillations. |
Gregory J. Zelinsky A theory of eye movements during target acquisition Journal Article In: Psychological Review, vol. 115, pp. 787–835, 2008. @article{Zelinsky2008, The gaze movements accompanying target localization were examined via human observers and a computational model (Target Acquisition Model, TAM). Search contexts ranged from fully realistic scenes, to toys in a crib, to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model always previewed the same targets and searched the identical displays. Behavioral and simulated eye movements were analyzed for acquisition accuracy, efficiency, and target guidance. TAM's behavior generally fell within the behavioral mean's 95% confidence interval for all measures in each experiment/condition. This agreement suggests that a fixed-parameter model using spatio-chromatic filters and a simulated retina, when driven by the correct visual routines, can be a good general purpose predictor of human target acquisition behavior. |
Gregory J. Zelinsky; Mark B. Neider An eye movement analysis of multiple object tracking in a realistic environment Journal Article In: Visual Cognition, vol. 16, no. 5, pp. 553–566, 2008. @article{Zelinsky2008a, To study multiple object tracking under naturalistic conditions, observers tracked 1–4 sharks (9 in total) swimming throughout an underwater scene. Accuracy was high in the Track 1–3 conditions (>92%), but declined when tracking 4 targets (78%). Gaze analyses revealed a dependency between tracking strategy and target number. Observers tracking 2 targets kept their gaze on the target centroid rather than individual objects; observers tracking 4 targets switched their gaze back-and-forth between sharks. Using an oculomotor method for identifying targets lost during tracking, we confirmed that this strategy shift was real and not an artifact of centroid definition. Moreover, we found that tracking errors increased with gaze time on targets, and decreased with time spent looking at the centroid. Depending on tracking load, both centroid and target-switching strategies are used, with accuracy improving with reliance on centroid tracking. An index juggling hypothesis is advanced to explain the suboptimal tendency to fixate tracked objects. |
Elizabeth Wonnacott; Elissa L. Newport; Michael K. Tanenhaus Acquiring and processing verb argument structure: Distributional learning in a miniature language Journal Article In: Cognitive Psychology, vol. 56, no. 3, pp. 165–209, 2008. @article{Wonnacott2008, Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings. |
Denise H. Wu; Anne Morganti; Anjan Chatterjee Neural substrates of processing path and manner information of a moving event Journal Article In: Neuropsychologia, vol. 46, no. 2, pp. 704–713, 2008. @article{Wu2008, Languages consistently distinguish the path and the manner of a moving event in different constituents, even if the specific constituents themselves vary across languages. Children also learn to categorize moving events according to their path and manner at different ages. Motivated by these linguistic and developmental observations, we employed fMRI to test the hypothesis that perception of and attention to path and manner of motion is segregated neurally. Moreover, we hypothesize that such segregation respects the "dorsal-where and ventral-what" organizational principle of vision. Consistent with this proposal, we found that attention to the path of a moving event was associated with greater activity within bilateral inferior/superior parietal lobules and the frontal eye-field, while attention to manner was associated with greater activity within bilateral postero-lateral inferior/middle temporal regions. Our data provide evidence that motion perception, traditionally considered as a dorsal "where" visual attribute, further segregates into dorsal path and ventral manner attributes. This neural segregation of the components of motion, which are linguistically tagged, points to a perceptual counterpart of the functional organization of concepts and language. |
Lu Qi Xiao; Jun-Yun Zhang; Rui Wang; Stanley A. Klein; Dennis M. Levi; Cong Yu Complete transfer of perceptual learning across retinal locations enabled by double training Journal Article In: Current Biology, vol. 18, no. 24, pp. 1922–1926, 2008. @article{Xiao2008, Practice improves discrimination of many basic visual features, such as contrast, orientation, and positional offset [1-7]. Perceptual learning of many of these tasks is found to be retinal location specific, in that learning transfers little to an untrained retinal location [1, 6-8]. In most perceptual learning models, this location specificity is interpreted as a pointer to a retinotopic early visual cortical locus of learning [1, 6-11]. Alternatively, an untested hypothesis is that learning could occur in a central site, but it consists of two separate aspects: learning to discriminate a specific stimulus feature ("feature learning"), and learning to deal with stimulus-nonspecific factors like local noise at the stimulus location ("location learning") [12]. Therefore, learning is not transferable to a new location that has never been location trained. To test this hypothesis, we developed a novel double-training paradigm that employed conventional feature training (e.g., contrast) at one location, and additional training with an irrelevant feature/task (e.g., orientation) at a second location, either simultaneously or at a different time. Our results showed that this additional location training enabled a complete transfer of feature learning (e.g., contrast) to the second location. This finding challenges location specificity and its inferred cortical retinotopy as central concepts to many perceptual-learning models and suggests that perceptual learning involves higher nonretinotopic brain areas that enable location transfer. |
Zhi-Lei Zhang; Christopher R. L. Cantor; Clifton M. Schor Effects of luminance and saccadic suppression on perisaccadic spatial distortions Journal Article In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008. @article{Zhang2008, Visual directions of foveal targets flashed just prior to the onset of a saccade are misperceived as shifted in the direction of the eye movement. We examined the effects of luminance level and temporal interactions on the amplitude of these perisaccadic spatial distortions (PSDs). PSDs were larger for both single and sequentially double-flashed stimuli with low than high luminance levels, and there was a reduction of PSDs for low luminance targets flashed immediately before the saccade. Significant temporal interactions were suggested by PSDs for a pair of sequentially presented flashes (ISI = 50 ms) that could not be predicted from the single-flash distortions: PSD increased for the first flash and decreased for the second compared to the single-flash distortions. We also found that when the flash pair was presented near saccade onset, the perceived distortion of the earlier flash overtook that of the later flash, even though the late flash occurred closer in time to the saccade. To explain these effects, we propose that stimulus-dependent nonlinearities (contrast gain control and saccadic suppression) influence the duration of the temporal impulse response of both single- and double-flashed stimuli. |
Mark Yates; John Friend; Danielle M. Ploetz The effect of phonological neighborhood density on eye movements during reading Journal Article In: Cognition, vol. 107, no. 2, pp. 685–692, 2008. @article{Yates2008, Recent research has indicated that phonological neighbors speed processing in a variety of isolated word recognition tasks. Nevertheless, as these tasks do not represent how we normally read, it is not clear if phonological neighborhood has an effect on the reading of sentences for meaning. In the research reported here, we evaluated whether phonological neighborhood density influences reading of target words embedded in sentences. The eye movement data clearly revealed that phonological neighborhood facilitated reading. This was evidenced by shorter fixations for words with large neighborhoods relative to words with small neighborhoods. These results are important in indicating that phonology is a crucial component of reading and that it affects early lexical processing. |
Eiling Yee; Sheila E. Blumstein; Julie C. Sedivy Lexical-semantic activation in Broca's and Wernicke's aphasia: Evidence from eye movements Journal Article In: Journal of Cognitive Neuroscience, vol. 20, no. 4, pp. 592–612, 2008. @article{Yee2008, Lexical processing requires both activating stored representations, and selecting among active candidates. The current work uses an eye-tracking paradigm to conduct a detailed temporal investigation of lexical processing. Patients with Broca's and Wernicke's aphasia are studied to shed light on the roles of anterior and posterior brain regions in lexical processing as well as the effects of lexical competition on such processing. Experiment 1 investigates whether objects semantically related to an uttered word are preferentially fixated, e.g., given the auditory target 'hammer', do participants fixate a picture of a nail? Results show that, like normals, both groups of patients are more likely to fixate on an object semantically related to the target than an unrelated object. Experiment 2 explores whether Broca's and Wernicke's aphasics show competition effects when words share onsets with the uttered word, e.g., given the auditory target 'hammer', do participants fixate a picture of a hammock? Experiment 3 investigates whether these patients activate words semantically related to onset competitors of the uttered word, e.g., given the auditory target 'hammock' do participants fixate a nail due to partial activation of the onset competitor hammer? Results of Experiments 2 and 3 show pathological patterns of performance for both Broca's and Wernicke's aphasics under conditions of lexical onset competition. However, the patterns of deficit differed, suggesting different functional and computational roles for anterior and posterior areas in lexical processing. Implications of the findings for the functional architecture of the lexical processing system and its potential neural substrates are considered. |
Miao-Hsuan Yen; Jie-Li Tsai; Ovid J. L. Tzeng; Daisy L. Hung Eye movements and parafoveal word processing in reading Chinese Journal Article In: Memory & Cognition, vol. 36, no. 5, pp. 1033–1045, 2008. @article{Yen2008, In two experiments, a parafoveal lexicality effect in the reading of Chinese (a script that does not physically mark word boundaries) was examined. Both experiments used the boundary paradigm (Rayner, 1975) and indicated that the lexical properties of parafoveal words influenced eye movements. In Experiment 1, the preview stimulus was either a real word or a pseudoword. Targets with word previews, even unrelated ones, were more likely to be skipped than were those with pseudowords. In Experiment 2, all of the preview stimuli had the same first character as the target. Target words with same-morpheme previews were fixated for less time than were those with pseudoword previews, suggesting that morphological processing may be involved in extracting information from parafoveal words in Chinese reading. Together, the two experiments dealing with how words are processed in Chinese may provide some constraints on current computational models of reading. |
Peter Janssen; Siddharth Srivastava; Sien Ombelet; Guy A. Orban Coding of shape and position in macaque lateral intraparietal area Journal Article In: Journal of Neuroscience, vol. 28, no. 26, pp. 6679–6690, 2008. @article{Janssen2008, The analysis of object shape is critical for both object recognition and grasping. Areas in the intraparietal sulcus of the rhesus monkey are important for the visuomotor transformations underlying actions directed toward objects. The lateral intraparietal (LIP) area has strong anatomical connections with the anterior intraparietal area, which is known to control the shaping of the hand during grasping, and LIP neurons can respond selectively to simple two-dimensional shapes. Here we investigate the shape representation in area LIP of awake rhesus monkeys. Specifically, we determined to what extent LIP neurons are tuned to shape dimensions known to be relevant for grasping and assessed the invariance of their shape preferences with regard to changes in stimulus size and position in the receptive field. Most LIP neurons proved to be significantly tuned to multiple shape dimensions. The population of LIP neurons that were tested showed barely significant size invariance. Position invariance was present in a minority of the neurons tested. Many LIP neurons displayed spurious shape selectivity arising from accidental interactions between the stimulus and the receptive field. We observed pronounced differences in the receptive field profiles determined by presenting two different shapes. Almost all LIP neurons showed spatially selective saccadic activity, but the receptive field for saccades did not always correspond to the receptive field as determined using shapes. Our results demonstrate that a subpopulation of LIP neurons encodes stimulus shape. Furthermore, the shape representation in the dorsal visual stream appears to differ radically from the known representation of shape in the ventral visual stream. |
Wolfgang Jaschinski; Stephanie Jainta; Jörg Hoormann Comparison of shutter glasses and mirror stereoscope for measuring dynamic and static vergence Journal Article In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–7, 2008. @article{Jaschinski2008, Vergence eye movement recordings in response to disparity step stimuli require presenting different stimuli to the two eyes. The traditional method is a mirror stereoscope. Shutter glasses are more convenient, but have disadvantages such as a limited repetition rate, residual cross talk, and reduced luminance. Therefore, we compared both techniques by measuring (1) dynamic disparity step responses for stimuli of 1 and 3 deg and (2) fixation disparity, the static vergence error. Shutter glasses and the mirror stereoscope gave very similar dynamic responses, with correlations of about 0.95 for the objectively measured vergence velocity and for the response amplitude reached 400 ms after the step stimulus (measured objectively with eye movement recordings and subjectively with dichoptic nonius lines). Both techniques also provided similar amounts of fixation disparity, tested with dichoptic nonius lines. |
Lee Hogarth; Anthony Dickinson; Molly Janowski; Aleksandra Nikitina; Theodora Duka The role of attentional bias in mediating human drug-seeking behaviour Journal Article In: Psychopharmacology, vol. 201, no. 1, pp. 29–41, 2008. @article{Hogarth2008a, RATIONALE: The attentional bias for drug cues is believed to be a causal cognitive process mediating human drug seeking and relapse. OBJECTIVES, METHODS AND RESULTS: To test this claim, we trained smokers on a tobacco conditioning procedure in which the conditioned stimulus (or S+) acquired parallel control of an attentional bias (measured with an eye tracker), tobacco expectancy and instrumental tobacco-seeking behaviour. Although this correlation between measures may be regarded as consistent with the claim that the attentional bias for the S+ mediated tobacco seeking, when a secondary task was added in the test phase, the attentional bias for the S+ was abolished, yet the control of tobacco expectancy and tobacco seeking remained intact. CONCLUSIONS: This dissociation suggests that the attentional bias for drug cues is not necessary for the control that drug cues exert over drug-seeking behaviour. The question raised by these data is what function does the attentional bias serve if it does not mediate drug seeking? |
Linus Holm; Johan Eriksson; Linus Andersson Looking as if you know: Systematic object inspection precedes object recognition Journal Article In: Journal of Vision, vol. 8, no. 4, pp. 1–7, 2008. @article{Holm2008, Sometimes we seem to look at the very object we are searching for, without consciously seeing it. How do we select object relevant information before we become aware of the object? We addressed this question in two recognition experiments involving pictures of fragmented objects. In Experiment 1, participants preferred to look at the target object rather than a control region 25 fixations prior to explicit recognition. Furthermore, participants inspected the target as if they had identified it around 9 fixations prior to explicit recognition. In Experiment 2, we investigated the influence of semantic knowledge in guiding object inspection prior to explicit recognition. Consistently, more specific knowledge about target identity made participants scan the fragmented stimulus more efficiently. For instance, non-target regions were rejected faster when participants knew the target object's name. Both experiments showed that participants were looking at the objects as if they knew them before they became aware of their identity. |
L. Elliot Hong; Kathleen A. Turano; Hugh B. O'Neill; Lei Hao; Ikwunga Wonodi; Robert P. McMahon; Amie Elliott; Gunvant K. Thaker Refining the predictive pursuit endophenotype in schizophrenia Journal Article In: Biological Psychiatry, vol. 63, no. 5, pp. 458–464, 2008. @article{Hong2008, Background: To utilize fully a schizophrenia endophenotype in gene search and subsequent neurobiological studies, it is critical that the precise underlying physiologic deficit is identified. Abnormality in smooth pursuit eye movements is one of the endophenotypes of schizophrenia. The precise nature of the abnormality is unknown. Previous work has shown a reduced predictive pursuit response to a briefly masked (i.e., invisible) moving object in schizophrenia. However, the overt awareness of target removal can confound the measurement. Methods: This study employed a novel method that covertly stabilized the moving target image onto the fovea. The foveal stabilization was implemented after the target on a monitor had oscillated at least for one cycle and near the change of direction when the eye velocity momentarily reached zero. Thus, the subsequent pursuit eye movements were completely predictive and internally driven. Eye velocity during this foveally stabilized smooth pursuit was compared among schizophrenia patients (n = 45), their unaffected first-degree relatives (n = 42), and healthy comparison subjects (n = 22). Results: Schizophrenia patients and their unaffected relatives performed similarly and both had substantially reduced predictive pursuit acceleration and velocity under the foveally stabilized condition. Conclusions: These findings show that inability to maintain internal representation of the target motion or integration of such information into a predictive response may be the specific brain deficit indexed by the smooth pursuit endophenotype in schizophrenia. Similar performance between patients and unaffected relatives suggests that the refined predictive pursuit measure may index a less complex genetic origin of the eye-tracking deficits in schizophrenia families. |