All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up until 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2011 |
Katharine B. Porter; Gideon P. Caplovitz; Peter J. Kohler; Christina M. Ackerman; Peter U. Tse Rotational and translational motion interact independently with form Journal Article In: Vision Research, vol. 51, no. 23-24, pp. 2478–2487, 2011. @article{Porter2011, Do the mechanisms that underlie the perception of translational and rotational object motion show evidence of independent processing? By probing the perceived speed of translating and/or rotating objects, we find that an object's form contributes in independent ways to the processing of translational and rotational motion: In the context of translational motion, it has been shown that the more elongated an object is along its direction of motion, the faster it is perceived to translate; in the context of rotational motion, it has been shown that the sharper the maxima of curvature along an object's contour, the faster it appears to rotate. Here we demonstrate that such rotational form-motion interactions are due solely to the rotational component of combined rotational and translational motion. We conclude that the perception of rotational motion relies on form-motion interactions that are independent of the processing underlying translational motion. |
Elsie Premereur; Wim Vanduffel; Peter Janssen Functional heterogeneity of macaque lateral intraparietal neurons Journal Article In: Journal of Neuroscience, vol. 31, no. 34, pp. 12307–12317, 2011. @article{Premereur2011, The macaque lateral intraparietal area (LIP) has been implicated in many cognitive processes, ranging from saccade planning and spatial attention to timing and categorization. Importantly, different research groups have used different criteria for including LIP neurons in their studies. While some research groups have selected LIP neurons based on the presence of memory-delay activity, other research groups have used other criteria such as visual, presaccadic, and/or memory activity. We recorded from LIP neurons that were selected based on spatially selective saccadic activity but regardless of memory-delay activity in macaque monkeys. To test anticipatory climbing activity, we used a delayed visually guided saccade task with a unimodal schedule of go-times, for which the conditional probability that the go-signal will occur rises monotonically as a function of time. A subpopulation of LIP neurons showed anticipatory activity that mimicked the subjective hazard rate of the go-signal when the animal was planning a saccade toward the receptive field. A large subgroup of LIP neurons, however, did not modulate their firing rates according to the subjective hazard function. These non-anticipatory neurons were strongly influenced by salient visual stimuli appearing in their receptive field, but less so by the direction of the impending saccade. Thus, LIP contains a heterogeneous population of neurons related to saccade planning or visual salience, and these neurons are spatially intermixed. Our results suggest that between-study differences in neuronal selection may have contributed significantly to the findings of different research groups with respect to the functional role of area LIP. |
Kerstin Preuschoff; Bernard Marius Hart; Wolfgang Einhäuser Pupil dilation signals surprise: Evidence for noradrenaline's role in decision making Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 115, 2011. @article{Preuschoff2011, Our decisions are guided by the rewards we expect. These expectations are often based on incomplete knowledge and are thus subject to uncertainty. While the neurophysiology of expected rewards is well understood, less is known about the physiology of uncertainty. We hypothesize that uncertainty, or more specifically errors in judging uncertainty, are reflected in pupil dilation, a marker that has frequently been associated with decision-making, but so far has remained largely elusive to quantitative models. To test this hypothesis, we measure pupil dilation while observers perform an auditory gambling task. This task dissociates two key decision variables – uncertainty and reward – and their errors from each other and from the act of the decision itself. We first demonstrate that the pupil does not signal expected reward or uncertainty per se, but instead signals surprise, that is, errors in judging uncertainty. While this general finding is independent of the precise quantification of these decision variables, we then analyze this effect with respect to a specific mathematical model of uncertainty and surprise, namely risk and risk prediction error. Using this quantification, we find that pupil dilation and risk prediction error are indeed highly correlated. Under the assumption of a tight link between noradrenaline (NA) and pupil size under constant illumination, our data may be interpreted as empirical evidence for the hypothesis that NA plays the same role for uncertainty as dopamine does for reward, namely the encoding of error signals. |
Krista E. Overvliet; E. Azañón; S. Soto-Faraco Somatosensory saccades reveal the timing of tactile spatial remapping Journal Article In: Neuropsychologia, vol. 49, no. 11, pp. 3046–3052, 2011. @article{Overvliet2011, Remapping tactile events from skin to external space is an essential process for human behaviour. It allows us to refer tactile sensations to their actual externally based location, by combining anatomically based somatosensory information with proprioceptive information about the current body posture. We examined the time course of tactile remapping by recording speeded saccadic responses to somatosensory stimuli delivered to the hands. We conducted two experiments in which arm posture varied (crossed or uncrossed), so that anatomical and external frames of reference were either put in spatial conflict or were aligned. The data showed that saccade onset latencies in the crossed hands conditions were slower than in the uncrossed hands condition, suggesting that, in the crossed hands condition, remapping had to be completed before a correct saccade could be executed. Saccades to tactile stimuli when the hands were crossed were sometimes initiated in the wrong direction and then corrected in-flight, resulting in a turn-around saccade. These turn-around saccades were more likely to occur in short-latency responses, compared to onset latencies of saccades that went straight to target. The latter suggests that participants were postponing their saccade until the time the tactile event was represented according to the current body posture. We propose that the difference between saccade onset latencies of crossed and uncrossed hand postures, and between the onset of a turn-around saccade and a straight saccade in the crossed hand posture, reveal the timing of tactile spatial remapping. |
Müge Özbek; Markus Bindemann Exploring the time course of face matching: Temporal constraints impair unfamiliar face identification under temporally unconstrained viewing Journal Article In: Vision Research, vol. 51, no. 19, pp. 2145–2155, 2011. @article{Oezbek2011, The identification of unfamiliar faces has been studied extensively with matching tasks, in which observers decide if pairs of photographs depict the same person (identity matches) or different people (mismatches). In experimental studies in this field, performance is usually self-paced under the assumption that this will encourage best-possible accuracy. Here, we examined the temporal characteristics of this task by limiting display times and tracking observers' eye movements. Observers were required to make match/mismatch decisions to pairs of faces shown for 200, 500, 1000, or 2000 ms, or for an unlimited duration. Peak accuracy was reached within 2000 ms and two fixations to each face. However, intermixing exposure conditions produced a context effect that generally reduced accuracy on identity mismatch trials, even when unlimited viewing of faces was possible. These findings indicate that less than 2 s are required for face matching when exposure times are variable, but temporal constraints should be avoided altogether if accuracy is truly paramount. The implications of these findings are discussed. |
Adam Palanica; Roxane J. Itier Searching for a perceived gaze direction using eye tracking Journal Article In: Journal of Vision, vol. 11, no. 2, pp. 1–13, 2011. @article{Palanica2011, The purpose of the current study was to use eye tracking to better understand the “stare-in-the-crowd effect”—the notion that direct gaze is more easily detected than averted gaze in a crowd of opposite-gaze distractors. Stimuli were displays of four full characters aligned across the monitor (one target and three distractors). Participants completed a visual search task in which they were asked to detect the location of either a direct gaze or an averted gaze target. Reaction time (RT) results indicated faster responses to direct than averted gaze only for characters situated in the far peripheral visual fields. Eye movements confirmed a serial search strategy (definitely ruling out any pop-out effects) and revealed different exploration patterns between hemifields. The latency before the first fixation on target strongly correlated with response RTs. In the left visual field (LVF), that latency was also faster for direct than averted gaze targets, suggesting that the response asymmetry in favor of direct gaze stemmed from faster direct gaze target detection. In the right visual field (RVF), however, the response bias to direct gaze seemed not due to a faster visual detection but rather to a different cognitive mechanism. Direct gaze targets were also responded to even faster when their position was congruent with the direction of gaze of distractors. These findings suggest that the detection asymmetry for direct gaze is highly dependent on target position and influenced by social contexts. |
Sebastian Pannasch; Johannes Schulz; Boris M. Velichkovsky On the control of visual fixation durations in free viewing of complex images Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 4, pp. 1120–1132, 2011. @article{Pannasch2011, The mechanisms for the substantial variation in the durations of visual fixations in scene perception are not yet well understood. During free viewing of paintings, gaze-contingent irrelevant distractors (Exp. 1) and non-gaze-related time-locked display changes (Exp. 2) were presented. We demonstrated that any visual change-its onset and offset-prolongs the ongoing fixation (i.e., delays the following saccade), strongly suggesting that fixation durations are under the direct control of the stimulus information. The strongest influence of distraction was observed for fixations preceded by saccades within the parafoveal range (<5° of visual angle). We assume that these fixations contribute to the focal in contrast to the ambient mode of attention (Pannasch & Velichkovsky, Visual Cognition, 17, 1109-1131, 2009; Velichkovsky, Memory, 10, 405-419, 2002). Recent findings about two distinct "subpopulations of fixations," one under the direct and another under the indirect control of stimulation (e.g., Henderson & Smith, Visual Cognition, 17, 1055-1082, 2009), are reconsidered in view of these results. |
Muriel T. N. Panouillères; Christian Urquizar; Roméo Salemme; Denis Pélisson In: PLoS ONE, vol. 6, no. 2, pp. e17329, 2011. @article{Panouilleres2011, When goal-directed movements are inaccurate, two responses are generated by the brain: a fast motor correction toward the target and an adaptive motor recalibration developing progressively across subsequent trials. For the saccadic system, there is a clear dissociation between the fast motor correction (corrective saccade production) and the adaptive motor recalibration (primary saccade modification). Error signals used to trigger corrective saccades and to induce adaptation are based on post-saccadic visual feedback. The goal of this study was to determine if similar or different error signals are involved in saccadic adaptation and in corrective saccade generation. Saccadic accuracy was experimentally altered by systematically displacing the visual target during motor execution. Post-saccadic error signals were studied by manipulating visual information in two ways. First, the duration of the displaced target after primary saccade termination was set at 15, 50, 100 or 800 ms in different adaptation sessions. Second, in some sessions, the displaced target was followed by a visual mask that interfered with visual processing. Because they rely on different mechanisms, the adaptation of reactive saccades and the adaptation of voluntary saccades were both evaluated. We found that saccadic adaptation and corrective saccade production were both affected by the manipulations of post-saccadic visual information, but in different ways. This first finding suggests that different types of error signal processing are involved in the induction of these two motor corrections. Interestingly, voluntary saccades required a longer duration of post-saccadic target presentation to reach the same amount of adaptation as reactive saccades. Finally, the visual mask interfered with the production of corrective saccades only during the voluntary saccades adaptation task. These last observations suggest that post-saccadic perception depends on the previously performed action and that the differences between saccade categories of motor correction and adaptation occur at an early level of visual processing. |
Caroline Paquette; Joyce Fung Old age affects gaze and postural coordination Journal Article In: Gait and Posture, vol. 33, no. 2, pp. 227–232, 2011. @article{Paquette2011, Visual tracking of the surrounding environment is an important daily task, often executed simultaneously with the regulation of upright balance. Visual and postural coordination may be affected by aging which is associated with a decline in sensory and motor functions. The aim of the present study was to assess the effects of aging on the control of saccadic and smooth pursuit eye movements when standing on a moving surface. Nineteen young and 12 elderly subjects tracked a visual target presented as unpredictable smooth pursuit or saccadic displacements. Subjects were instructed to maintain gaze on target during quiet stance with or without yaw surface rotations. Elderly subjects followed both saccadic and pursuit targets with less accuracy than young subjects. Moreover, elderly subjects responded with longer time lags during saccadic target shifts and executed more catch-up saccades during smooth pursuits than younger subjects. Standing on a moving surface induced larger target-gaze errors. Catch-up saccades during pursuit occurred more frequently during surface perturbations. Our results suggest that visual tracking abilities decline with age and that postural challenge affects accuracy but not timing of gaze responses. Such declines observed with aging may result from multiple but minor sensory and motor deficits. |
Rodrigo Quian Quiroga; Carlos Pedreira How do we see art: An eye-tracker study Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 98, 2011. @article{Quiroga2011, We describe the pattern of fixations of subjects looking at figurative and abstract paintings from different artists (Molina, Mondrian, Rembrandt, della Francesca) and at modified versions in which different aspects of these art pieces were altered with simple digital manipulations. We show that the fixations of the subjects followed some general common principles (e.g., being attracted to saliency regions) but with a large variability for the figurative paintings, according to the subject's personal appreciation and knowledge. In particular, we found different gazing patterns depending on whether the subject saw the original or the modified version of the painting first. We conclude that the study of gazing patterns obtained by using the eye-tracker technology gives a useful approach to quantify how subjects observe art. |
Stefan Rach; Adele Diederich; Hans Colonius On quantifying multisensory interaction effects in reaction time and detection rate Journal Article In: Psychological Research, vol. 75, no. 2, pp. 77–94, 2011. @article{Rach2011, Both mean reaction time (RT) and detection rate (DR) are important measures for assessing the amount of multisensory interaction occurring in crossmodal experiments, but they are often applied separately. Here we demonstrate that measuring multisensory performance using either RT or DR alone misses out on important information. We suggest integrating RT and DR into a single measure of multisensory performance and propose two such indices: the first (MRE*) is based on an arithmetic combination of RT and DR; the second (MPE) is constructed from parameters derived from fitting a sequential sampling model to RT and DR data simultaneously. Our approach is illustrated by data from two audio-visual experiments. In the first, a redundant targets detection experiment using stimuli of different intensity, both measures yield a similar pattern of results supporting the "principle of inverse effectiveness". The second experiment, introducing stimulus onset asynchrony and differing instructions (focused attention vs. redundant targets task), further supports the usefulness of both indices. Statistical properties of both measures are investigated via bootstrapping procedures. |
M. Raemaekers; Douwe P. Bergsma; Richard J. A. Wezel; G. J. Wildt; Albert V. Berg In: Journal of Neurophysiology, vol. 105, no. 2, pp. 872–882, 2011. @article{Raemaekers2011, Cerebral blindness is a loss of vision as a result of postchiasmatic damage to the visual pathways. Parts of the lost visual field can be restored through training. However, the neuronal mechanisms through which training effects occur are still unclear. We therefore assessed training-induced changes in brain function in eight patients with cerebral blindness. Visual fields were measured with perimetry and retinotopic maps were acquired with functional magnetic resonance imaging (fMRI) before and after vision restoration training. We assessed differences in hemodynamic responses between sessions that represented changes in amplitudes of neural responses and changes in receptive field locations and sizes. Perimetry results showed highly varied visual field recovery with shifts of the central visual field border ranging between 1 and 7°. fMRI results showed that, although retinotopic maps were mostly stable over sessions, there was a small shift of receptive field locations toward a higher eccentricity after training in addition to increases in receptive field sizes. In patients with bilateral brain activation, these effects were stronger in the affected than in the intact hemisphere. Changes in receptive field size and location could account for limited visual field recovery (± 1°), although it could not account for the large increases in visual field size that were observed in some patients. Furthermore, the retinotopic maps strongly matched perimetry measurements before training. These results are taken to indicate that local visual field enlargements are caused by receptive field changes in early visual cortex, whereas large-scale improvement cannot be explained by this mechanism. |
Manuel Perea; Chie Nakatani; Cees Leeuwen Transposition effects in reading Japanese Kana: Are they orthographic in nature? Journal Article In: Memory & Cognition, vol. 39, no. 4, pp. 700–707, 2011. @article{Perea2011, One critical question for the front end of models of visual-word recognition and reading is whether the stage of letter position coding is purely orthographic or whether phonology is (to some degree) involved. To explore this issue, we conducted a silent reading experiment in Japanese Kana–a script in which orthography and phonology can be easily separated–using a technique that is highly sensitive to phonological effects (i.e., Rayner's (1975) boundary technique). Results showed shorter fixation times on the target word when the parafoveal preview was a transposed-mora nonword (a.ri.me.ka [アリメカ]-a.me.ri.ka [アメリカ]) than when the preview was a replacement-mora nonword (a.ka.ho.ka [アカホカ] -a.me.ri.ka [アメリカ]). More critically, fixation times on the target word were remarkably similar when the parafoveal preview was a transposed-consonant nonword (a.re.mi.ka [アレミカ]-a.ri.me.ka [アリメカ]) and when the parafoveal preview was an orthographic control nonword (a.ke.hi.ka [アケヒカ]-a.me.ri.ka [アメリカ]). Thus, these findings offer strong support for the view that letter/mora position coding during silent reading is orthographic in nature. |
Gerald Pfeffer; Mathias Abegg; A. Talia Vertinsky; Isabella Ceccherini; Francesco Caroli; Jason J. S. Barton The ocular motor features of adult-onset Alexander disease: A case and review of the literature Journal Article In: Journal of Neuro-Ophthalmology, vol. 31, no. 2, pp. 155–159, 2011. @article{Pfeffer2011, A 51-year-old Chinese man presented with gaze-evoked nystagmus, impaired smooth pursuit and vestibular ocular reflex cancellation, and saccadic dysmetria, along with a family history suggestive of late-onset autosomal dominant parkinsonism. MRI revealed abnormalities of the medulla and cervical spinal cord typical of adult-onset Alexander disease, and genetic testing showed homozygosity for the p.D295N polymorphic allele in the gene encoding the glial fibrillary acidic protein. A review of the literature shows that ocular signs are frequent in adult-onset Alexander disease, most commonly gaze-evoked nystagmus, pendular nystagmus, and/or oculopalatal myoclonus, and less commonly ptosis, miosis, and saccadic dysmetria. These signs are consistent with the propensity of adult-onset Alexander disease to cause medullary abnormalities on neuroimaging. |
Tobias Pflugshaupt; Julia Suchan; Marc André Mandler; Alexander N. Sokolov; Susanne Trauzettel-Klosinski; Hans-Otto Karnath Do patients with pure alexia suffer from a specific word form processing deficit? Evidence from 'wrods with trasnpsoed letetrs' Journal Article In: Neuropsychologia, vol. 49, no. 5, pp. 1294–1301, 2011. @article{Pflugshaupt2011, It is widely accepted that letter-by-letter reading and a pronounced increase in reading time as a function of word length are the hallmark features of pure alexia. Why patients show these two phenomena with respect to underlying cognitive mechanisms is, however, much less clear. Two main hypotheses have been proposed, i.e. impaired discrimination of letters and deficient processing of word forms. While the former deficit can easily be investigated in isolation, previous findings favouring the latter seem confounded. Applying a word reading paradigm with systematically manipulated letter orders in two patients with pure alexia, we demonstrate a word form processing deficit that is not attributable to sublexical letter discrimination difficulties. Moreover, pure alexia-like fixation patterns could be induced in healthy adults by having them read sentences including words with transposed letters, so-called 'jumbled words'. This further corroborates a key role of deficient word form processing in pure alexia. With regard to basic reading research, the present study extends recent evidence for relative, rather than precise, encoding of letter position in the brain. |
Ye Wang; Bogdan F. Iliescu; Jianfu Ma; Kresimir Josic; Valentin Dragoi Adaptive changes in neuronal synchronization in macaque V4 Journal Article In: Journal of Neuroscience, vol. 31, no. 37, pp. 13204–13213, 2011. @article{Wang2011b, A fundamental property of cortical neurons is the capacity to exhibit adaptive changes or plasticity. Whether adaptive changes in cortical responses are accompanied by changes in synchrony between individual neurons and local population activity in sensory cortex is unclear. This issue is important as synchronized neural activity is hypothesized to play an important role in propagating information in neuronal circuits. Here, we show that rapid adaptation (300 ms) to a stimulus of fixed orientation modulates the strength of oscillatory neuronal synchronization in macaque visual cortex (area V4) and influences the ability of neurons to distinguish small changes in stimulus orientation. Specifically, rapid adaptation increases the synchronization of individual neuronal responses with local population activity in the gamma frequency band (30-80 Hz). In contrast to previous reports that gamma synchronization is associated with an increase in firing rates in V4, we found that the postadaptation increase in gamma synchronization is associated with a decrease in neuronal responses. The increase in gamma-band synchronization after adaptation is functionally significant as it is correlated with an improvement in neuronal orientation discrimination performance. Thus, adaptive synchronization between the spiking activity of individual neurons and their local population can enhance temporally insensitive, rate-based coding schemes for sensory discrimination. |
Zheng Wang; Anna W. Roe Trial-to-trial noise cancellation of cortical field potentials in awake macaques by autoregression model with exogenous input (ARX) Journal Article In: Journal of Neuroscience Methods, vol. 194, no. 2, pp. 266–273, 2011. @article{Wang2011a, Gamma band synchronization has drawn increasing interest with respect to its potential role in neuronal encoding strategy and behavior in awake, behaving animals. However, contamination of these recordings by power line noise can confound the analysis and interpretation of cortical local field potential (LFP). Existing denoising methods are plagued by inadequate noise reduction, inaccuracies, and even introduction of new noise components. To carefully and more completely remove such contamination, we propose an automatic method based on the concept of adaptive noise cancellation that utilizes the correlative features of common noise sources, and implement it with an AutoRegressive model with eXogenous Input (ARX). We apply this technique to both simulated data and LFPs recorded in the primary visual cortex of awake macaque monkeys. The analyses here demonstrate more accurate noise removal than conventional notch filters. Our method leaves the desired signal intact and does not introduce artificial noise components. Application of this method to awake monkey V1 recordings reveals a significant power increase in the gamma range evoked by visual stimulation. Our findings suggest that the ARX denoising procedure will be an important pre-processing step in the analysis of large volumes of cortical LFP data as well as high frequency (gamma-band related) electroencephalography/magnetoencephalography (EEG/MEG) applications, one which will help to convincingly dissociate this notorious artifact from gamma-band activity. |
Zhong I. Wang; Louis F. Dell'Osso A unifying model-based hypothesis for the diverse waveforms of infantile nystagmus syndrome Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–18, 2011. @article{Wang2011, We expanded the original behavioral Ocular Motor System (OMS) model for Infantile Nystagmus Syndrome (INS) by incorporating common types of jerk waveforms within a unifying mechanism. Alexander's law relationships were used to produce desired INS null positions and sharpness. At various gaze angles, these relationships influenced the IN slow-phase amplitudes differently, thereby mimicking the gaze-angle effects of INS patients. Transitions from pseudopendular with foveating saccades to jerk waveforms required replacing braking saccades with foveating fast phases and adding a resettable neural integrator in the pursuit pre-motor circuitry. The robust simulations of accurate OMS behavior in the presence of diverse INS waveforms demonstrate that they can all be generated by a loss of pursuit-system damping, supporting this hypothetical origin. |
Tessa Warren; Erik D. Reichle; Nikole D. Patson Lexical and post-lexical complexity effects on eye movements Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–10, 2011. @article{Warren2011, The current study investigated how a post-lexical complexity manipulation followed by a lexical complexity manipulation affects eye movements during reading. Both manipulations caused disruption in all measures on the manipulated words, but the patterns of spillover differed. Critically, the effects of the two kinds of manipulations did not interact, and there was no evidence that post-lexical processing difficulty delayed lexical processing on the next word (cf. Henderson & Ferreira, 1990). This suggests that post-lexical processing of one word and lexical processing of the next can proceed independently and likely in parallel. This finding is consistent with the assumptions of the E-Z Reader model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). |
Tamara L. Watson; B. Krekelberg An equivalent noise investigation of saccadic suppression Journal Article In: Journal of Neuroscience, vol. 31, no. 17, pp. 6535–6541, 2011. @article{Watson2011, Visual stimuli presented just before or during an eye movement are more difficult to detect than those same visual stimuli presented during fixation. This laboratory phenomenon-behavioral saccadic suppression-is thought to underlie the everyday experience of not perceiving the motion created by our own eye movements-saccadic omission. At the neural level, many cortical and subcortical areas respond differently to perisaccadic visual stimuli than to stimuli presented during fixation. Those neural response changes, however, are complex and the link to the behavioral phenomena of reduced detectability remains tentative. We used a well-established model of human visual detection performance to provide a quantitative description of behavioral saccadic suppression and thereby allow a more focused search for its neural mechanisms. We used an equivalent noise method to distinguish between three mechanisms that could underlie saccadic suppression. The first hypothesized mechanism reduces the gain of the visual system, the second increases internal noise levels in a stimulus-dependent manner, and the third increases stimulus uncertainty. All three mechanisms predict that perisaccadic stimuli should be more difficult to detect, but each mechanism predicts a unique pattern of detectability as a function of the amount of external noise. Our experimental finding was that saccades increased detection thresholds at low external noise, but had little influence on thresholds at high levels of external noise. A formal analysis of these data in the equivalent noise analysis framework showed that the most parsimonious mechanism underlying saccadic suppression is a stimulus-independent reduction in response gain. |
Matthew David Weaver; Johan Lauwereyns Attentional capture and hold: the oculomotor correlates of the change detection advantage for faces Journal Article In: Psychological Research, vol. 75, no. 1, pp. 10–23, 2011. @article{Weaver2011, The present study investigated the influence of semantic information on overt attention. Semantic influence on attentional capture and hold mechanisms was explored by measuring oculomotor correlates of the reaction time (RT) and accuracy advantage for faces in the change detection task. We also examined whether the face advantage was due to mandatory processing of faces or an idiosyncratic strategy by participants, by manipulating preknowledge of the object category in which to expect a change. An RT and accuracy advantage was found for detecting changes in faces compared to other objects of less social and biological significance, in the form of greater attentional capture and hold. The faster attentional capture by faces appeared to overcompensate for the longer hold, to produce faster and more accurate manual responses. Preknowledge did not eliminate the face advantage, suggesting that faces receive mandatory processing when competing for attention with stimuli of less sociobiological salience. |
Matthew David Weaver; Johan Lauwereyns; Jan Theeuwes The effect of semantic information on saccade trajectory deviations Journal Article In: Vision Research, vol. 51, no. 10, pp. 1124–1128, 2011. @article{Weaver2011a, In recent years, many studies have explored the conditions in which irrelevant visual distractors affect saccade trajectories. These previous studies mainly focused on the low-level stimulus characteristics and how they affect the magnitude of curvature. The present study explored the possible effect of high-level semantic information on saccade curvature. Semantic saliency was manipulated by presenting irrelevant peripheral taboo versus neutral cue words in a spatial cuing paradigm that allowed for the measurement of trajectory deviations. Findings showed larger saccade trajectory deviations away from taboo (versus neutral) cue words when making a saccade towards another location. This indicates that, due to their high semantic saliency, more inhibition was applied to taboo cue locations to effectively suppress them as competing saccade targets. |
Marine Vernet; Qing Yang; Zoï Kapoula Guiding binocular saccades during reading: A TMS study of the PPC Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 14, 2011. @article{Vernet2011, Reading is an activity based on complex sequences of binocular saccades and fixations. During saccades, the eyes do not move together perfectly: saccades could end with a misalignment, compromising fused vision. During fixations, small disconjugate drift can partly reduce this misalignment. We hypothesized that maintaining eye alignment during reading involves active monitoring from posterior parietal cortex (PPC); this goes against traditional views considering only downstream binocular control. Nine young adults read a text; transcranial magnetic stimulation (TMS) was applied over the PPC every 5 ± 0.2 s. Eye movements were recorded binocularly with Eyelink II. Stimulation had three major effects: (1) disturbance of eye alignment during fixation; (2) increase of saccade disconjugacy leading to eye misalignment; (3) decrease of eye alignment reduction during fixation drift. The effects depend on the side; the right PPC was more involved in maintaining alignment over the motor sequence. Thus, the PPC is actively involved in the control of binocular eye alignment during reading, allowing clear vision. Cortical activation during reading is related to linguistic processes and motor control per se. The study might be of interest for the understanding of deficits of binocular coordination, encountered in several populations, e.g., in children with dyslexia. |
Eduardo Vidal-Abarca; Tomás Martinez; Ladislao Salmerón; Raquel Cerdán; Ramiro Gilabert; Laura Gil; Amelia Mañá; Ana C. Llorens; Ricardo Ferris Recording online processes in task-oriented reading with Read&Answer Journal Article In: Behavior Research Methods, vol. 43, no. 1, pp. 179–192, 2011. @article{VidalAbarca2011, We present an application to study task-oriented reading processes called Read&Answer. The application mimics paper-and-pencil situations in which a reader interacts with one or more documents to perform a specific task, such as answering questions, writing an essay, or similar activities. Read&Answer presents documents and questions with a mask. The reader unmasks documents and questions so that only a piece of information is available at a time. This way the entire interaction between the reader and the documents on the task is recorded and can be analyzed. We describe Read&Answer and present its applications for research and assessment. Finally, we explain two studies that compare readers' performance on Read&Answer with students' reading times and comprehension levels on a paper-and-pencil task, and on a computer task recorded with eye-tracking. The use of Read&Answer produced similar comprehension scores, although it changed the pattern of reading times. |
Eleonora Vig; Michael Dorr; Thomas Martinetz; Erhardt Barth Eye movements show optimal average anticipation with natural dynamic scenes Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 79–88, 2011. @article{Vig2011, A less studied component of gaze allocation in dynamic real-world scenes is the time lag of eye movements in responding to dynamic attention-capturing events. Despite the vast amount of research on anticipatory gaze behaviour in natural situations, such as action execution and observation, little is known about the predictive nature of eye movements when viewing different types of natural or realistic scene sequences. In the present study, we quantify the degree of anticipation during the free viewing of dynamic natural scenes. The cross-correlation analysis of image-based saliency maps with an empirical saliency measure derived from eye movement data reveals the existence of predictive mechanisms responsible for a near-zero average lag between dynamic changes of the environment and the responding eye movements. We also show that the degree of anticipation is reduced when moving away from natural scenes by introducing camera motion, jump cuts, and film-editing. |
Melissa L. -H. Võ; John M. Henderson Object-scene inconsistencies do not capture gaze: evidence from the flash-preview moving-window paradigm Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 6, pp. 1742–1753, 2011. @article{Vo2011, In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene. |
Alice K. Welham; Andy J. Wills Unitization, similarity, and overt attention in categorization and exposure Journal Article In: Memory & Cognition, vol. 39, no. 8, pp. 1518–1533, 2011. @article{Welham2011, Unitization, the creation of new stimulus features by the fusion of preexisting features, is one of the hypothesized processes of perceptual learning (Goldstone Annual Review of Psychology, 49:585-612, 1998). Some argue that unitization occurs to the extent that it is required for successful task performance (e.g., Shiffrin & Lightfoot, 1997), while others argue that unitization is largely independent of functionality (e.g., McLaren & Mackintosh Animal Learning & Behavior, 30:177-200, 2000). Across three experiments, employing supervised category learning and unsupervised exposure, we investigated three predictions of the McLaren and Mackintosh (Animal Learning & Behavior, 30:177-200, 2000) model: (1) Unitization is accompanied by an initial increase in the subjective similarity of stimuli sharing a unitized component; (2) unitization of a configuration occurs through exposure to its components, even when the task does not require it; (3) as unitization approaches completion, salience of the unitized component may be reduced. Our data supported these predictions. We also found that unitization is associated with increases in overt attention to the unitized component, as measured through eye tracking. |
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Karin Mogg; Brendan P. Bradley; Anita Jansen Can(not) take my eyes off it: Attention bias for food in overweight participants Journal Article In: Health Psychology, vol. 30, no. 5, pp. 561–569, 2011. @article{Werthmann2011, Objective: The aim of the current study was to investigate attention biases for food cues, craving, and overeating in overweight and healthy-weight participants. Specifically, it was tested whether attention allocation processes toward high-fat foods differ between overweight and normal weight individuals and whether selective attention biases for food cues are related to craving and food intake. Method: Eye movements were recorded as a direct index of attention allocation in a sample of 22 overweight/obese and 29 healthy-weight female students during a visual probe task with food pictures. In addition, self-reported craving and actual food intake during a bogus "taste-test" were assessed. Results: Overweight participants showed an approach-avoidance pattern of attention allocation toward high-fat food. Overweight participants directed their first gaze more often toward food pictures than healthy-weight individuals, but subsequently showed reduced maintenance of attention on these pictures. For overweight participants, craving was related to initial orientation toward food. Moreover, overweight participants consumed significantly more snack food than healthy-weight participants. Conclusion: Results emphasize the importance of identifying different attention bias components in overweight individuals with regard to craving and subsequent overeating. |
Gregory L. West; Naseem Al-Aidroos; Josh Susskind; Jay Pratt Emotion and action: The effect of fear on saccadic performance Journal Article In: Experimental Brain Research, vol. 209, no. 1, pp. 153–158, 2011. @article{West2011, According to evolutionary accounts, emotions originated to prepare an organism for action (Darwin 1872; Frijda 1986). To investigate this putative relationship between emotion and action, we examined the effect of an emotional stimulus on oculomotor actions controlled by the superior colliculus (SC), which has connections with subcortical structures involved in the perceptual prioritization of emotion, such as the amygdala through the pulvinar. The pulvinar connects the amygdala to cells in the SC responsible for the speed of saccade execution, while not affecting the spatial component of the saccade. We tested the effect of emotion on both temporal and spatial signatures of oculomotor functioning using a gap-distractor paradigm. Changes in spatial programming were examined through saccadic curvature in response to a remote distractor stimulus, while changes in temporal execution were examined using a fixation gap manipulation. We show that following the presentation of a task-irrelevant fearful face, the temporal but not the spatial component of the saccade generation system was affected. |
Sarah J. White; Tessa Warren; Erik D. Reichle Parafoveal preview during reading: Effects of sentence position Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1221–1238, 2011. @article{White2011, Two experiments examined parafoveal preview for words located in the middle of sentences and at sentence boundaries. Parafoveal processing was shown to occur for words at sentence-initial, mid-sentence, and sentence-final positions. Both Experiments 1 and 2 showed reduced effects of preview on regressions out for sentence-initial words. In addition, Experiment 2 showed reduced preview effects on first-pass reading times for sentence-initial words. These effects of sentence position on preview could result from either reduced parafoveal processing for sentence-initial words or other processes specific to word reading at sentence boundaries. In addition to the effects of preview, the experiments also demonstrate variability in the effects of sentence wrap-up on different reading measures, indicating that the presence and time course of wrap-up effects may be modulated by text-specific factors. We also report simulations of Experiment 2 using version 10 of E-Z Reader (Reichle, Warren, & McConnell, 2009), designed to explore the possible mechanisms underlying parafoveal preview at sentence boundaries. |
Ben D. B. Willmore; James A. Mazer; Jack L. Gallant Sparse coding in striate and extrastriate visual cortex Journal Article In: Journal of Neurophysiology, vol. 105, no. 6, pp. 2907–2919, 2011. @article{Willmore2011, Theoretical studies of mammalian cortex argue that efficient neural codes should be sparse. However, theoretical and experimental studies have used different definitions of the term "sparse" leading to three assumptions about the nature of sparse codes. First, codes that have high lifetime sparseness require few action potentials. Second, lifetime-sparse codes are also population-sparse. Third, neural codes are optimized to maximize lifetime sparseness. Here, we examine these assumptions in detail and test their validity in primate visual cortex. We show that lifetime and population sparseness are not necessarily correlated and that a code may have high lifetime sparseness regardless of how many action potentials it uses. We measure lifetime sparseness during presentation of natural images in three areas of macaque visual cortex, V1, V2, and V4. We find that lifetime sparseness does not increase across the visual hierarchy. This suggests that the neural code is not simply optimized to maximize lifetime sparseness. We also find that firing rates during a challenging visual task are higher than theoretical values based on metabolic limits and that responses in V1, V2, and V4 are well-described by exponential distributions. These findings are consistent with the hypothesis that neurons are optimized to maximize information transmission subject to metabolic constraints on mean firing rate. |
Sara A. Winges; John F. Soechting Spatial and temporal aspects of cognitive influences on smooth pursuit Journal Article In: Experimental Brain Research, vol. 211, no. 1, pp. 27–36, 2011. @article{Winges2011, It is well known that prediction is used to overcome processing delays within the motor system and ocular control is no exception. Motion extrapolation is one mechanism that can be used to overcome the visual processing delay. Expectations based on previous experience or cognitive cues are also capable of overcoming this delay. The present experiment was designed to examine how smooth pursuit is altered by cognitive information about the time and/or direction of an upcoming change in target direction. Subjects visually tracked a cursor as it moved at a constant velocity on a computer screen. The target initially moved from left to right and then abruptly reversed horizontal direction and traveled along one of seven possible oblique paths. In half of the trials, a cue was present throughout the trial to signal the position (as well as the time), and/or the direction of the upcoming change. Whenever a position cue (which will be referred to as a timing cue throughout the paper) was present, there were clear anticipatory adjustments to the horizontal velocity component of smooth pursuit. In the presence of a timing cue, a directional cue also led to anticipatory adjustments in the vertical velocity, and hence the direction of smooth pursuit. However, without the timing cue, a directional cue alone produced no anticipation. Thus, in this task, a cognitive spatial cue about the new direction could not be used unless it was made explicit in the time domain. |
Heather Winskel Orthographic and phonological parafoveal processing of consonants, vowels, and tones when reading Thai Journal Article In: Applied Psycholinguistics, vol. 32, no. 4, pp. 739–759, 2011. @article{Winskel2011, Four eye movement experiments investigated whether readers use parafoveal input to gain information about the phonological or orthographic forms of consonants, vowels, and tones in word recognition when reading Thai silently. Target words were presented in sentences preceded by parafoveal previews in which consonant, vowel, or tone information was manipulated. Previews of homophonous consonants (Experiment 1) and concordant vowels (Experiment 2) did not substantially facilitate processing of the target word, whereas the identical previews did. Hence, orthography appears to be playing the prominent role in early word recognition for consonants and vowels. Incorrect tone marker previews (Experiment 3) substantially retarded the subsequent processing of the target word, indicating that lexical tone plays an important role in early word recognition. Vowels in VOP (Experiment 4) did not facilitate processing, which points to vowel position being a significant factor. Primarily, orthographic codes of consonants and vowels (HOP) in conjunction with tone information are assembled from parafoveal input and used for early lexical access. |
Andi K. Winterboer; Martin I. Tietze; Maria K. Wolters; Johanna D. Moore The user model-based summarize and refine approach improves information presentation in spoken dialog systems Journal Article In: Computer Speech and Language, vol. 25, no. 2, pp. 175–191, 2011. @article{Winterboer2011, A common task for spoken dialog systems (SDS) is to help users select a suitable option (e.g., flight, hotel, and restaurant) from the set of options available. As the number of options increases, the system must have strategies for generating summaries that enable the user to browse the option space efficiently and successfully. In the user-model based summarize and refine approach (UMSR, Demberg and Moore, 2006), options are clustered to maximize utility with respect to a user model, and linguistic devices such as discourse cues and adverbials are used to highlight the trade-offs among the presented items. In a Wizard-of-Oz experiment, we show that the UMSR approach leads to improvements in task success, efficiency, and user satisfaction compared to an approach that clusters the available options to maximize coverage of the domain (Polifroni et al., 2003). In both a laboratory experiment and a web-based experimental paradigm employing the Amazon Mechanical Turk platform, we show that the discourse cues in UMSR summaries help users compare different options and choose between options, even though they do not improve verbatim recall. This effect was observed for both written and spoken stimuli. |
C. Witzel; Karl R. Gegenfurtner Is there a lateralized category effect for color? Journal Article In: Journal of Vision, vol. 11, no. 12, pp. 16–16, 2011. @article{Witzel2011, According to the lateralized category effect for color, the influence of color category borders on color perception in fast reaction time tasks is significantly stronger in the right visual field than in the left. This finding has directly related behavioral category effects to the hemispheric lateralization of language. Multiple succeeding articles have built on these findings. We ran ten different versions of the two original experiments with overall 230 naive observers. We carefully controlled the rendering of the stimulus colors and determined the genuine color categories with an appropriate naming method. Congruent with the classical pattern of a category effect, reaction times in the visual search task were lower when the two colors to be discriminated belonged to different color categories than when they belonged to the same category. However, these effects were not lateralized: They appeared to the same extent in both visual fields. |
Lynsey Wolter; Kristen Skovbroten Gorman; Michael K. Tanenhaus Scalar reference, contrast and discourse: Separating effects of linguistic discourse from availability of the referent Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 299–317, 2011. @article{Wolter2011, Listeners expect that a definite noun phrase with a pre-nominal scalar adjective (e.g., the big ...) will refer to an entity that is part of a set of objects contrasting on the scalar dimension, e.g., size (Sedivy, Tanenhaus, Chambers, & Carlson, 1999). Two visual world experiments demonstrate that uttering a referring expression with a scalar adjective makes all members of the relevant contrast set more salient in the discourse model, facilitating subsequent reference to other members of that contrast set. Moreover, this discourse effect is caused primarily by linguistic mention of a scalar adjective and not by the listener's prior visual or perceptual experience. These experiments demonstrate that language processing is sensitive to which information was introduced by linguistic mention, and that the visual world paradigm can be used to tease apart the separate contributions of visual and linguistic information to reference resolution. |
Jason H. Wong; Matthew S. Peterson The interaction between memorized objects and abrupt onsets in oculomotor capture Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 6, pp. 1768–1779, 2011. @article{Wong2011, Recent evidence has been found for a source of task-irrelevant oculomotor capture (defined as when a salient event draws the eyes away from a primary task) that originates from working memory. An object memorized for a nonsearch task can capture the eyes during search. Here, an experiment was conducted that generated interactions between the presence of a memorized object (a colored disk) with the abrupt onset of a new object during visual search. The goal was to compare memory-driven oculomotor capture to oculomotor capture caused by an abrupt onset. This has implications for saccade programming theories, which have little to say about saccades that are influenced by object working memory. Results showed that memorized objects capture the eyes at nearly the same rate as abrupt onsets. When the abrupt onset and a memorized color coincide in the same object, this combination leads to even greater oculomotor capture. Finally, latencies support the competitive integration model: Shorter saccade latencies were found when the memorized color combined with the onset captured the eyes, as compared to either color or onset only. Longer latencies were also found when the color and onset occurred in the same display but were spatially separated. |
Wieske Zoest; Amelia R. Hunt Saccadic eye movements and perceptual judgments reveal a shared visual representation that is increasingly accurate over time Journal Article In: Vision Research, vol. 51, no. 1, pp. 111–119, 2011. @article{Zoest2011, Although there is evidence to suggest visual illusions affect perceptual judgments more than actions, many studies have failed to detect task-dependent dissociations. In two experiments we attempt to resolve the contradiction by exploring the time-course of visual illusion effects on both saccadic eye movements and perceptual judgments, using the Judd illusion. The results showed that, regardless of whether a saccadic response or a perceptual judgment was made, the illusory bias was larger when responses were based on less information, that is, when saccadic latencies were short, or display duration was brief. The time-course of the effect was similar for both the saccadic responses and perceptual judgments, suggesting that both modes may be driven by a shared visual representation. Changes in the strength of the illusion over time also highlight the importance of controlling for the latency of different response systems when evaluating possible dissociations between them. |
Joris Vangeneugden; Patrick A. De Maziere; Marc M. Van Hulle; Tobias Jaeggli; Luc Van Gool; Rufin Vogels Distinct mechanisms for coding of visual actions in macaque temporal cortex Journal Article In: Journal of Neuroscience, vol. 31, no. 2, pp. 385–401, 2011. @article{Vangeneugden2011, Temporal cortical neurons are known to respond to visual dynamic-action displays. Many human psychophysical and functional imaging studies examining biological motion perception have used treadmill walking, in contrast to previous macaque single-cell studies. We assessed the coding of locomotion in rhesus monkey (Macaca mulatta) temporal cortex using movies of stationary walkers, varying both form and motion (i.e., different facing directions) or varying only the frame sequence (i.e., forward vs backward walking). The majority of superior temporal sulcus and inferior temporal neurons were selective for facing direction, whereas a minority distinguished forward from backward walking. Support vector machines using the temporal cortical population responses as input classified facing direction well, but forward and backward walking less so. Classification performance for the latter improved markedly when the within-action response modulation was considered, reflecting differences in momentary body poses within the locomotion sequences. Responses to static pose presentations predicted the responses during the course of the action. Analyses of the responses to walking sequences wherein the start frame was varied across trials showed that some neurons also carried a snapshot sequence signal. Such sequence information was present in neurons that responded to static snapshot presentations and in neurons that required motion. Our data suggest that actions are analyzed by temporal cortical neurons using distinct mechanisms. Most neurons predominantly signal momentary pose. In addition, temporal cortical neurons, including those responding to static pose, are sensitive to pose sequence, which can contribute to the signaling of learned action sequences. |
Shravan Vasishth; Heiner Drenhaus Locality in German Journal Article In: Dialogue and Discourse, vol. 2, no. 1, pp. 59–82, 2011. @article{Vasishth2011, Three experiments (self-paced reading, eyetracking and an ERP study) show that in relative clauses, increasing the distance between the relativized noun and the relative-clause verb makes it more difficult to process the relative-clause verb (the so-called locality effect). This result is consistent with the predictions of several theories (Gibson, 2000; Lewis and Vasishth, 2005), and contradicts the recent claim (Levy, 2008) that in relative-clause structures increasing argument-verb distance makes processing easier at the verb. Levy's expectation-based account predicts that the expectation for a verb becomes sharper as distance is increased and therefore processing becomes easier at the verb. We argue that, in addition to expectation effects (which are seen in the eyetracking study in first-pass regression probability), processing load also increases with increasing distance. This contradicts Levy's claim that heightened expectation leads to lower processing cost. Dependency-resolution cost and expectation-based facilitation are jointly responsible for determining processing cost. |
B. -E. Verhoef; Rufin Vogels; Peter Janssen Synchronization between the end stages of the dorsal and the ventral visual stream Journal Article In: Journal of Neurophysiology, vol. 105, no. 5, pp. 2030–2042, 2011. @article{Verhoef2011, The end stage areas of the ventral (IT) and the dorsal (AIP) visual streams encode the shape of disparity-defined three-dimensional (3D) surfaces. Recent anatomical tracer studies have found direct reciprocal connections between the 3D-shape selective areas in IT and AIP. Whether these anatomical connections are used to facilitate 3D-shape perception is still unknown. We simultaneously recorded multi-unit activity (MUA) and local field potentials in IT and AIP while monkeys discriminated between concave and convex 3D shapes and measured the degree to which the activity in IT and AIP synchronized during the task. We observed strong beta-band synchronization between IT and AIP preceding stimulus onset that decreased shortly after stimulus onset and became modulated by stereo-signal strength and stimulus contrast during the later portion of the stimulus period. The beta-coherence modulation was unrelated to task difficulty, regionally specific, and dependent on the MUA selectivity of the pairs of sites under study. The beta-spike-field coherence in AIP predicted the upcoming choice of the monkey. Several convergent lines of evidence suggested AIP as the primary source of the AIP-IT synchronized activity. The synchronized beta activity seemed to occur during perceptual anticipation and when the system has stabilized to a particular perceptual state but not during active visual processing. Our findings demonstrate for the first time that synchronized activity exists between the end stages of the dorsal and ventral stream during 3D-shape discrimination. |
Sarah Uzzaman; Steve Joordens The eyes know what you are thinking: Eye movements as an objective measure of mind wandering Journal Article In: Consciousness and Cognition, vol. 20, no. 4, pp. 1882–1886, 2011. @article{Uzzaman2011, Paralleling the recent work by Reichle, Reineberg, and Schooler (2010), we explore the use of eye movements as an objective measure of mind wandering while participants performed a reading task. Participants were placed in a self-classified probe-caught mind wandering paradigm while their eye movements were recorded. They were randomly probed every 2–3 min and were required to indicate whether their mind had been wandering. The results show that eye movements were generally less complex when participants reported mind wandering episodes, with both duration and frequency of within-word regressions, for example, becoming significantly reduced. This is consistent with the theoretical claim that the cognitive processes that normally influence eye movements to enhance semantic processing during reading exert less control during mind wandering episodes. |
Seppo Vainio; Raymond Bertram; Anneli Pajunen; Jukka Hyönä Processing modifier-head agreement in long Finnish words: Evidence from eye movements Journal Article In: Acta Linguistica Hungarica, vol. 58, no. 1, pp. 134–156, 2011. @article{Vainio2011, The present study investigates whether processing of an inflected Finnish noun is facilitated when preceded by a modifier in the same case ending. In Finnish, modifiers agree with their head nouns both in case and in number and the agreement is expressed by means of suffixes (e.g., vanha/ssa talo/ssa 'old/in house/in' –> 'in the old house'). Vainio et al. (2003; 2008) showed processing benefits for this kind of modifier-head agreement, when the head nouns were relatively short. However, the effect showed up relatively late in the processing stream, such that word n + 1, the word following the target noun talo/ssa, was read faster when it was preceded by an agreeing modifier (vanha/ssa) than when no modifier was present. This led Vainio et al. to the conclusion that agreement exerts its effect at a later stage, namely at the level of syntactic integration and not at the level of lexical access. The current study investigates whether the same holds when head nouns are considerably longer (e.g., kaupungin/talo/ssa 'city house/in' –> 'in the city hall'). Our results show that the effect of agreement is facilitative in case of longer head nouns as well, but – in contrast to what was found for shorter words – the effect not only appeared late, but was also observed in earlier processing measures. It thus seems that, in processing long words, benefits related to modifier-head agreement are not confined to post-lexical syntactic integration processes, but extend to lexical identification of the head. |
Eva Van Assche; Denis Drieghe; Wouter Duyck; Marijke Welvaert; Robert J. Hartsuiker The influence of semantic constraints on bilingual word recognition during sentence reading Journal Article In: Journal of Memory and Language, vol. 64, no. 1, pp. 88–107, 2011. @article{VanAssche2011, The present study investigates how semantic constraint of a sentence context modulates language-non-selective activation in bilingual visual word recognition. We recorded Dutch-English bilinguals' eye movements while they read cognates and controls in low and high semantically constraining sentences in their second language. Early and late eye-movement measures yielded cognate facilitation, both for low- and high-constraint sentences. Facilitation increased gradually as a function of cross-lingual overlap between translation equivalents. A control experiment showed that the same stimuli did not yield cognate effects in English monolingual controls, ensuring that these effects were not due to any uncontrolled stimulus characteristics. The present study supports models of bilingual word recognition with a limited role for top-down influences of semantic constraints on lexical access in both early and later stages of bilingual word recognition. |
Marije Beilen; Remco J. Renken; Erik S. Groenewold; Frans W. Cornelissen Attentional window set by expected relevance of environmental signals Journal Article In: PLoS ONE, vol. 6, no. 6, pp. e21262, 2011. @article{Beilen2011, The existence of an attentional window – a limited region in visual space at which attention is directed – has been invoked to explain why sudden visual onsets may or may not capture overt or covert attention. Here, we test the hypothesis that observers voluntarily control the size of this attentional window to regulate whether or not environmental signals can capture attention. We used a novel approach to test this: participants' eye movements were tracked while they performed a search task that required dynamic gaze shifts. During the search task, abrupt onsets were presented that cued the target positions at different levels of congruency, and these levels were known to the participant. We determined oculomotor capture efficiency for onsets that appeared at different viewing eccentricities. From these measurements, we could derive each participant's attentional window size as a function of onset congruency. We find that the window was small during the presentation of low-congruency onsets, but increased monotonically in size with an increase in the expected congruency of the onsets. This indicates that the attentional window is under voluntary control and is set according to the expected relevance of environmental signals for the observer's momentary behavioral goals. Moreover, our approach provides a new and exciting method to directly measure the size of the attentional window. |
Goedele Van Belle; Thomas Busigny; Philippe Lefèvre; Sven Joubert; Olivier Felician; Francesco Gentile; Bruno Rossion Impairment of holistic face perception following right occipito-temporal damage in prosopagnosia: Converging evidence from gaze-contingency Journal Article In: Neuropsychologia, vol. 49, no. 11, pp. 3145–3150, 2011. @article{VanBelle2011, Gaze-contingency is a method traditionally used to investigate the perceptual span in reading by selectively revealing/masking a portion of the visual field in real time. Introducing this approach in face perception research showed that the performance pattern of a brain-damaged patient with acquired prosopagnosia (PS) in a face matching task was reversed compared to that of normal observers: the patient showed almost no further decrease of performance when only one facial part (eye, mouth, nose, etc.) was available at a time (foveal window condition, forcing part-based analysis), but a very large impairment when the fixated part was selectively masked (mask condition, promoting holistic perception) (Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010a; Van Belle, De Graef, Verfaillie, Rossion, & Lefèvre, 2010b). Here we tested the same manipulation in a recently reported case of pure prosopagnosia (GG) with unilateral right hemisphere damage (Busigny, Joubert, Felician, Ceccaldi, & Rossion, 2010). Contrary to normal observers, GG was also significantly more impaired with a mask than with a window, demonstrating impaired holistic face perception. Together with our previous study, these observations support a generalized account of acquired prosopagnosia as a critical impairment of holistic (individual) face perception, implying that this function is a key element of normal human face recognition. 
Furthermore, the similar behavioral pattern of the two patients despite different lesion localizations supports a distributed network view of the neural face processing structures, suggesting that the key function of human face processing, namely holistic perception of individual faces, requires the activity of several brain areas of the right hemisphere and their mutual connectivity. |
Lise Van der Haegen; Marc Brysbaert The mechanisms underlying the interhemispheric integration of information in foveal word recognition: Evidence for transcortical inhibition Journal Article In: Brain and Language, vol. 118, no. 3, pp. 81–89, 2011. @article{VanderHaegen2011, Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex into two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: a bilateral projection of visual input in foveal vision, or interhemispheric inhibition and integration as proposed by the SERIOL model of visual word recognition. Experiment 1 made use of pairs of words and nonwords with a length of four letters each. Participants had to name the word and ignore the nonword. The visual field in which the word was presented and the distance between the word and the nonword were manipulated. The results showed that the typical right visual field advantage was observed only when the word and the nonword were clearly separated. When the distance between them became smaller, the right visual field advantage turned into a left visual field advantage, in line with the interhemispheric inhibition mechanism postulated by the SERIOL model. Experiment 2, using 5-letter stimuli, confirmed that this result was not due to the eccentricity of the word relative to the fixation location but to the distance between the word and the nonword. |
Lise Van der Haegen; Qing Cai; Ruth Seurinck; Marc Brysbaert Further fMRI validation of the visual half field technique as an indicator of language laterality: A large-group analysis Journal Article In: Neuropsychologia, vol. 49, no. 10, pp. 2879–2888, 2011. @article{VanderHaegen2011a, The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to replace the invasive Wada-test. We evaluated the usefulness of behavioral visual half field (VHF) tasks for screening a large sample of healthy left-handers. Laterality indices (LIs) calculated on the basis of the latencies in a word and picture naming VHF task were compared to the brain activity measured in a silent word generation task in fMRI (pars opercularis/BA44 and pars triangularis/BA45). Results confirmed the usefulness of the VHF-tasks as a screening device. None of the left-handed participants with clear right visual field (RVF) advantages in the picture and word naming task showed right hemisphere dominance in the scanner. In contrast, 16/20 participants with a left visual field (LVF) advantage in both word and picture naming turned out to have atypical right brain dominance. Results were less clear for participants who failed to show clear VHF asymmetries (below 20 ms RVF advantage and below 60 ms LVF advantage) or who had inconsistent asymmetries in picture and word naming. These results indicate that the behavioral tasks can mainly provide useful information about the direction of speech dominance when both VHF differences clearly point in the same direction. |
Stefan Van der Stigchel; Jelmer P. De Vries; R. Bethlehem; Jan Theeuwes A global effect of capture saccades Journal Article In: Experimental Brain Research, vol. 210, no. 1, pp. 57–65, 2011. @article{VanderStigchel2011, When two target elements are presented in close proximity, the endpoint of a saccade is generally positioned at an intermediate location ('global effect'). Here, we investigated whether the global effect also occurs for eye movements executed to distracting elements. To this end, we adapted the oculomotor capture paradigm such that on a subset of trials, two distractors were presented. When the two distractors were closely aligned, erroneous eye movements were initiated to a location in between the two distractors. This effect was also present, though to a lesser extent, when the two distractors were presented further apart. In a second experiment, we investigated the global effect for eye movements in the presence of two targets. A strong global effect was observed when two targets were presented closely aligned, while this effect was absent when the targets were further apart. This study shows that there is a global effect when saccades are captured by distractors. This 'capture global' effect differs from the traditional global effect that occurs when two targets are presented, because the global effect of capture saccades also occurs for remote elements. The spatial dynamics of this global effect are explained in terms of the population coding theory. |
Julie A. Van Dyke; Brian McElree Cue-dependent interference in comprehension Journal Article In: Journal of Memory and Language, vol. 65, no. 3, pp. 247–263, 2011. @article{VanDyke2011, The role of interference as a primary determinant of forgetting in memory has long been accepted; however, its role as a contributor to poor comprehension is just beginning to be understood. The current paper reports two studies in which speed-accuracy tradeoff and eye-tracking methodologies were used with the same materials to provide converging evidence for the role of syntactic and semantic cues as mediators of both proactive interference (PI) and retroactive interference (RI) during comprehension. Consistent with previous work (e.g., Van Dyke & Lewis, 2003), we found that syntactic constraints at the retrieval site are among the cues that drive retrieval in comprehension, and that these constraints effectively limit interference from potential distractors with semantic/pragmatic properties in common with the target constituent. The data are discussed in terms of a cue-overload account, in which interference both arises from and is mediated through a direct-access retrieval mechanism that utilizes a linear, weighted cue-combinatoric scheme. |
Huihui Zhou; Robert Desimone Feature-based attention in the Frontal Eye Field and area V4 during visual search Journal Article In: Neuron, vol. 70, no. 6, pp. 1205–1217, 2011. @article{Zhou2011, When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target. |
Eckart Zimmermann; David C. Burr; M. Concetta Morrone Spatiotopic visual maps revealed by saccadic adaptation in humans Journal Article In: Current Biology, vol. 21, no. 16, pp. 1380–1384, 2011. @article{Zimmermann2011, Saccadic adaptation [1] is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals. |
Serap Yiğit-Elliott; John Palmer; Cathleen M. Moore Distinguishing blocking from attenuation in visual selective attention Journal Article In: Psychological Science, vol. 22, no. 6, pp. 771–780, 2011. @article{YiǧitElliott2011, Sensory information must be processed selectively in order to represent the world and guide behavior. How does such selection occur? Here we consider two alternative classes of selection mechanisms: in blocking, unattended stimuli are blocked entirely from access to downstream processes, whereas in attenuation, unattended stimuli are reduced in strength but, if strong enough, can still access downstream processes. Existing evidence as to whether blocking or attenuation is a more accurate model of human performance is mixed. Capitalizing on a general distinction between blocking and attenuation – blocking cannot be overcome by strong stimuli, whereas attenuation can – we measured how attention interacted with the strength of stimuli in two spatial selection paradigms, spatial filtering and spatial monitoring. The evidence was consistent with blocking for the filtering paradigm and with attenuation for the monitoring paradigm. This approach provides a general measure of the fate of unattended stimuli. |
Shlomit Yuval-Greenberg; Leon Y. Deouell Scalp-recorded induced gamma-band responses to auditory stimulation and its correlations with saccadic muscle-activity Journal Article In: Brain Topography, vol. 24, no. 1, pp. 30–39, 2011. @article{YuvalGreenberg2011, We previously showed that the transient broadband induced gamma-band response in EEG (iGBRtb) appearing around 200-300 ms following a visual stimulus reflects the contraction of extra-ocular muscles involved in the execution of saccades, rather than neural oscillations. Several previous studies reported induced gamma-band responses also following auditory stimulation. It is still an open question whether, similarly to visual paradigms, such auditory paradigms are also sensitive to the saccadic confound. In the current study we address this question using simultaneous eye-tracking and EEG recordings during an auditory oddball paradigm. Subjects were instructed to respond to a rare target defined by sound source location, while fixating on a central screen. Results show that, similar to what was found in visual paradigms, saccadic rate displayed typical temporal dynamics including a post-stimulus decrease followed by an increase. This increase was more moderate, had a longer latency, and was less consistent across subjects than was found in the visual case. Crucially, the temporal dynamics of the induced gamma response were similar to those of saccadic-rate modulation. This suggests that the auditory induced gamma-band responses recorded on the scalp may also be affected by saccadic muscle activity. |
C. Yu-Wai-Man; K. Petheram; A. W. Davidson; T. Williams; P. G. Griffiths A supranuclear disorder of ocular motility as a rare initial presentation of motor neurone disease Journal Article In: Neuro-Ophthalmology, vol. 35, no. 1, pp. 38–39, 2011. @article{YuWaiMan2011, A case is described of motor neurone disease presenting with an ocular motor disorder characterised by saccadic intrusions, impaired horizontal and vertical saccades, and apraxia of eyelid opening. The occurrence of eye movement abnormalities in motor neurone disease is discussed. |
Michael Zehetleitner; Michael Hegenloh; Hermann J. Muller Visually guided pointing movements are driven by the salience map Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 24–24, 2011. @article{Zehetleitner2011, Visual salience maps are assumed to mediate target selection decisions in a motor-unspecific manner; accordingly, modulations of salience influence yes/no target detection or left/right localization responses in manual key-press search tasks, as well as ocular or skeletal movements to the target. Although widely accepted, this core assumption is based on little psychophysical evidence. At least four modulations of salience are known to influence the speed of visual search for feature singletons: (i) feature contrast, (ii) cross-trial dimension sequence, (iii) semantic pre-cueing of the target dimension, and (iv) dimensional target redundancy. If salience also guides manual pointing movements, their initiation latencies (and durations) should be affected by the same four manipulations of salience. Four experiments, each examining one of these manipulations, revealed this to be the case. Thus, these effects are seen independently of the motor response required to signal the perceptual decision (e.g., directed manual pointing as well as simple yes/no detection responses). This supports the notion of a motor-unspecific salience map, which guides covert attention as well as overt eye and hand movements. |
Qing Yang; Zoï Kapoula Distinct control of initiation and metrics of memory-guided saccades and vergence by the FEF: A TMS study Journal Article In: PLoS ONE, vol. 6, no. 5, pp. e20322, 2011. @article{Yang2011, BACKGROUND: The initiation of memory-guided saccades is known to be controlled by the frontal eye field (FEF). Recent physiological studies showed the existence of an area close to the FEF that also controls vergence initiation and execution. This study explores the effect of transcranial magnetic stimulation (TMS) over the FEF on the control of memory-guided saccade-vergence eye movements. METHODOLOGY/PRINCIPAL FINDINGS: Subjects had to make an eye movement in the dark towards a target flashed 1 sec earlier (memory delay); the location of the target relative to the fixation point was such as to require either a vergence along the median plane, or a saccade, or a saccade with vergence; trials were interleaved. Single-pulse TMS was applied over the left or right FEF; it was delivered 100 ms after the end of the memory delay, i.e., the extinction of the fixation LED that served as the "go" signal. Twelve healthy subjects participated in the study. TMS of the left or right FEF prolonged the latency of all types of eye movements; the increase varied from 21 to 56 ms and was particularly strong for divergence movements. This indicates that the FEF is involved in the initiation of all types of memory-guided movement in 3D space. TMS of the FEF also altered accuracy, but only for leftward saccades combined with either convergence or divergence; intrasaccadic vergence also increased after TMS of the FEF. CONCLUSIONS/SIGNIFICANCE: The results suggest anisotropy in the quality of space memory and are discussed in the context of other known perceptual-motor anisotropies. |
Shun-Nan Yang; Yu-Chi Tai; Hannu Laukkanen; James E. Sheedy Effects of ocular transverse chromatic aberration on peripheral word identification Journal Article In: Vision Research, vol. 51, no. 21-22, pp. 2273–2281, 2011. @article{Yang2011a, Transverse chromatic aberration (TCA) smears the retinal image of peripheral stimuli. We previously found that TCA significantly reduces the ability to recognize letters presented in the near fovea by degrading image quality and exacerbating the crowding effect from adjacent letters. The present study examined whether TCA has a significant effect on near foveal and peripheral word identification, and whether within-word orthographic facilitation interacts with the TCA effect to affect word identification. Subjects were briefly presented a 6- to 7-letter word of high or low frequency in each trial. Target words were generated with a weak or strong horizontal color fringe to attenuate the TCA in the right periphery and exacerbate it in the left. The center of the target word was 1°, 2°, 4°, or 6° to the left or right of a fixation point. Subjects' eye position was monitored with an eye-tracker to ensure proper fixation before target presentation. They were required to report the identity of the target word as quickly and accurately as possible. Results show a significant effect of color fringe on the latency and accuracy of word recognition, indicating an effect of TCA. The observed TCA effect was more salient in the right periphery, where it was also more strongly modulated by word frequency. Individuals' subjective preference for color-fringed text was correlated with the TCA effect in the near periphery. Our results suggest that TCA significantly affects peripheral word identification, especially in the right periphery. Contextual facilitation such as word frequency interacts with TCA to influence the accuracy and latency of word recognition. |
Victoria Yanulevskaya; Jan Bernard C. Marsman; Frans W. Cornelissen; Jan Mark Geusebroek An image statistics-based model for fixation prediction Journal Article In: Cognitive Computation, vol. 3, no. 1, pp. 94–104, 2011. @article{Yanulevskaya2011, The problem of predicting where people look – or, equivalently, salient region detection – has been related to the statistics of several types of low-level image features. Among these features, contrast and edge information seem to have the highest correlation with fixation locations. The contrast distribution of natural images can be adequately characterized using a two-parameter Weibull distribution, which captures the structure of local contrast and edge frequency in a highly meaningful way. We exploit these observations and investigate whether the parameters of the Weibull distribution constitute a simple model for predicting where people fixate when viewing natural images. Using a set of images with associated eye movements, we assess the joint distribution of the Weibull parameters at fixated and non-fixated regions. Then, we build a simple classifier based on the log-likelihood ratio between these two joint distributions. Our results show that as few as two values per image region are already enough to achieve a performance comparable with the state-of-the-art in bottom-up saliency prediction. |
Bo Yao; Christoph Scheepers Contextual modulation of reading rate for direct versus indirect speech quotations Journal Article In: Cognition, vol. 121, no. 3, pp. 447–453, 2011. @article{Yao2011, In human communication, direct speech (e.g., Mary said: "I'm hungry") is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former. |
Jian-Gao Yao; Xin Gao; Hong-Mei Yan; Chao-Yi Li Field of attention for instantaneous object recognition Journal Article In: PLoS ONE, vol. 6, no. 1, pp. e16343, 2011. @article{Yao2011a, Instantaneous object discrimination and categorization are fundamental cognitive capacities performed with the guidance of visual attention. Visual attention enables selection of a salient object within a limited area of the visual field, which we refer to as the "field of attention" (FA). Though there is some evidence concerning the spatial extent of object recognition, the following questions remain open: (a) how large is the FA for rapid object categorization, (b) how is the accuracy of attention distributed over the FA, and (c) how fast can complex objects be categorized when presented against backgrounds formed by natural scenes. |
Eiling Yee; Stacy Huffstetler; Sharon L. Thompson-Schill Function follows form: Activation of shape and function features during object identification Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 3, pp. 348–363, 2011. @article{Yee2011, Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function follows form. We used eye movements to explore whether activating one object's concept leads to the activation of others that share perceptual (shape) or abstract (function) features. Participants viewed 4-picture displays and clicked on the picture corresponding to a heard word. In critical trials, the conceptual representation of 1 of the objects in the display was similar in shape or function (i.e., its purpose) to the heard word. Importantly, this similarity was not apparent in the visual depictions (e.g., for the target Frisbee, the shape-related object was a triangular slice of pizza, a shape that a Frisbee cannot take); preferential fixations on the related object were therefore attributable to overlap of the conceptual representations on the relevant features. We observed relatedness effects for both shape and function, but shape effects occurred earlier than function effects. We discuss the implications of these findings for current accounts of the representation of semantic memory. |
Li-Hao Yeh; Ana I. Schwartz; Aaron L. Baule The impact of text-structure strategy instruction on the text recall and eye-movement patterns of second language English readers Journal Article In: Reading Psychology, vol. 32, no. 6, pp. 495–519, 2011. @article{Yeh2011, Previous studies have demonstrated the efficacy of the Text Structure Strategy for improving text recall. The strategy emphasizes the identification of text structure for encoding and recalling information. Traditionally, the efficacy of this strategy has been measured through free recall. The present study examined whether recall and eye-movement patterns of second language English readers would benefit from training on the strategy. Participants' free recall and eye-movement patterns were measured before and after training. There was a significant increase in recall at posttest and a change in eye-movement patterns, reflecting additional processing time of phrases and words signaling the text structure. |
Z. V. J. Woodhead; S. L. E. Brownsett; N. S. Dhanjal; C. Beckmann; Richard J. S. Wise The visual word form system in context Journal Article In: Journal of Neuroscience, vol. 31, no. 1, pp. 193–199, 2011. @article{Woodhead2011, According to the “modular” hypothesis, reading is a serial feedforward process, with part of left ventral occipitotemporal cortex the earliest component tuned to familiar orthographic stimuli. Beyond this region, the model predicts no response to arrays of false font in reading-related neural pathways. An alternative “connectionist” hypothesis proposes that reading depends on interactions between feedforward projections from visual cortex and feedback projections from phonological and semantic systems, with no visual component exclusive to orthographic stimuli. This is compatible with automatic processing of false font throughout visual and heteromodal sensory pathways that support reading, in which responses to words may be greater than, but not exclusive of, responses to false font. This functional imaging study investigated these alternative hypotheses by using narrative texts and equivalent arrays of false font and varying the hemifield of presentation using rapid serial visual presentation. The “null” baseline comprised a decision on visually presented numbers. Preferential activity for narratives relative to false font, insensitive to hemifield of presentation, was distributed along the ventral left temporal lobe and along the extent of both superior temporal sulci. Throughout this system, activity during the false font conditions was significantly greater than during the number task, with activity specific to the number task confined to the intraparietal sulci. Therefore, both words and false font are extensively processed along the same temporal neocortical pathways, separate from the more dorsal pathways that process numbers. These results are incompatible with a serial, feedforward model of reading. |
Jessica M. Wright; Adam P. Morris; Bart Krekelberg Weighted integration of visual position information Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 11–11, 2011. @article{Wright2011, The ability to localize visual objects is a fundamental component of human behavior and requires the integration of position information from object components. The retinal eccentricity of a stimulus and the locus of spatial attention can affect object localization, but it is unclear whether these factors alter the global localization of the object, the localization of object components, or both. We used psychophysical methods in humans to quantify behavioral responses in a centroid estimation task. Subjects located the centroid of briefly presented random dot patterns (RDPs). A peripheral cue was used to bias attention toward one side of the display. We found that although subjects were able to localize centroid positions reliably, they typically had a bias toward the fovea and a shift toward the locus of attention. We compared quantitative models that explain these effects either as biased global localization of the RDPs or as anisotropic integration of weighted dot component positions. A model that allowed retinal eccentricity and spatial attention to alter the weights assigned to individual dot positions best explained subjects' performance. These results show that global position perception depends on both the retinal eccentricity of stimulus components and their positions relative to the current locus of attention. |
Minnan Xu-Wilson; Jing Tian; Reza Shadmehr; David S. Zee TMS perturbs saccade trajectories and unmasks an internal feedback controller for saccades Journal Article In: Journal of Neuroscience, vol. 31, no. 32, pp. 11537–11546, 2011. @article{XuWilson2011, When we applied a single pulse of transcranial magnetic stimulation (TMS) to any part of the human head during a saccadic eye movement, the ongoing eye velocity was reduced as early as 45 ms after the TMS, and lasted ∼32 ms. The perturbation to the saccade trajectory was not due to a mechanical effect of the lid on the eye (e.g., from blinks). When the saccade involved coordinated movements of both the eyes and the lids, e.g., in vertical saccades, TMS produced a synchronized inhibition of the motor commands to both eye and lid muscles. The TMS-induced perturbation of the eye trajectory did not show habituation with repetition, and was present in both pro-saccades and anti-saccades. Despite the perturbation, the eye trajectory was corrected within the same saccade with compensatory motor commands that guided the eyes to the target. This within-saccade correction did not rely on visual input, suggesting that the brain monitored the oculomotor commands as the saccade unfolded, maintained a real-time estimate of the position of the eyes, and corrected for the perturbation. TMS disrupted saccades regardless of the location of the coil on the head, suggesting that the coil discharge engages a nonhabituating startle-like reflex system. This system affects ongoing motor commands upstream of the oculomotor neurons, possibly at the level of the superior colliculus or omnipause neurons. Therefore, a TMS pulse centrally perturbs saccadic motor commands, which are monitored possibly via efference copy and are corrected via internal feedback. |
Hamed Zivari Adab; Rufin Vogels Practicing coarse orientation discrimination improves orientation signals in macaque cortical area V4 Journal Article In: Current Biology, vol. 21, no. 19, pp. 1661–1666, 2011. @article{ZivariAdab2011, Practice improves the performance in visual tasks, but mechanisms underlying this adult brain plasticity are unclear. Single-cell studies reported no [1], weak [2], or moderate [3, 4] perceptual learning-related changes in macaque visual areas V1 and V4, whereas none were found in middle temporal (MT) [5]. These conflicting results and modeling of human (e.g., [6, 7]) and monkey data [8] suggested that changes in the readout of visual cortical signals underlie perceptual learning, rather than changes in these signals. In the V4 learning studies, monkeys discriminated small differences in orientation, whereas in the MT study, the animals discriminated opponent motion directions. Analogous to the latter study, we trained monkeys to discriminate static orthogonal orientations masked by noise. V4 neurons showed robust increases in their capacity to discriminate the trained orientations during the course of the training. This effect was observed during discrimination and passive fixation but specifically for the trained orientations. The improvement in neural discrimination was due to decreased response variability and an increase of the difference between the mean responses for the two trained orientations. These findings demonstrate that perceptual learning in a coarse discrimination task indeed can change the response properties of a cortical sensory area. |
Marc Zirnsak; R. G. K. Gerhards; Roozbeh Kiani; Markus Lappe; Fred H. Hamker Anticipatory saccade target processing and the presaccadic transfer of visual features Journal Article In: Journal of Neuroscience, vol. 31, no. 49, pp. 17887–17891, 2011. @article{Zirnsak2011, As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007). |
Yang Zhang; Ming Zhang Spatial working memory load impairs manual but not saccadic inhibition of return Journal Article In: Vision Research, vol. 51, no. 1, pp. 147–153, 2011. @article{Zhang2011, Although spatial working memory has been shown to play a central role in manual IOR (Castel, Pratt, & Craik, 2003), it is so far unclear whether spatial working memory is involved in saccadic IOR. The present study sought to address this question by using a dual task paradigm, in which the participants performed an IOR task while keeping a set of locations in spatial working memory. While manual IOR was eliminated, saccadic IOR was not affected by spatial working memory load. These findings suggest that saccadic IOR does not rely on spatial working memory to process inhibitory tagging. |
Anke Huckauf; Mario H. Urbina Object selection in gaze controlled systems: What you don't look at is what you get Journal Article In: ACM Transactions on Applied Perception, vol. 8, no. 2, pp. 1–14, 2011. @article{Huckauf2011, Controlling computers using eye movements can provide a fast and efficient alternative to the computer mouse. However, implementing object selection in gaze-controlled systems is still a challenge. Dwell times or fixations on a certain object, typically used to elicit the selection of that object, show several disadvantages. We studied deviations of critical thresholds using an individual, task-specific adaptation method, which revealed enormous variability in optimal dwell times. We also developed an alternative approach using antisaccades for selection: highlighted objects are copied to one side of the object, and the object is selected by fixating the side opposite to that copy, which requires inhibiting an automatic gaze shift toward the new object. Both techniques were compared in a selection task. Two experiments revealed superior performance in terms of errors for the individually adapted dwell times. Antisaccades provide an alternative approach to dwell-time selection, but they did not show an improvement over dwell times. We discuss potential improvements to the antisaccade implementation with which antisaccades might become a serious alternative to dwell times for object selection in gaze-controlled systems. |
V. C. Huddy; Timothy L. Hodgson; M. A. Ron; Thomas R. E. Barnes; Eileen M. Joyce Abnormal negative feedback processing in first episode schizophrenia: Evidence from an oculomotor rule switching task Journal Article In: Psychological Medicine, vol. 41, no. 9, pp. 1805–1814, 2011. @article{Huddy2011, Background. Previous studies have shown that patients with schizophrenia are impaired on executive tasks, where positive and negative feedback is used to update task rules or switch attention. However, research to date using saccadic tasks has not revealed clear deficits in task switching in these patients. The present study used an oculomotor 'rule switching' task to investigate the use of negative feedback when switching between task rules in people with schizophrenia. Method. A total of 50 patients with first episode schizophrenia and 25 healthy controls performed a task in which the association between a centrally presented visual cue and the direction of a saccade could change from trial to trial. Rule changes were heralded by an unexpected negative feedback, indicating that the cue-response mapping had reversed. Results. Schizophrenia patients were found to make increased errors following a rule switch, but these were almost entirely the result of executing saccades away from the location at which the negative feedback had been presented on the preceding trial. This impairment in negative feedback processing was independent of IQ. Conclusions. The results not only confirm the existence of a basic deficit in stimulus–response rule switching in schizophrenia, but also suggest that this arises from aberrant processing of response outcomes, resulting in a failure to appropriately update rules. The findings are discussed in the context of neurological and pharmacological abnormalities in the condition that may disrupt prediction error signalling in schizophrenia. |
Lynn Huestegge; Jos J. Adam Oculomotor interference during manual response preparation: Evidence from the response-cueing paradigm Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 3, pp. 702–707, 2011. @article{Huestegge2011, Preparation provided by visual location cues is known to speed up behavior. However, the role of concurrent saccades in response to visual cues remains unclear. In this study, participants performed a spatial precueing task by pressing one of four response keys with one of four fingers (two of each hand) while eye movements were monitored. Prior to the stimulus, we presented a neutral cue (baseline), a hand cue (corresponding to left vs. right positions), or a finger cue (corresponding to inner vs. outer positions). Participants either remained fixated on a central fixation point or moved their eyes freely. The results demonstrated that saccades during the cueing interval altered the pattern of cueing effects. Finger cueing trials in which saccades were spatially incompatible (vs. compatible) with the subsequently required manual response exhibited slower manual RTs. We propose that interference between saccades and manual responses affects manual motor preparation. |
Lynn Huestegge; Andrea M. Philipp Effects of spatial compatibility on integration processes in graph comprehension Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 6, pp. 1903–1915, 2011. @article{Huestegge2011a, A precondition for efficiently understanding and memorizing graphs is the integration of all relevant graph elements and their meaning. In the present study, we analyzed integration processes by manipulating the spatial compatibility between elements in the data region and the legend. In Experiment 1, participants judged whether bar graphs depicting either statistical main effects or interactions correspond to previously presented statements. In Experiments 2 and 3, the same was tested with line graphs of varying complexity. In Experiment 4, participants memorized line graphs for a subsequent validation task. Throughout the experiments, eye movements were recorded. The results indicated that data-legend compatibility reduced the time needed to understand graphs, as well as the time needed to retrieve relevant graph information from memory. These advantages went hand in hand with a decrease of gaze transitions between the data region and the legend, indicating that data-legend compatibility decreases the difficulty of integration processes. |
Falk Huettig; Gerry T. M. Altmann In: Quarterly Journal of Experimental Psychology, vol. 64, no. 1, pp. 122–145, 2011. @article{Huettig2011, Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment. |
Falk Huettig; James M. McQueen The nature of the visual environment induces implicit biases during language-mediated visual search Journal Article In: Memory & Cognition, vol. 39, no. 6, pp. 1068–1084, 2011. @article{Huettig2011a, Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search. |
Amelia R. Hunt; P. Cavanagh Remapped visual masking Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 1–8, 2011. @article{Hunt2011, Cells in saccade control areas respond if a saccade is about to bring a target into their receptive fields (J. R. Duhamel, C. L. Colby, & M. R. Goldberg, 1992). This remapping process should shift the retinal location from which attention selects target information (P. Cavanagh, A. R. Hunt, S. R. Afraz, & M. Rolfs, 2010). We examined this attention shift in a masking experiment where target and mask were presented just before an eye movement. In a control condition with no eye movement, masks interfered with target identification only when they spatially overlapped. Just before a saccade, however, a mask overlapping the target had less effect, whereas a mask placed in the target's remapped location was quite effective. The remapped location is the retinal position the target will have after the upcoming saccade, which corresponds to neither the retinotopic nor spatiotopic location of the target before the saccade. Both effects are consistent with a pre-saccadic shift in the location from which attention selects target information. In the case of retinally aligned target and mask, the shift of attention away from the target location reduces masking, but when the mask appears at the target's remapped location, attention's shift to that location brings in mask information that interferes with the target identification. |
Marc Hurwitz; Derick Valadao; James Danckert Static versus dynamic judgments of spatial extent Journal Article In: Experimental Brain Research, vol. 209, no. 2, pp. 271–286, 2011. @article{Hurwitz2011, Research exploring how scanning affects judgments of spatial extent has produced conflicting results. We conducted four experiments on line bisection judgments measuring ocular and pointing behavior, with line length, position, speed, acceleration, and direction of scanning manipulated. Ocular and pointing judgments produced distinct patterns. For static judgments (i.e., no scanning), the eyes were sensitive to position and line length with pointing much less sensitive to these factors. For dynamic judgments (i.e., scanning the line), bisection biases were influenced by the speed of scanning but not acceleration, while both ocular and pointing results varied with scan direction. We suggest that static and dynamic probes of spatial judgments are different. Furthermore, the substantial differences seen between static and dynamic bisection suggest the two invoke different neural processes for computing spatial extent for ocular and pointing judgments. |
Samuel B. Hutton; S. Nolte The effect of gaze cues on attention to print advertisements Journal Article In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 887–892, 2011. @article{Hutton2011, Print advertisements often employ images of humans whose gaze may be focussed on an object or region within the advertisement. Gaze cues are powerful factors in determining the focus of our attention, but there have been no systematic studies exploring the impact of gaze cues on attention to print advertisements. We tracked participants' eyes whilst they read an on-screen magazine containing advertisements in which the model either looked at the product being advertised or towards the viewer. When the model's gaze was directed at the product, participants spent longer looking at the product, the brand logo and the rest of the advertisement compared to when the model's gaze was directed towards the viewer. These results demonstrate that the focus of readers' attention can be readily manipulated by gaze cues provided by models in advertisements, and that these influences go beyond simply drawing attention to the cued area of the advertisement. |
Alex D. Hwang; Hsueh-Cheng Wang; Marc Pomplun Semantic guidance of eye movements in real-world scenes Journal Article In: Vision Research, vol. 51, no. 10, pp. 1192–1205, 2011. @article{Hwang2011, The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. |
Jukka Hyönä; Raymond Bertram Optimal viewing position effects in reading Finnish Journal Article In: Vision Research, vol. 51, no. 11, pp. 1279–1287, 2011. @article{Hyoenae2011, The present study examined effects of the initial landing position in words on eye behavior during reading of long and short Finnish compound words. The study replicated OVP and IOVP effects previously found in French, German and English - languages structurally distinct from Finnish, suggesting that the effects generalize across structurally different alphabetic languages. The results are consistent with the view that the landing position effects appear at the prelexical stage of word processing, as landing position effects were not modulated by word frequency. Moreover, the OVP effects are in line with a visuomotor explanation making recourse to visual acuity constraints. |
Lorelei R. Howard; Dharshan Kumaran; Hauður F. Ólafsdóttir; Hugo J. Spiers Double dissociation between hippocampal and parahippocampal responses to object-background context and scene novelty Journal Article In: Journal of Neuroscience, vol. 31, no. 14, pp. 5253–5261, 2011. @article{Howard2011, Several recent models of medial temporal lobe (MTL) function have proposed that the parahippocampal cortex processes context information, the perirhinal cortex processes item information, and the hippocampus binds together items and contexts. While evidence for a clear functional distinction between the perirhinal cortex and other regions within the MTL has been well supported, it has been less clear whether such a dissociation exists between the hippocampus and parahippocampal cortex. In the current study, we use a novel approach applying a functional magnetic resonance imaging adaptation paradigm to address these issues. During scanning, human subjects performed an incidental target detection task while viewing trial-unique sequentially presented pairs of natural scenes, each containing a single prominent object. We observed a striking double dissociation between the hippocampus and parahippocampal cortex, with the former showing a selective sensitivity to changes in the spatial relationship between objects and their background context and the latter engaged only by scene novelty. Our findings provide compelling support for the hypothesis that rapid item-context binding is a function of the hippocampus, rather than the parahippocampal cortex, with the former acting to detect relational novelty of this nature through its function as a match-mismatch detector. |
Yanbo Hu; Robin Walker The neural basis of parallel saccade programming: An fMRI study Journal Article In: Journal of Cognitive Neuroscience, vol. 23, no. 11, pp. 3669–3680, 2011. @article{Hu2011, The neural basis of parallel saccade programming was examined in an event-related fMRI study using a variation of the double-step saccade paradigm. Two double-step conditions were used: one enabled the second saccade to be partially programmed in parallel with the first saccade, while in the second condition both saccades had to be prepared serially. The inter-saccadic interval observed in the parallel programming (PP) condition was significantly reduced compared with the latency in the serial programming (SP) condition and with the latency of single saccades in control conditions. The fMRI analysis revealed greater activity (BOLD response) in the frontal and parietal eye fields for the PP condition compared with the SP double-step condition and with the single-saccade control conditions. By contrast, activity in the supplementary eye fields was greater for the double-step conditions than the single-step condition but did not distinguish between the PP and SP requirements. The role of the frontal eye fields in PP may be related to the advanced temporal preparation and increased salience of the second saccade goal that may mediate activity in other downstream structures, such as the superior colliculus. The parietal lobes may be involved in the preparation for spatial remapping, which is required in double-step conditions. The supplementary eye fields appear to have a more general role in planning saccade sequences that may be related to error monitoring and the control over the execution of the correct sequence of responses. |
Yi Ting Huang; Peter C. Gordon Distinguishing the time course of lexical and discourse processes through context, coreference, and quantified expressions Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 4, pp. 966–978, 2011. @article{Huang2011, How does prior context influence lexical and discourse-level processing during real-time language comprehension? Experiment 1 examined whether the referential ambiguity introduced by a repeated, anaphoric expression had an immediate or delayed effect on lexical and discourse processing, using an eye-tracking-while-reading task. Eye movements indicated facilitated recognition of repeated expressions, suggesting that prior context can rapidly influence lexical processing. However, context effects at the discourse level affected later processing, appearing in longer regression-path durations 2 words after the anaphor and in greater rereading times of the antecedent expression. Experiments 2 and 3 explored the nature of this delay by examining the role of the preceding context in activating relevant representations. Offline and online interpretations confirmed that relevant referents were activated following the critical context. Nevertheless, their initial unavailability during comprehension suggests a robust temporal division between lexical and discourse-level processing. |
Yu-feng Huang; Feng-yang Kuo An eye-tracking investigation of internet consumers' decision deliberateness Journal Article In: Internet Research, vol. 21, no. 5, pp. 541–561, 2011. @article{Huang2011a, Purpose – Because presentation formats, i.e. table v. graph, in shopping web sites may promote or inhibit deliberate consumer decision making, it is important to understand the effects of information presentation on deliberateness. This paper seeks to empirically test whether the table format enhances deliberate decision making, while the web map weakens the process. In addition, deliberateness can be influenced by the decision orientation, i.e. emotionally charged or accuracy oriented. Thus, the paper further examines the effect of presentations across these two decision orientations. Design/methodology/approach – Objective and detailed description of the decision process is used to examine the effects. A two (decision orientation: positive emotion v. accuracy) by two (presentation: map v. table) eye-tracking experiment is designed. Deliberateness is quantified with the information processing pattern summarized from eye movement data. Participants are required to make preferential choices from simple decision tasks. Findings – The results confirm that the table strengthens while the map weakens deliberateness. In addition, this effect is mostly evident across the two decision orientations. An explorative factor analysis further reveals that there are two major attention distribution functions (global v. local) underlying the decision process. Research limitations/implications – Only simple decision tasks are used in the present study and therefore complex tasks should be introduced to examine the effects in the future. Practical implications – For consumers, they should become aware that the table facilitates while the map diminishes deliberateness. For web businesses, they may try to strengthen the impulsivity in a web map filled with emotional stimuli. 
Originality/value – This research is one of the first attempts to investigate the joint effects of presentations and decision orientations on decision deliberateness in the internet domain. The eye movement data are also valuable because previous studies seldom provided such detailed description of the decision process. |
Tuomo Häikiö; Raymond Bertram; Jukka Hyönä The development of whole-word representations in compound word processing: Evidence from eye fixation patterns of elementary school children Journal Article In: Applied Psycholinguistics, vol. 32, no. 3, pp. 533–551, 2011. @article{Haeikioe2011, The role of morphology in reading development was examined by measuring participants' eye movements while they read sentences containing either a hyphenated (e.g., ulko-ovi “front door”) or concatenated (e.g., autopeli “racing game”) compound. The participants were Finnish second, fourth, and sixth graders (aged 8, 10, and 12 years, respectively). Fast second graders and all fourth and sixth graders read concatenated compounds faster than hyphenated compounds. This suggests that they resort to slower morpheme-based processing for hyphenated compounds but prefer to process concatenated compounds via whole-word representations. In contrast, slow second graders' fixation durations were shorter for hyphenated than concatenated compounds. This implies that they process all compounds via constituent morphemes and that hyphenation comes to aid in this process. |
Timothy D. Hanks; Mark E. Mazurek; Roozbeh Kiani; Elisabeth Hopp; Michael N. Shadlen Elapsed decision time affects the weighting of prior probability in a perceptual decision task Journal Article In: Journal of Neuroscience, vol. 31, no. 17, pp. 6339–6352, 2011. @article{Hanks2011, Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (1) decisions that linger tend to arise from less reliable evidence, and (2) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal area (LIP) of rhesus monkeys performing this task. |
Albrecht W. Inhoff; Matthew S. Solomon; Ralph Radach; Bradley A. Seymour Temporal dynamics of the eye-voice span and eye movement control during oral reading Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 5, pp. 543–558, 2011. @article{Inhoff2011, The distance between eye movements and articulation during oral reading, commonly referred to as the eye–voice span, has been a classic issue of experimental reading research since Buswell (1921). To examine the influence of the span on eye movement control, synchronised recordings of eye position and speech production were obtained during fluent oral reading. The viewing of a word almost always preceded its articulation, and the interval between the onset of a word's fixation and the onset of its articulation was approximately 500 ms. The identification and articulation of a word were closely coupled, and the fixation–speech interval was regulated through immediate adjustments of word viewing duration, unless the interval was relatively long. In this case, the lag between identification and articulation was often reduced through a regression that moved the eyes back in the text. These results indicate that models of eye movement control during oral reading need to include a mechanism that maintains a close linkage between the identification and articulation of words through continuous oculomotor adjustments. |
Lisa Irmen; Eva Schumann Processing grammatical gender of role nouns: Further evidence from eye movements Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 8, pp. 998–1014, 2011. @article{Irmen2011, Two eye-tracking experiments investigated the effects of masculine versus feminine grammatical gender on the processing of role nouns and on establishing coreference relations. Participants read sentences with the basic structure "My [kinship term] is a [role noun] [prepositional phrase]", such as "My brother is a singer in a band". Role nouns were either masculine or feminine. Kinship terms were lexically male or female and in this way specified referent gender, i.e., the sex of the person referred to. Experiment 1 tested a fully crossed design including items with an incorrect combination of lexically male kinship term and feminine role noun. Experiment 2 tested only correct combinations of grammatical and lexical/referential gender to control for possible effects of the incorrect items of Experiment 1. In early stages of processing, feminine role nouns, but not masculine ones, were fixated longer when grammatical and referential gender were contradictory. In later stages of sentence wrap-up there were longer fixations for sentences with masculine than for those with feminine role nouns. Results of both experiments indicate that, for feminine role nouns, cues to referent gender are integrated immediately, whereas a late integration obtains for masculine forms. |
David E. Irwin Where does attention go when you blink? Journal Article In: Attention, Perception, & Psychophysics, vol. 73, no. 5, pp. 1374–1384, 2011. @article{Irwin2011, Many studies have shown that covert visual attention precedes saccadic eye movements to locations in space. The present research investigated whether the allocation of attention is similarly affected by eye blinks. Subjects completed a partial-report task under blink and no-blink conditions. Experiment 1 showed that blinking facilitated report of the bottom row of the stimulus array: Accuracy for the bottom row increased and mislocation errors decreased under blink, as compared with no-blink, conditions, indicating that blinking influenced the allocation of visual attention. Experiment 2 showed that this was true even when subjects were biased to attend elsewhere. These results indicate that attention moves downward before a blink in an involuntary fashion. The eyes also move downward during blinks, so attention may precede blink-induced eye movements just as it precedes saccades and other types of eye movements. |
L. Issen; Krystel R. Huxlin; David C. Knill Spatial integration of optic flow information in direction of heading judgments Journal Article In: Journal of Vision, vol. 11, no. 6, pp. 1–16, 2011. @article{Issen2011, While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world. |
Stephanie Jainta The pupil reflects motor preparation for saccades – even before the eye starts to move Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 97, 2011. @article{Jainta2011, The eye produces saccadic eye movements whose reaction times are perhaps the shortest in humans. Saccade latencies reflect ongoing cortical processing and, generally, shorter latencies are supposed to reflect advanced motor preparation. The dilation of the eye's pupil is reported to reflect cortical processing as well. Eight participants made saccades in a gap and overlap paradigm (in pure and mixed blocks), which we used in order to produce a variety of different saccade latencies. Saccades and pupil size were measured with the EyeLink II. The pattern in pupil dilation resembled that of a gap effect: for gap blocks, pupil dilations were larger compared to overlap blocks; mixing gap and overlap trials reduced the pupil dilation for gap trials thereby inducing a switching cost. Furthermore, saccade latencies across all tasks predicted the magnitude of pupil dilations post hoc: the longer the saccade latency the smaller the pupil dilation before the eye actually began to move. In accordance with observations for manual responses, we conclude that pupil dilations prior to saccade execution reflect advanced motor preparations and therefore provide valid indicator qualities for ongoing cortical processes. |
Stephanie Jainta; Anne Dehnert; Sven P. Heinrich; Wolfgang Jaschinski Binocular coordination during reading of blurred and nonblurred text Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 13, pp. 9416–9424, 2011. @article{Jainta2011a, PURPOSE: Reading a text requires vergence angle adjustments, so that the images in the two eyes fall on corresponding retinal areas. Vergence adjustments bring the two retinal images into Panum's fusional area and therefore, small remaining errors or regulations do not lead to double vision. The present study evaluated dynamic and static aspects of the binocular coordination when upcoming text was blurred. METHODS: Binocular eye movements and accommodation responses were simultaneously measured for 20 participants while reading single, nonblurred sentences and while the text was blurred as if it were seen by a person in whom the combination of refraction and accommodation deviated from the stimulus plane by 0.5 D. RESULTS: Text comprehension did not change, even though fixation times increased for reading blurred sentences. The disconjugacy during saccades was also not affected by blurred text presentations, but the vergence adjustment during fixations was reduced. Further, for blurred text, the overall vergence angle shifted in the exo direction, and this shift correlated with the individual heterophoria. Accommodation measures showed that the lag of accommodation was slightly larger for reading blurred sentences and that the shift in vergence angle was larger when the individual lag of accommodation was also larger. CONCLUSIONS: The results suggest that reading comprehension is robust against changes in binocular coordination that result from moderate text degradation; nevertheless, these changes are likely to be linked to the development of fatigue and visual strain in near reading conditions. |
Sarah R. Heilbronner; Benjamin Y. Hayden; Michael L. Platt Decision salience signals in posterior cingulate cortex Journal Article In: Frontiers in Neuroscience, vol. 5, pp. 55, 2011. @article{Heilbronner2011, Despite its phylogenetic antiquity and clinical importance, the posterior cingulate cortex (CGp) remains an enigmatic nexus of attention, memory, motivation, and decision making. Here we show that CGp neurons track decision salience - the degree to which an option differs from a standard - but not the subjective value of a decision. To do this, we recorded the spiking activity of CGp neurons in monkeys choosing between options varying in reward-related risk, delay to reward, and social outcomes, each of which varied in level of decision salience. Firing rates were higher when monkeys chose the risky option, consistent with their risk-seeking preferences, but were also higher when monkeys chose the delayed and social options, contradicting their preferences. Thus, across decision contexts, neuronal activity was uncorrelated with how much monkeys valued a given option, as inferred from choice. Instead, neuronal activity signaled the deviation of the chosen option from the standard, independently of how it differed. The observed decision salience signals suggest a role for CGp in the flexible allocation of neural resources to motivationally significant information, akin to the role of attention in selective processing of sensory inputs. |
Stephen J. Heinen; Z. Jin; Scott N. J. Watamaniuk Flexibility of foveal attention during ocular pursuit Journal Article In: Journal of Vision, vol. 11, no. 2, pp. 1–12, 2011. @article{Heinen2011, Smooth pursuit of natural objects requires flexible allocation of attention to inspect features. However, it has been reported that attention is focused at the fovea during pursuit. We ask here if foveal attention is obligatory during pursuit, or if it can be disengaged. Observers tracked a stimulus composed of a central dot surrounded by four others and identified one of the dots when it dimmed. Extinguishing the center dot before the dimming improved task performance, suggesting that attention was released from it. To determine if the center dot automatically usurped attention, we provided the pursuit system with an alternative sensory signal by adding peripheral motion that moved with the stimulus. This also improved identification performance, evidence that a central target does not necessarily require attention during pursuit. Identification performance at the central dot also improved, suggesting that the spatial extent of the background did not attract attention to the periphery; instead, peripheral motion freed pursuit attention from the central dot, affording better identification performance. The results show that attention can be flexibly allocated during pursuit and imply that attention resources for pursuit of small and large objects come from different sources. |
Jennifer J. Heisz; Jennifer D. Ryan The effects of prior exposure on face processing in younger and older adults Journal Article In: Frontiers in Aging Neuroscience, vol. 3, pp. 15, 2011. @article{Heisz2011, Older adults differ from their younger counterparts in the way they view faces. We assessed whether older adults can use past experience to mitigate these typical face-processing differences; that is, we examined whether there are age-related differences in the use of memory to support current processing. Eye movements of older and younger adults were monitored as they viewed faces that varied in the type/amount of prior exposure. Prior exposure was manipulated by including famous and novel faces, and by presenting faces up to five times. We expected that older adults may have difficulty quickly establishing new representations to aid in the processing of recently presented faces, but would be able to invoke face representations that have been stored in memory long ago to aid in the processing of famous faces. Indeed, younger adults displayed effects of recent exposure with a decrease in the total fixations to the faces and a gradual increase in the proportion of fixations to the eyes. These effects of recent exposure were largely absent in older adults. In contrast, the effect of fame, revealed by a subtle increase in fixations to the inner features of famous compared to non-famous faces, was similar for younger and older adults. Our results suggest that older adults' current processing can benefit from lifetime experience; however, the full benefit of recent experience on face processing is not realized in older adults. |
Richard W. Hertle; Dongsheng Yang; Kenneth Adams; Roxanne Caterino Surgery for the treatment of vertical head posturing associated with infantile nystagmus syndrome: Results in 24 patients Journal Article In: Clinical and Experimental Ophthalmology, vol. 39, no. 1, pp. 37–46, 2011. @article{Hertle2011, Background: The study of the clinical and electrophysiological effects of eye muscle surgery on patients with infantile nystagmus has broadened our knowledge of the disease and its interventions. Design: Prospective, comparative, interventional case series. Participants: Twenty-four patients with a vertical head posture because of electrophysiologically diagnosed infantile nystagmus syndrome. The ages ranged from 2.5 to 38 years and follow up averaged 14.0 months. Methods: Thirteen patients with a chin-down posture had a bilateral superior rectus recession, inferior oblique myectomy and a horizontal rectus recession or tenotomy. Those 11 with a chin-up posture had a bilateral superior oblique tenectomy, inferior rectus recession and a horizontal rectus recession or tenotomy. Main Outcome Measures: Outcome measures included demography, eye/systemic conditions, and the following preoperative and postoperative measures: binocular and best optically corrected null zone acuity, head posture, null zone foveation time and nystagmus waveform changes. Results: Associated conditions were strabismus in 66%, ametropia in 96%, amblyopia in 46% and optic nerve, foveal dysplasia or albinism in 54%. Null zone acuity increased at least 0.1 logMAR in 20 patients (P < 0.05 group mean change). Patients had significant (P < 0.05) improvements in degrees of head posture, average foveation time in milliseconds and infantile nystagmus syndrome waveform improvements. Conclusions: This study illustrates a successful surgical approach to treatment and provides expectations of ocular motor and visual results after vertical head posture surgery because of an eccentric gaze null in patients with infantile nystagmus syndrome. |
Arvid Herwig; Gernot Horstmann Action-effect associations revealed by eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 3, pp. 531–537, 2011. @article{Herwig2011, We move our eyes not only to get information, but also to supply information to our fellows. The latter eye movements can be considered as goal-directed actions to elicit changes in our counterparts. In two eye-tracking experiments, participants looked at neutral faces that changed facial expression 100 ms after the gaze fell upon them. We show that participants anticipate a change in facial expression and direct their first saccade more often to the mouth region of a neutral face about to change into a happy one and to the eyebrows region of a neutral face about to change into an angry expression. Moreover, saccades in response to facial expressions are initiated more quickly to the position where the expression was previously triggered. Saccade-effect associations are easily acquired and are used to guide the eyes if participants freely select where to look next (Experiment 1), but not if saccades are triggered by external stimuli (Experiment 2). |
Till S. Hartmann; Frank Bremmer; Thomas D. Albright; Bart Krekelberg Receptive field positions in area MT during slow eye movements Journal Article In: Journal of Neuroscience, vol. 31, no. 29, pp. 10437–10444, 2011. @article{Hartmann2011, Perceptual stability requires the integration of information across eye movements. We first tested the hypothesis that motion signals are integrated by neurons whose receptive fields (RFs) do not move with the eye but stay fixed in the world. Specifically, we measured the RF properties of neurons in the middle temporal area (MT) of macaques (Macaca mulatta) during the slow phase of optokinetic nystagmus. Using a novel method to estimate RF locations for both spikes and local field potentials, we found that the location on the retina that changed spike rates or local field potentials did not change with eye position; RFs moved with the eye. Second, we tested the hypothesis that neurons link information across eye positions by remapping the retinal location of their RFs to future locations. To test this, we compared RF locations during leftward and rightward slow phases of optokinetic nystagmus. We found no evidence for remapping during slow eye movements; the RF location was not affected by eye-movement direction. Together, our results show that RFs of MT neurons and the aggregate activity reflected in local field potentials are yoked to the eye during slow eye movements. This implies that individual MT neurons do not integrate sensory information from a single position in the world across eye movements. Future research will have to determine whether such integration, and the construction of perceptual stability, takes place in the form of a distributed population code in eye-centered visual cortex or is deferred to downstream areas. |
Andreas Hartwig; W. Neil Charman; Hema Radhakrishnan Accommodative response to peripheral stimuli in myopes and emmetropes Journal Article In: Ophthalmic and Physiological Optics, vol. 31, no. 1, pp. 91–99, 2011. @article{Hartwig2011, Purpose: It has been suggested that peripheral refractive error may influence eye growth and the development of axial refractive error, implying that the peripheral retina is sensitive to defocus. This study aimed to evaluate the steady-state accommodative response to peripheral stimuli in 10 young, adult myopes (mean spherical equivalent error -2.10 ± 1.72 D, median -1.63 D, range -0.83 to -6.00 D) and 10 emmetropes (mean spherical equivalent error -0.02 ± 0.35 D, median +0.08 D, range -0.50 to +0.50 D). Methods: The subjects were asked to view monocularly the centre of a screen displaying each of a series of eccentric accommodative targets placed at 5, 10 and 15°. An axial target was viewed for comparison purposes. Accommodation was measured using an open-field autorefractor, each stimulus being varied between about 0 and 4 D with spherical trial lenses placed in front of the viewing eye. Results: The results confirm that the peripheral retina is sensitive to optical focus, up to field angles of at least 15°, with accommodative responses weakening as the peripheral angle increases. There is some evidence that peripheral accommodation may be less effective in myopes than emmetropes. Conclusions: Although peripheral accommodation can be demonstrated in the absence of a central stimulus, the accommodation response is normally dominated by the central stimulus and it seems unlikely that peripheral accommodation effects play an important role in refractive development. |
Andreas Hartwig; Emma Gowen; W. Neil Charman; Hema Radhakrishnan Analysis of head position used by myopes and emmetropes when performing a near-vision reading task Journal Article In: Vision Research, vol. 51, no. 14, pp. 1712–1717, 2011. @article{Hartwig2011b, The aim of the study was to compare head posture in young, adult emmetropes and corrected myopes during a reading task. Thirty-two myopes (mean spherical equivalent: -3.46 ± 2.35 D) and 22 emmetropes (mean spherical equivalent: -0.03 ± 0.36 D) participated in the study. Of the myopes, 16 were progressing (rate of progression ≥ -0.5 D over the previous 2 years), 12 were stable (changes of -0.25 D or less over 2 years) and four could not be classified. Seated subjects were asked to read a text binocularly in their habitual posture. To measure head posture, two simultaneous images were recorded from different directions. In a separate study with the same subjects and conditions, a motion monitor was used to track head posture for 1 min. The habitual reading distance was measured in both studies, together with the stereoscopic acuity and fixation disparity for each subject. The results of the photographic study showed no significant differences in head posture or reading distance between the myopic and emmetropic groups (p > 0.05), but there was some evidence that downward pitch angles were greater in progressing myopes than in non-progressing myopes (p = 0.03). No correlations were observed between the binocular parameters and head posture. Reading distances were systematically shorter with the helmet-mounted eye tracker, and it was concluded that posture was affected by the weight of the equipment. With this reservation, it appeared that the rate of change of downward pitch angle over the 1-min recording session increased with the subject's rate of myopia progression (correlation between myopia progression and slope of pitch: r2 = -0.69). |