Nadia Alahyane; Anne-Dominique Devauchelle; Roméo Salemme; Denis Pélisson
Spatial transfer of adaptation of scanning voluntary saccades in humans Journal Article
In: Neuroreport, vol. 19, no. 1, pp. 37–41, 2008.
The properties and neural substrates of the adaptive mechanisms that maintain over time the accuracy of voluntary, internally triggered saccades are still poorly understood. Here, we used transfer tests to evaluate the spatial properties of adaptation of scanning voluntary saccades. We found that an adaptive reduction of the size of a horizontal rightward 7 degrees saccade transferred to other saccades of a wide range of amplitudes and directions. This transfer decreased as tested saccades increasingly differed in amplitude or direction from the trained saccade, being null for vertical and leftward saccades. Voluntary saccade adaptation thus presents bounded, but large adaptation fields, suggesting that at least part of the underlying neural substrate encodes saccades as vectors.
Ensar Becic; Walter R. Boot; Arthur F. Kramer
Training older adults to search more effectively: Scanning strategy and visual search in dynamic displays Journal Article
In: Psychology and Aging, vol. 23, no. 2, pp. 461–466, 2008.
The authors examined the ability of older adults to modify their search strategies to detect changes in dynamic displays. Older adults who made few eye movements during search (i.e., covert searchers) were faster and more accurate compared with individuals who made many eye movements (i.e., overt searchers). When overt searchers were instructed to adopt a covert search strategy, target detection performance increased to the level of natural covert searchers. Similarly, covert searchers instructed to search overtly exhibited a decrease in target detection performance. These data suggest that with instructions and minimal practice, older adults can ameliorate the cost of a poor search strategy.
Mark W. Becker; Ian P. Rasmussen
Guidance of attention to objects and locations by long-term memory of natural scenes Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 6, pp. 1325–1338, 2008.
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.
Eva Belke; Glyn W. Humphreys; Derrick G. Watson; Antje S. Meyer; Anna L. Telling
Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load Journal Article
In: Perception and Psychophysics, vol. 70, no. 8, pp. 1444–1458, 2008.
Moores, Laiti, and Chelazzi (2003) found semantic interference from associate competitors during visual object search, demonstrating the existence of top-down semantic influences on the deployment of attention to objects. We examined whether effects of semantically related competitors (same-category members or associates) interacted with the effects of perceptual or cognitive load. We failed to find any interaction between competitor effects and perceptual load. However, the competitor effects increased significantly when participants were asked to retain one or five digits in memory throughout the search task. Analyses of eye movements and viewing times showed that a cognitive load did not affect the initial allocation of attention but rather the time it took participants to accept or reject an object as the target. We discuss the implications of our findings for theories of conceptual short-term memory and visual attention.
Naseem Al-aidroos; Jos J. Adam; Martin H. Fischer; Jay Pratt
Structured perceptual arrays and the modulation of Fitts's Law: Examining saccadic eye movements Journal Article
In: Journal of Motor Behavior, vol. 40, no. 2, pp. 155–164, 2008.
On the basis of recent observations of a modulation of Fitts's law for manual pointing movements in structured visual arrays (J. J. Adam, R. Mol, J. Pratt, & M. H. Fischer, 2006; J. Pratt, J. J. Adam, & M. H. Fischer, 2007), the authors examined whether a similar modulation occurs for saccadic eye movements. Healthy participants (N = 19) made horizontal saccades to targets that appeared randomly in 1 of 4 positions, either on an empty background or within 1 of 4 placeholder boxes. Whereas in previous studies, placeholders caused a decrease in movement time (MT) without the normal decrease in movement accuracy predicted by Fitts's law, placeholders in the present experiment increased saccadic accuracy (decreased endpoint variability) without an increase in MT. The present results extend the findings of J. J. Adam et al. of a modulation of Fitts's law from the temporal domain to the spatial domain and from manual movements to eye movements.
Britt Anderson; Ryan E. B. Mruczek; Keisuke Kawasaki; David L. Sheinberg
Effects of familiarity on neural activity in monkey inferior temporal lobe Journal Article
In: Cerebral Cortex, vol. 18, no. 11, pp. 2540–2552, 2008.
Long-term familiarity facilitates recognition of visual stimuli. To better understand the neural basis for this effect, we measured the local field potential (LFP) and multiunit spiking activity (MUA) from the inferior temporal (IT) lobe of behaving monkeys in response to novel and familiar images. In general, familiar images evoked larger amplitude LFPs whereas MUA responses were greater for novel images. Familiarity effects were attenuated by image rotations in the picture plane of 45 degrees. Decreasing image contrast led to more pronounced decreases in LFP response magnitude for novel, compared with familiar images, and resulted in more selective MUA response profiles for familiar images. The shape of individual LFP traces could be used for stimulus classification, and classification performance was better for the familiar image category. Recording the visual and auditory evoked LFP at multiple depths showed significant alterations in LFP morphology with distance changes of 2 mm. In summary, IT cortex shows local processing differences for familiar and novel images at a time scale and in a manner consistent with the observed behavioral advantage for classifying familiar images and rapidly detecting novel stimuli.
Britt Anderson; David L. Sheinberg
Effects of temporal context and temporal expectancy on neural activity in inferior temporal cortex Journal Article
In: Neuropsychologia, vol. 46, no. 4, pp. 947–957, 2008.
Timing is critical. The same event can mean different things at different times and some events are more likely to occur at one time than another. We used a cued visual classification task to evaluate how changes in temporal context affect neural responses in inferior temporal cortex, an extrastriate visual area known to be involved in object processing. On each trial a first image cued a temporal delay before a second target image appeared. The animal's task was to classify the second image by pressing one of two buttons previously associated with that target. All images were used as both cues and targets. Whether an image cued a delay time or signaled a button press depended entirely upon whether it was the first or second picture in a trial. This paradigm allowed us to compare inferior temporal cortex neural activity to the same image subdivided by temporal context and expectation. Neuronal spiking was more robust and visually evoked local field potentials (LFPs) larger for target presentations than for cue presentations. On invalidly cued trials, when targets appeared unexpectedly early, the magnitude of the evoked LFP was reduced and delayed and neuronal spiking was attenuated. Spike field coherence increased in the beta-gamma frequency range for expected targets. In conclusion, different neural responses in higher order ventral visual cortex may occur for the same visual image based on manipulations of temporal attention.
Elaine J. Anderson; Sabira K. Mannan; Geraint Rees; Petroc Sumner; Christopher Kennard
A role for spatial and nonspatial working memory processes in visual search Journal Article
In: Experimental Psychology, vol. 55, no. 5, pp. 301–312, 2008.
Searching a cluttered visual scene for a specific item of interest can take several seconds to perform if the target item is difficult to discriminate from surrounding items. Whether working memory processes are utilized to guide the path of attentional selection during such searches remains under debate. Previous studies have found evidence to support a role for spatial working memory in inefficient search, but the role of nonspatial working memory remains unclear. Here, we directly compared the role of spatial and nonspatial working memory for both an efficient and inefficient search task. In Experiment 1, we used a dual-task paradigm to investigate the effect of performing visual search within the retention interval of a spatial working memory task. Importantly, by incorporating two working memory loads (low and high) we were able to make comparisons between dual-task conditions, rather than between dual-task and single-task conditions. This design allows any interference effects observed to be attributed to changes in memory load, rather than to nonspecific effects related to "dual-task" performance. We found that the efficiency of the inefficient search task declined as spatial memory load increased, but that the efficient search task remained efficient. These results suggest that spatial memory plays an important role in inefficient but not efficient search. In Experiment 2, participants performed the same visual search tasks within the retention interval of visually matched spatial and verbal working memory tasks. Critically, we found comparable dual-task interference between inefficient search and both the spatial and nonspatial working memory tasks, indicating that inefficient search recruits working memory processes common to both domains.
Bernhard Angele; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner
Parafoveal processing in reading: Manipulating n+1 and n+2 previews simultaneously Journal Article
In: Visual Cognition, vol. 16, no. 6, pp. 697–707, 2008.
The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading.
Jennifer E. Arnold
THE BACON not the bacon: How children and adults understand accented and unaccented noun phrases Journal Article
In: Cognition, vol. 108, no. 1, pp. 69–99, 2008.
Two eye-tracking experiments examine whether adults and 4- and 5-year-old children use the presence or absence of accenting to guide their interpretation of noun phrases (e.g., the bacon) with respect to the discourse context. Unaccented nouns tend to refer to contextually accessible referents, while accented variants tend to be used for less accessible entities. Experiment 1 confirms that accenting is informative for adults, who show a bias toward previously-mentioned objects beginning 300 ms after the onset of unaccented nouns and pronouns. But contrary to findings in the literature, accented words produced no observable bias. In Experiment 2, 4- and 5-year-olds were also biased toward previously-mentioned objects with unaccented nouns and pronouns. This builds on findings of limits on children's on-line reference comprehension [Arnold, J. E., Brown-Schmidt, S., & Trueswell, J. C. (2007). Children's use of gender and order-of-mention during pronoun comprehension. Language and Cognitive Processes], showing that children's interpretation of unaccented nouns and pronouns is constrained in contexts with one single highly accessible object.
Jennifer E. Arnold; Shin-Yi C. Lao
Put in last position something previously unmentioned: Word order effects on referential expectancy and reference comprehension Journal Article
In: Language and Cognitive Processes, vol. 23, no. 2, pp. 282–295, 2008.
Research has shown that the comprehension of definite referring expressions (e.g., "the triangle") tends to be faster for "given" (previously mentioned) referents, compared with new referents. This has been attributed to the presence of given information in the consciousness of discourse participants (e.g., Chafe, 1994) suggesting that given is always more accessible. By contrast, we find a bias toward new referents during the on-line comprehension of the direct object in heavy-NP-shifted word orders, e.g., "Put on the star the...." This order tends to be used for new direct objects; canonical unshifted orders are more common with given direct objects. Thus, word order provides probabilistic information about the givenness or newness of the direct object. Results from eyetracking and gating experiments show that the traditional given bias only occurs with unshifted orders; with heavy-NP-shifted orders, comprehenders expect the object to be new, and comprehension for new referents is facilitated.
Hillel Aviezer; Ran R. Hassin; Jennifer D. Ryan; Cheryl L. Grady; Josh Susskind; Adam Anderson; Morris Moscovitch; Shlomo Bentin
Angry, disgusted, or afraid? Studies on the malleability of emotion perception Journal Article
In: Psychological Science, vol. 19, no. 7, pp. 724–732, 2008.
Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly "read out" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels.
Gary D. Bond
Deception detection expertise Journal Article
In: Law and Human Behavior, vol. 32, no. 4, pp. 339–351, 2008.
A lively debate between Bond and Uysal (2007, Law and Human Behavior, 31, 109-115) and O'Sullivan (2007, Law and Human Behavior, 31, 117-123) concerns whether there are experts in deception detection. Two experiments sought to (a) identify expert(s) in detection and assess them twice with four tests, and (b) study their detection behavior using eye tracking. Paroled felons produced videotaped statements that were presented to students and law enforcement personnel. Two experts were identified, both female Native American BIA correctional officers. Experts were over 80% accurate in the first assessment, and scored at 90% accuracy in the second assessment. In Signal Detection analyses, experts showed high discrimination, and did not evidence biased responding. They exploited nonverbal cues to make fast, accurate decisions. These highly-accurate individuals can be characterized as experts in deception detection.
Verena S. Bonitz; Robert D. Gordon
Attention to smoking-related and incongruous objects during scene viewing Journal Article
In: Acta Psychologica, vol. 129, no. 2, pp. 255–263, 2008.
This study examined the influences of semantic characteristics of objects in real-world scenes on allocation of attention as reflected in eye movement measures. Stimuli consisted of full-color photographic scenes, and within each scene, the semantic salience of two target objects was manipulated while the objects' perceptual salience was kept constant. One of the target objects was either inconsistent or consistent with the scene category. In addition, the second target object was either smoking-related or neutral. Two groups of college students, namely current cigarette smokers (N = 18) and non-smokers (N = 19), viewed each scene for 10 s while their eye movements were recorded. While both groups showed preferential allocation of attention to inconsistent objects, smokers also selectively attended to smoking-related objects. Theoretical implications of the results are discussed.
Manon W. Jones; Mateo Obregón; M. Louise Kelly; Holly P. Branigan
Elucidating the component processes involved in dyslexic and non-dyslexic reading fluency: An eye-tracking study Journal Article
In: Cognition, vol. 109, no. 3, pp. 389–407, 2008.
The relationship between rapid automatized naming (RAN) and reading fluency is well documented (see Wolf, M. & Bowers, P.G. (1999). The double-deficit hypothesis for the developmental dyslexias. Journal of Educational Psychology, 91(3), 415-438, for a review), but little is known about which component processes are important in RAN, and why developmental dyslexics show longer latencies on these tasks. Researchers disagree as to whether these delays are caused by impaired phonological processing or whether extra-phonological processes also play a role (e.g., Clarke, P., Hulme, C., & Snowling, M. (2005). Individual differences in RAN and reading: A response timing analysis. Journal of Research in Reading, 28(2), 73-86; Wolf, M., Bowers, P.G., & Biddle, K. (2000). Naming-speed processes, timing, and reading: A conceptual review. Journal of learning disabilities, 33(4), 387-407). We conducted an eye-tracking study that manipulated phonological and visual information (as representative of extra-phonological processes) in RAN. Results from linear mixed (LME) effects analyses showed that both phonological and visual processes influence naming-speed for both dyslexic and non-dyslexic groups, but the influence on dyslexic readers is greater. Moreover, dyslexic readers' difficulties in these domains primarily emerge in a measure that explicitly includes the production phase of naming. This study elucidates processes underpinning RAN performance in non-dyslexic readers and pinpoints areas of difficulty for dyslexic readers. We discuss these findings with reference to phonological and extra-phonological hypotheses of naming-speed deficits.
C. -H. Juan; Neil G. Muggleton; Ovid J. L. Tzeng; D. L. Hung; A. Cowey; Vincent Walsh
Segregation of visual selection and saccades in human frontal eye fields Journal Article
In: Cerebral Cortex, vol. 18, no. 10, pp. 2410–2415, 2008.
The premotor theory of attention suggests that target processing and generation of a saccade to the target are interdependent. Temporally precise transcranial magnetic stimulation (TMS) was delivered over the human frontal eye fields, the area most frequently associated with the premotor theory in association with eye movements, while subjects performed a visually instructed pro-/antisaccade task. Visual analysis and saccade preparation were clearly separated in time, as indicated by 2 distinct time points of TMS delivery that resulted in elevated saccade latencies. These results show that visual analysis and saccade preparation, although frequently enacted together, are dissociable processes.
Johanna K. Kaakinen; Jukka Hyönä
Perspective-driven text comprehension Journal Article
In: Applied Cognitive Psychology, vol. 22, pp. 319–334, 2008.
The present article reports results of an eye-tracking experiment, which examines whether the perspective-driven text comprehension framework applies to comprehension of narrative text. Sixty-four participants were instructed to adopt either a burglar's or an interior designer's perspective. A pilot test showed that readers have more overlapping prior knowledge with the burglar-relevant than with the interior designer-relevant information of the experimental text. Participants read either a transparent text version where the (ir)relevance of text segments to the perspective was made apparent, or an opaque text version where no direct mention of the perspective was made. After reading, participants wrote a free recall of the text. The results showed that perspective-related prior knowledge modulates the perspective effects observed in on-line text processing and that signalling of (ir)relevance helps in encoding relevant information to memory. It is concluded that the proposed framework generalizes to the on-line comprehension of narrative texts.
Edward L. Keller; Kyoung-Min Lee; Se-Woong Park; Jessica A. Hill
Effect of inactivation of the cortical frontal eye field on saccades generated in a choice response paradigm Journal Article
In: Journal of Neurophysiology, vol. 100, no. 5, pp. 2726–2737, 2008.
Previous studies using muscimol inactivations in the frontal eye fields (FEFs) have shown that saccades generated by recall from working memory are eliminated by these lesions, whereas visually guided saccades are relatively spared. In these experiments, we made reversible inactivations in FEFs in alert macaque monkeys and examined the effect on saccades in a choice response task. Our task required monkeys to learn arbitrary pairings between colored stimuli and saccade direction. Following inactivations, the percentage of choice errors increased as a function of the number of alternative (NA) pairings. In contrast, the percentage of dysmetric saccades (saccades that landed in the correct quadrant but were inaccurate) did not vary with NA. Saccade latency increased postlesion but did not increase with NA. We also made simultaneous inactivations in both FEFs. The results following bilateral lesions showed approximately twice as many choice errors. We conclude that the FEFs are involved in the generation of saccades in choice response tasks. The dramatic effect of NA on choice errors, but the lack of an effect of NA on motor errors or response latency, suggests that two types of processing are interrupted by FEF lesions. The first involves the formation of a saccadic intention vector from associate memory inputs, and the second, the execution of the saccade from the intention vector. An alternative interpretation of the first result is that a role of the FEFs may be to suppress incorrect responses. The doubling of choice errors following bilateral FEF lesions suggests that the effect of unilateral lesions is not caused by a general inhibition of the lesioned side by the intact side.
Chantal Kemner; Lizet Ewijk; Herman Engeland; Ignace T. C. Hooge
Brief report: Eye movements during visual search tasks indicate enhanced stimulus discriminability in subjects with PDD Journal Article
In: Journal of Autism and Developmental Disorders, vol. 38, no. 3, pp. 553–558, 2008.
Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements during visual search tasks in high functioning adult men with PDD and a control group. Subjects with PDD were significantly faster than controls in these tasks, replicating earlier findings in children. Eye movement data showed that subjects with PDD made fewer eye movements than controls. No evidence was found for a different search strategy between the groups. The data indicate an enhanced ability to discriminate between stimulus elements in PDD.
Dirk Kerzel; Angélique Gauch; Blandine Ulmann
Local motion inside an object affects pointing less than smooth pursuit Journal Article
In: Experimental Brain Research, vol. 191, no. 2, pp. 187–195, 2008.
During smooth pursuit eye movements, briefly presented objects are mislocalized in the direction of motion. It has been proposed that the localization error is the sum of the pursuit signal and the retinal motion signal in a ~200 ms interval after flash onset. To evaluate contributions of retinal motion signals produced by the entire object (global motion) and elements within the object (local motion), we asked observers to reach to flashed Gabor patches (Gaussian-windowed sine-wave gratings). Global motion was manipulated by varying the duration of a stationary flash, and local motion was manipulated by varying the motion of the sine-wave. Our results confirm that global retinal motion reduces the localization error. The effect of local retinal motion on object localization was far smaller, even though local and global motion had equal effects on eye velocity. Thus, local retinal motion has differential access to manual and oculomotor control circuits. Further, we observed moderate correlations between smooth pursuit gain and localization error.
Dirk Kerzel; David Souto; Nathalie E. Ziegler
Effects of attention shifts to stationary objects during steady-state smooth pursuit eye movements Journal Article
In: Vision Research, vol. 48, no. 7, pp. 958–969, 2008.
A number of studies have shown that stationary backgrounds compromise smooth pursuit eye movements. It has been suggested that poor attentional selection of the pursuit target was responsible for reductions of pursuit gain. To quantify the detrimental effects of attention, we instructed observers to either pay attention to background objects or to ignore them. The to-be-attended object was indicated by peripheral or central cues. Strong reductions of pursuit gain occurred when the following conditions were met: (a) the subject paid attention to the object, (b) a salient event was present, for instance the onset of the target or cue, and (c) the attended target produced retinal motion. Removing any of the three conditions resulted in no or far smaller decreases of pursuit gain. Further, decreases in pursuit gain were present with perceptual discrimination and simple manual detection.
Christof Körner; Iain D. Gilchrist
Memory processes in multiple-target visual search Journal Article
In: Psychological Research, vol. 72, no. 1, pp. 99–105, 2008.
Gibson, Li, Skow, Brown, and Cooke (Psychological Science, 11, 324–327, 2000) had participants carry out a search task in which they were required to detect the presence of one or two targets. In order to successfully perform such a multiple-target visual search task, participants had to remember the location of the first target while searching for the second target. In two experiments we investigated the cost of remembering this target location. In Experiment 1, we compared performance on the Gibson et al. task with performance on a more conventional present–absent search task. The comparison suggests a substantial performance cost as measured by reaction time, number of fixations and slope of the search functions. In Experiment 2, we looked in detail at refixations of distractors, which are a direct measure of attentional deployment. We demonstrated that the cost in this multiple-target visual search task was due to an increased number of refixations on previously visited distractors. Such refixations were present right from the start of the search. This change in search behaviour may be caused by the necessity of having to remember a target: allocating memory for the upcoming target may consume memory capacity that may otherwise be available for the tagging of distractors. These results support the notion of limited capacity memory processes in search.
Stuart Jackson; Fred Cummins; Nuala Brady
Rapid perceptual switching of a reversible biological figure Journal Article
In: PLoS ONE, vol. 3, no. 12, pp. e3982, 2008.
Certain visual stimuli can give rise to contradictory perceptions. In this paper we examine the temporal dynamics of perceptual reversals experienced with biological motion, comparing these dynamics to those observed with other ambiguous structure from motion (SFM) stimuli. In our first experiment, naïve observers monitored perceptual alternations with an ambiguous rotating walker, a figure that randomly alternates between walking in clockwise (CW) and counter-clockwise (CCW) directions. While the number of reported reversals varied between observers, the observed dynamics (distribution of dominance durations, CW/CCW proportions) were comparable to those experienced with an ambiguous kinetic depth cylinder. In a second experiment, we compared reversal profiles with rotating and standard point-light walkers (i.e. non-rotating). Over multiple test repetitions, three out of four observers experienced consistently shorter mean percept durations with the rotating walker, suggesting that the added rotational component may speed up reversal rates with biomotion. For both stimuli, the drift in alternation rate across trial and across repetition was minimal. In our final experiment, we investigated whether reversals with the rotating walker and a non-biological object with similar global dimensions (rotating cuboid) occur at random phases of the rotation cycle. We found evidence that some observers experience peaks in the distribution of response locations that are relatively stable across sessions. Using control data, we discuss the role of eye movements in the development of these reversal patterns, and the related role of exogenous stimulus characteristics. In summary, we have demonstrated that the temporal dynamics of reversal with biological motion are similar to other forms of ambiguous SFM. We conclude that perceptual switching with biological motion is a robust bistable phenomenon.
Peter Janssen; Siddharth Srivastava; Sien Ombelet; Guy A. Orban
Coding of shape and position in macaque lateral intraparietal area Journal Article
In: Journal of Neuroscience, vol. 28, no. 26, pp. 6679–6690, 2008.
The analysis of object shape is critical for both object recognition and grasping. Areas in the intraparietal sulcus of the rhesus monkey are important for the visuomotor transformations underlying actions directed toward objects. The lateral intraparietal (LIP) area has strong anatomical connections with the anterior intraparietal area, which is known to control the shaping of the hand during grasping, and LIP neurons can respond selectively to simple two-dimensional shapes. Here we investigate the shape representation in area LIP of awake rhesus monkeys. Specifically, we determined to what extent LIP neurons are tuned to shape dimensions known to be relevant for grasping and assessed the invariance of their shape preferences with regard to changes in stimulus size and position in the receptive field. Most LIP neurons proved to be significantly tuned to multiple shape dimensions. The population of LIP neurons that were tested showed barely significant size invariance. Position invariance was present in a minority of the neurons tested. Many LIP neurons displayed spurious shape selectivity arising from accidental interactions between the stimulus and the receptive field. We observed pronounced differences in the receptive field profiles determined by presenting two different shapes. Almost all LIP neurons showed spatially selective saccadic activity, but the receptive field for saccades did not always correspond to the receptive field as determined using shapes. Our results demonstrate that a subpopulation of LIP neurons encodes stimulus shape. Furthermore, the shape representation in the dorsal visual stream appears to differ radically from the known representation of shape in the ventral visual stream.
Wolfgang Jaschinski; Stephanie Jainta; Jörg Hoormann
Comparison of shutter glasses and mirror stereoscope for measuring dynamic and static vergence Journal Article
In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–7, 2008.
Vergence eye movement recordings in response to disparity step stimuli require presenting different stimuli to the two eyes. The traditional method is a mirror stereoscope. Shutter glasses are more convenient, but have disadvantages such as a limited repetition rate, residual crosstalk, and reduced luminance. Therefore, we compared both techniques by measuring (1) dynamic disparity step responses for stimuli of 1 and 3 deg and (2) fixation disparity, the static vergence error. Shutter glasses and mirror stereoscope gave very similar dynamic responses, with correlations of about 0.95 for the objectively measured vergence velocity and for the response amplitude reached 400 ms after the step stimulus (measured objectively with eye movement recordings and subjectively with dichoptic nonius lines). Both techniques also provided similar amounts of fixation disparity, tested with dichoptic nonius lines.
Annette Kinder; Martin Rolfs; Reinhold Kliegl
Sequence learning at optimal stimulus–response mapping: Evidence from a serial reaction time task Journal Article
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 2, pp. 203–209, 2008.
We propose a new version of the serial reaction time (SRT) task in which participants merely looked at the target instead of responding manually. As response locations were identical to target locations, stimulus–response compatibility was maximal in this task. We demonstrated that saccadic response times decreased during training and increased again when a new sequence was presented. It is unlikely that this effect was caused by stimulus–response (S–R) learning because bonds between (visual) stimuli and (oculomotor) responses were already well established before the experiment started. Thus, the finding shows that the building of S–R bonds is not essential for learning in the SRT task.
P. Christiaan Klink; Raymond Van Ee; M. M. Nijs; G. J. Brouwer; A. J. Noest; Richard J. A. van Wezel
Early interactions between neuronal adaptation and voluntary control determine perceptual choices in bistable vision Journal Article
In: Journal of Vision, vol. 8, no. 5, pp. 1–18, 2008.
At the onset of bistable stimuli, the brain needs to choose which of the competing perceptual interpretations will first reach awareness. Stimulus manipulations and cognitive control both influence this choice process, but the underlying mechanisms and interactions remain poorly understood. Using intermittent presentation of bistable visual stimuli, we demonstrate that short interruptions cause perceptual reversals upon the next presentation, whereas longer interstimulus intervals stabilize the percept. Top-down voluntary control biases this process but does not override the timing dependencies. Extending a recently introduced low-level neural model, we demonstrate that percept-choice dynamics in bistable vision can be fully understood with interactions in early neural processing stages. Our model includes adaptive neural processing preceding a rivalry resolution stage with cross-inhibition, adaptation, and an interaction of the adaptation levels with a neural baseline. Most importantly, our findings suggest that top-down attentional control over bistable stimuli interacts with low-level mechanisms at early levels of sensory processing before perceptual conflicts are resolved and perceptual choices about bistable stimuli are made.
Stefan Klöppel; Bogdan Draganski; Charlotte V. Golding; Carlton Chu; Zoltan Nagy; Philip A. Cook; Stephen L. Hicks; Christopher Kennard; Daniel C. Alexander; Geoff J. M. Parker; Sarah J. Tabrizi; Richard S. J. Frackowiak
White matter connections reflect changes in voluntary-guided saccades in pre-symptomatic Huntington's disease Journal Article
In: Brain, vol. 131, no. 1, pp. 196–204, 2008.
Huntington's disease is caused by a known genetic mutation and so potentially can be diagnosed many years before the onset of symptoms. Neuropathological changes have been found in both striatum and frontal cortex in the pre-symptomatic stage. Disruption of cortico-striatal white matter fibre tracts is therefore likely to contribute to the first clinical signs of the disease. We analysed diffusion tensor MR image (DTI) data from 25 pre-symptomatic gene carriers (PSCs) and 20 matched controls using a multivariate support vector machine to identify patterns of changes in fractional anisotropy (FA). In addition, we performed probabilistic fibre tracking to detect changes in 'streamlines' connecting frontal cortex to striatum. We found a pattern of structural brain changes that includes putamen bilaterally as well as anterior parts of the corpus callosum. This pattern was sufficiently specific to enable us to correctly classify 82% of scans as coming from a PSC or control subject. Fibre tracking revealed a reduction of frontal cortico-fugal streamlines reaching the body of the caudate in PSCs compared to controls. In the left hemispheres of PSCs we found a negative correlation between years to estimated disease onset and streamlines from frontal cortex to body of caudate. A large proportion of the fibres to the caudate body originate from the frontal eye fields, which play an important role in the control of voluntary saccades. This type of saccade is specifically impaired in PSCs and is an early clinical sign of motor abnormalities. A correlation analysis in 14 PSCs revealed that subjects with greater impairment of voluntary-guided saccades had fewer fibre tracking streamlines connecting the frontal cortex and caudate body. Our findings suggest a specific patho-physiological basis for these symptoms by indicating selective vulnerability of the associated white matter tracts.
Christopher M. Knapp; Irene Gottlob; Rebecca J. McLean; Frank A. Proudlock
Horizontal and vertical look and stare optokinetic nystagmus symmetry in healthy adult volunteers Journal Article
In: Investigative Ophthalmology & Visual Science, vol. 49, no. 2, pp. 581–588, 2008.
PURPOSE: Look optokinetic nystagmus (OKN) consists of voluntary tracking of details in a moving visual field, whereas stare OKN is reflexive and consists of shorter slow phases of lower gain. Horizontal OKN is symmetrical in healthy adults, whereas symmetry of vertical OKN is controversial. Horizontal and vertical look and stare OKN symmetry was measured, and the consistency of individual asymmetries and the effect of varying stimulus conditions were investigated. METHODS: Horizontal and vertical look and stare OKN gains were recorded in 15 healthy volunteers (40 degrees/s) using new methods to delineate look and stare OKN. Responses with right and left eye viewing were compared to investigate consistency of individual OKN asymmetry. In a second experiment, the symmetry of stare OKN was measured in nine volunteers varying velocity (20 degrees/s and 40 degrees/s), contrast (50% and 100%), grating contrast profile (square or sine wave), and stimulus shape (full screen or circular vignetted). RESULTS: There was no horizontal or vertical asymmetry in look or stare OKN gain for all volunteers grouped together. However, individual vertical asymmetries were strongly correlated for left and right eye viewing (look: r = 0.77
John D. Koehn; Elizabeth Roy; Jason J. S. Barton
The "diagonal effect": A systematic error in oblique antisaccades Journal Article
In: Journal of Neurophysiology, vol. 100, no. 2, pp. 587–597, 2008.
Antisaccades are known to show greater variable error and also a systematic hypometria in their amplitude compared with visually guided prosaccades. In this study, we examined whether their accuracy in direction (as opposed to amplitude) also showed a systematic error. We had human subjects perform prosaccades and antisaccades to goals located at a variety of polar angles. In the first experiment, subjects made prosaccades or antisaccades to one of eight equidistant locations in each block, whereas in the second, they made saccades to one of two equidistant locations per block. In the third, they made antisaccades to one of two locations at different distances but with the same polar angle in each block. Regardless of block design, the results consistently showed a saccadic systematic error, in that oblique antisaccades (but not prosaccades) requiring unequal vertical and horizontal vector components were deviated toward the 45 degrees diagonal meridians. This finding could not be attributed to range effects in either Cartesian or polar coordinates. A perceptual origin of the diagonal effect is suggested by similar systematic errors in other studies of memory-guided manual reaching or perceptual estimation of direction, and may indicate a common spatial bias when there is uncertain information about spatial location.
Wendy E. Huddleston; Edgar A. DeYoe
The representation of spatial attention in human parietal cortex dynamically modulates with performance Journal Article
In: Cerebral Cortex, vol. 18, no. 6, pp. 1272–1280, 2008.
The control and allocation of attention is an essential, ubiquitous neural process that gates our awareness of objects and events in the environment. Neural representations of the locus of spatial attention have been previously demonstrated in parietal cortex. However, the behavioral relevance of these neural representations is not known. While undergoing functional magnetic resonance imaging, subjects performed a covert spatial attention task that yielded a wide range of performance values. Voxels in parietal cortex selective for attended target location also dynamically modulated, becoming more or less responsive as performance levels changed. Surprisingly, this relationship was not linear. Responses peaked at intermediate performance levels and dropped both when performance was very high and when it was very low. Such dynamic modulation may represent a mechanism for organizing neural control signals according to behavioral task demands.
Falk Huettig; Robert J. Hartsuiker
When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production Journal Article
In: Memory and Cognition, vol. 36, no. 2, pp. 341–360, 2008.
Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 x 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).
Amelia R. Hunt; Craig S. Chapman; Alan Kingstone
Taking a long look at action and time perception Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 1, pp. 125–136, 2008.
Everyone has probably experienced chronostasis, an illusion of time that can cause a clock's second hand to appear to stand still during an eye movement. Though the illusion was initially thought to reflect a mechanism for preserving perceptual continuity during eye movements, an alternative hypothesis has been advanced that overestimation of time might be a general effect of any action. Contrary to both of these hypotheses, the experiments reported here suggest that distortions of time perception related to an eye movement are not distinct from temporal distortions for other kinds of responses. Moreover, voluntary action is neither necessary nor sufficient for overestimation effects. These results lead to a new interpretation of chronostasis based on the role of attention and memory in time estimation.
Albrecht W. Inhoff; Matthew S. Solomon; Bradley A. Seymour; Ralph Radach
Eye position changes during reading fixations are spatially selective Journal Article
In: Vision Research, vol. 48, no. 8, pp. 1027–1039, 2008.
Intra-fixation location changes were measured when one-line sentences written in lower or aLtErNaTiNg case were read. Intra-fixation location changes were common and their size was normally distributed except for a relatively high proportion of fixations without a discernible location change. Location changes that did occur were systematically biased toward the right when alternating case was read. Irrespective of case type, changes of the right eye were biased toward the right at the onset of sentence reading, and this spatial bias decreased as sentence reading progressed from left to right. The left eye showed a relatively stable right-directed bias. These results show that processing demands can pull the two fixated eyes in the same direction and that the response to this pull can differ for the right and left eye.
Albrecht W. Inhoff; Matthew S. Starr; Matthew S. Solomon; Lars Placke
Eye movements during the reading of compound words and the influence of lexeme meaning Journal Article
In: Memory and Cognition, vol. 36, no. 3, pp. 675–687, 2008.
We examined the use of lexeme meaning during the processing of spatially unified bilexemic compound words by manipulating both the location and the word frequency of the lexeme that primarily defined the meaning of a compound (i.e., the dominant lexeme). The semantically dominant and nondominant lexemes occupied either the beginning or the ending compound word location, and the beginning and ending lexemes could be either high- or low-frequency words. Three tasks were used--lexical decision, naming, and sentence reading--all of which focused on the effects of lexeme frequency as a function of lexeme dominance. The results revealed a larger word frequency effect for the dominant lexeme in all three tasks. Eye movements during sentence reading further revealed larger word frequency effects for the dominant lexeme via several oculomotor measures, including the duration of the first fixation on a compound word. These findings favor theoretical conceptions in which the use of lexeme meaning is an integral part of the compound recognition process.
Helene Intraub; Christopher A. Dickinson
False memory 1/20th of a second later: What the early onset of boundary extension reveals about perception Journal Article
In: Psychological Science, vol. 19, no. 10, pp. 1007–1014, 2008.
Errors of commission are thought to be caused by heavy memory loads, confusing information, lengthy retention intervals, or some combination of these factors. We report false memory beyond the boundaries of a view, boundary extension, after less than 1/20th of a second. Photographs of scenes were interrupted by a 42-ms or 250-ms mask, 250 ms into viewing, before reappearing or being replaced with a different view (Experiment 1). Postinterruption photographs that were unchanged were rated as closer up than the original views; when the photographs were changed, the same pair of closer-up and wider-angle views was rated as more similar when the closer view was first, rather than second. Thus, observers remembered preinterruption views with extended boundaries. Results were replicated when the interruption included a saccade (Experiment 2). The brevity of these interruptions has implications for visual scanning; it also challenges the traditional distinction between perception and memory. We offer an alternative conceptualization that shows how source monitoring can explain false memory after an interruption briefer than an eyeblink.
Roger Kalla; Neil G. Muggleton; Chi-Hung Juan; Alan Cowey; Vincent Walsh
The timing of the involvement of the frontal eye fields and posterior parietal cortex in visual search Journal Article
In: NeuroReport, vol. 19, no. 10, pp. 1069–1073, 2008.
The frontal eye fields (FEFs) and posterior parietal cortex (PPC) are important for target detection in conjunction visual search but the relative timings of their contribution have not been compared directly. We addressed this using temporally specific double pulse transcranial magnetic stimulation delivered at different times over FEFs and PPC during performance of a visual search task. Disruption of performance was earlier (0/40 ms) with FEF stimulation than with PPC stimulation (120/160 ms), revealing a clear and substantial temporal dissociation of the involvement of these two areas in conjunction visual search. We discuss these timings with reference to the respective roles of FEF and PPC in the modulation of extrastriate visual areas and selection of responses.
Andre Kaminiarz; Bart Krekelberg; Frank Bremmer
Expansion of visual space during optokinetic afternystagmus (OKAN) Journal Article
In: Journal of Neurophysiology, vol. 99, no. 5, pp. 2470–2478, 2008.
The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.
Keisuke Kawasaki; David L. Sheinberg
Learning to recognize visual objects with microstimulation in inferior temporal cortex Journal Article
In: Journal of Neurophysiology, vol. 100, no. 1, pp. 197–211, 2008.
The malleability of object representations by experience is essential for adaptive behavior. It has been hypothesized that neurons in inferior temporal cortex (IT) in monkeys are pivotal in visual association learning, evidenced by experiments revealing changes in neural selectivity following visual learning, as well as by lesion studies, wherein functional inactivation of IT impairs learning. A critical question remaining to be answered is whether IT neuronal activity is sufficient for learning. To address this question directly, we conducted experiments combining visual classification learning with microstimulation in IT. We assessed the effects of IT microstimulation during learning in cases where the stimulation was exclusively informative, conditionally informative, and informative but not necessary for the classification task. The results show that localized microstimulation in IT can be used to establish visual classification learning, and the same stimulation applied during learning can predictably bias judgments on subsequent recognition. The effect of induced activity can be explained neither by direct stimulation-motor association nor by simple detection of cortical stimulation. We also found that the learning effects are specific to IT stimulation as they are not observed by microstimulation in an adjacent auditory area. Our results add to the evidence that the differential activity in IT during visual association learning is sufficient for establishing new associations. The results suggest that experimentally manipulated activity patterns within IT can be effectively combined with ongoing visually induced activity during the formation of new associations.
Manabu Shikauchi; Shin Ishii; Tomohiro Shibata
Prediction of aperiodic target sequences by saccades Journal Article
In: Behavioural Brain Research, vol. 189, no. 2, pp. 325–331, 2008.
Through recording of saccadic eye movements, we investigated whether humans can predict aperiodic target sequences that cannot be predicted solely by memorizing short patterns within the target sequence. We proposed a novel experimental paradigm in which Auto-Regressive (AR) processes are used to generate aperiodic target sequences. If subjects can fully utilize knowledge of the AR dynamics that generated the target sequence, optimal prediction can be made. As a control task, a completely unpredictable (random) target sequence was generated by shuffling the AR sequences. Behavioral analysis suggested that prediction of the next target position in the AR sequence was significantly more successful than random guessing or the optimal guess for the random sequence. Although performance was not optimal, learning of the AR dynamics was observed for first-order AR sequences, suggesting that the subjects attempted to predict the next target position based on partially identified AR dynamics.
Alexandra Soliman; Gillian A. O'Driscoll; Jens Pruessner; Anne Lise V. Holahan; Isabelle Boileau; Danny Gagnon; Alain Dagher
Stress-induced dopamine release in humans at risk of psychosis: A [11C]raclopride PET study Journal Article
In: Neuropsychopharmacology, vol. 33, no. 8, pp. 2033–2041, 2008.
Drugs that increase dopamine levels in the brain can cause psychotic symptoms in healthy individuals and worsen them in schizophrenic patients. Psychological stress also increases dopamine release and is thought to play a role in susceptibility to psychotic illness. We hypothesized that healthy individuals at elevated risk of developing psychosis would show greater striatal dopamine release than controls in response to stress. Using positron emission tomography and [(11)C]raclopride, we measured changes in synaptic dopamine concentrations in 10 controls and 16 psychometric schizotypes; 9 with perceptual aberrations (PerAb, ie positive schizotypy) and 7 with physical anhedonia (PhysAn, ie negative schizotypy). [(11)C]Raclopride binding potential was measured during a psychological stress task and a sensory-motor control. All three groups showed significant increases in self-reported stress and cortisol levels between the stress and control conditions. However, only the PhysAn group showed significant stress-induced dopamine release. Dopamine release in the entire sample was significantly negatively correlated with smooth pursuit gain, an endophenotype linked to frontal lobe function. Our findings suggest the presence of abnormalities in the dopamine response to stress in negative symptom schizotypy, and provide indirect evidence of a link to frontal function.
Gianluca U. Sorrento; Denise Y. P. Henriques
Reference frame conversions for repeated arm movements Journal Article
In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2968–2984, 2008.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.
Jan L. Souman; Tom C. A. Freeman
Motion perception during sinusoidal smooth pursuit eye movements: Signal latencies and non-linearities Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–14, 2008.
Smooth pursuit eye movements add motion to the retinal image. To compensate, the visual system can combine estimates of pursuit velocity and retinal motion to recover motion with respect to the head. Little attention has been paid to the temporal characteristics of this compensation process. Here, we describe how the latency difference between the eye movement signal and the retinal signal can be measured for motion perception during sinusoidal pursuit. In two experiments, observers compared the peak velocity of a motion stimulus presented in pursuit and fixation intervals. Both the pursuit target and the motion stimulus moved with a sinusoidal profile. The phase and amplitude of the motion stimulus were varied systematically in different conditions, along with the amplitude of pursuit. The latency difference between the eye movement signal and the retinal signal was measured by fitting the standard linear model and a non-linear variant to the observed velocity matches. We found that the eye movement signal lagged the retinal signal by a small amount. The non-linear model fitted the velocity matches better than the linear one and this difference increased with pursuit amplitude. The results support previous claims that the visual system estimates eye movement velocity and retinal velocity in a non-linear fashion and that the latency difference between the two signals is small.
David Souto; Dirk Kerzel
Dynamics of attention during the initiation of smooth pursuit eye movements Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–16, 2008.
Many studies indicate that saccades are necessarily preceded by a shift of attention to the target location. There is no direct evidence for the same coupling during smooth pursuit. If smooth pursuit and attention were coupled, pursuit onset should be delayed whenever attention is focused on a stationary, non-target location. To test this hypothesis, observers were instructed to shift their attention to a peripheral location according to a location cue (Experiments 1 and 2) or a symbolic cue (Experiment 3) around the time of smooth pursuit initiation. Attending to static targets had only negligible effects on smooth pursuit latencies and the early open-loop response but lowered pursuit velocity substantially around the onset of closed-loop pursuit. Around this time, eye velocity reflected the competition between the to-be-tracked and to-be-attended object motion, entailing a reduction of eye velocity by 50% compared to the single task condition. The precise time course of attentional modulation of smooth pursuit initiation was at odds with the idea that an attention shift must precede any voluntary eye movement. Finally, the initial catch-up saccades were strongly delayed when attention was diverted from the pursuit target. Implications for models of target selection for pursuit and saccades are discussed.
Miriam Spering; Anna Montagnini; Karl R. Gegenfurtner
Competition between color and luminance for target selection in smooth pursuit and saccadic eye movements Journal Article
In: Journal of Vision, vol. 8, no. 15, pp. 1–19, 2008.
Visual processing of color and luminance for smooth pursuit and saccadic eye movements was investigated using a target selection paradigm. In two experiments, stimuli were varied along the dimensions color and luminance, and selection of the more salient target was compared in pursuit and saccades. Initial pursuit was biased in the direction of the luminance component whereas saccades showed a relative preference for color. An early pursuit response toward luminance was often reversed to color by a later saccade. Observers' perceptual judgments of stimulus salience, obtained in two control experiments, were clearly biased toward luminance. This choice bias in perceptual data implies that the initial short-latency pursuit response agrees with perceptual judgments. In contrast, saccades, which have a longer latency than pursuit, do not seem to follow the perceptual judgment of salience but instead show a stronger relative preference for color. These substantial differences in target selection imply that target selection processes for pursuit and saccadic eye movements use distinctly different weights for color and luminance stimuli.
Rike Steenken; Hans Colonius; Adele Diederich; Stefan Rach
Visual-auditory interaction in saccadic reaction time: Effects of auditory masker level Journal Article
In: Brain Research, vol. 1220, pp. 150–156, 2008.
Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Observed SRT reductions typically range between 10 and 50 ms and decrease as spatial disparity between the stimuli increases. Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory accessory. Here we probe this hypothesis by presenting an additional white-noise masker background of 3 s duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident vs. disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. As verified in a separate auditory localization task, localizability of the auditory accessory decreases with masker level. The SRT results are accounted for by a conceptual model positing that increasing masker level enlarges the area of possible auditory stimulus locations: it implies that perceivable distances decrease for disparate stimulus configurations and increase for coincident stimulus pairs.
Rike Steenken; Adele Diederich; Hans Colonius
Time course of auditory masker effects: Tapping the locus of audiovisual integration? Journal Article
In: Neuroscience Letters, vol. 435, no. 1, pp. 78–83, 2008.
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object.
Mariano Sigman; Jérôme Sackur; Antoine Del Cul; Stanislas Dehaene
Illusory displacement due to object substitution near the consciousness threshold Journal Article
In: Journal of Vision, vol. 8, no. 1, pp. 1–10, 2008.
A briefly presented target shape can be made invisible by the subsequent presentation of a mask that replaces the target. While varying the target-mask interval in order to investigate perception near the consciousness threshold, we discovered a novel visual illusion. At some intervals, the target is clearly visible, but its location is misperceived. By manipulating the mask's size and target's position, we demonstrate that the perceived target location is always displaced to the boundary of a virtual surface defined by the mask contours. Thus, mutual exclusion of surfaces appears as a cause of masking.
Michael A. Silver; Amitai Shenhav; Mark D'Esposito
Cholinergic enhancement reduces spatial spread of visual responses in human early visual cortex Journal Article
In: Neuron, vol. 60, no. 5, pp. 904–914, 2008.
Animal studies have shown that acetylcholine decreases excitatory receptive field size and spread of excitation in early visual cortex. These effects are thought to be due to facilitation of thalamocortical synaptic transmission and/or suppression of intracortical connections. We have used functional magnetic resonance imaging (fMRI) to measure the spatial spread of responses to visual stimulation in human early visual cortex. The cholinesterase inhibitor donepezil was administered to normal healthy human subjects to increase synaptic levels of acetylcholine in the brain. Cholinergic enhancement with donepezil decreased the spatial spread of excitatory fMRI responses in visual cortex, consistent with a role of acetylcholine in reducing excitatory receptive field size of cortical neurons. Donepezil also reduced response amplitude in visual cortex, but the cholinergic effects on spatial spread were not a direct result of reduced amplitude. These findings demonstrate that acetylcholine regulates spatial integration in human visual cortex.
Tim J. Smith; John M. Henderson
Edit Blindness: The relationship between attention and global change blindness in dynamic scenes Journal Article
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–17, 2008.
Although we experience the visual world as a continuous, richly detailed space, we often fail to notice large and significant changes. Such change blindness has been demonstrated for local object changes and changes to the visual form of whole images; however, it is assumed that total changes from one image to another would be easily detected. Film editing presents such total changes several times a minute, yet we rarely seem to be aware of them, a phenomenon we refer to here as edit blindness. This phenomenon has never been empirically demonstrated even though film editors believe they have at their disposal techniques that induce edit blindness, the Continuity Editing Rules. In the present study we tested the relationship between Continuity Editing Rules and edit blindness by instructing participants to detect edits while watching excerpts from feature films. Eye movements were recorded during the task. The results indicate that edits constructed according to the Continuity Editing Rules result in greater edit blindness than edits not adhering to the rules. A quarter of edits joining two viewpoints of the same scene were undetected, and this increased to a third when the edit coincided with a sudden onset of motion. Some cuts may be missed due to suppression of the cut transients by coinciding with eyeblinks or saccadic eye movements, but the majority seem to be due to inattentional blindness as viewers attend to the depicted narrative. In conclusion, this study presents the first empirical evidence of edit blindness and its relationship to natural attentional behaviour during dynamic scene viewing.
J. F. Soechting; Martha Flanders
Extrapolation of visual motion for manual interception Journal Article
In: Journal of Neurophysiology, vol. 99, no. 6, pp. 2956–2967, 2008.
A frequent goal of hand movement is to touch a moving target or to make contact with a stationary object that is in motion relative to the moving head and body. This process requires a prediction of the target's motion, since the initial direction of the hand movement anticipates target motion. This experiment was designed to define the visual motion parameters that are incorporated in this prediction of target motion. On seeing a go signal (a change in target color), human subjects slid the right index finger along a touch-sensitive computer monitor to intercept a target moving along an unseen circular or oval path. The analysis focused on the initial direction of the interception movement, which was found to be influenced by the time required to intercept the target and the target's distance from the finger's starting location. Initial direction also depended on the curvature of the target's trajectory in a manner that suggested that this parameter was underestimated during the process of extrapolation. The pattern of smooth pursuit eye movements suggests that the extrapolation of visual target motion was based on local motion cues around the time of the onset of hand movement, rather than on a cognitive synthesis of the target's pattern of motion.
Giovanni Taibbi; Zhong I. Wang; Louis F. Dell'Osso
Infantile nystagmus syndrome: Broadening the high-foveation-quality field with contact lenses Journal Article
In: Ophthalmology, vol. 2, no. 3, pp. 585–589, 2008.
We investigated the effects of contact lenses in broadening and improving the high-foveation-quality field in a subject with infantile nystagmus syndrome (INS). A high-speed, digitized video system was used for the eye-movement recording. The subject was asked to fixate a far target at different horizontal gaze angles with contact lenses inserted. Data from the subject while fixating at far without refractive correction and at near (at a convergence angle of 60 PD) were used for comparison. The eXpanded Nystagmus Acuity Function (NAFX) was used to evaluate the foveation quality at each gaze angle. Contact lenses broadened the high-foveation-quality range of gaze angles in this subject. The broadening was comparable to that achieved during 60 PD of convergence, although the NAFX values were lower. Contact lenses allowed the subject to see "more" (he had a wider range of high-foveation-quality gaze angles) and "better" (he had improved foveation at each gaze angle). Instead of being contraindicated by INS, contact lenses emerge as a potentially important therapeutic option. Contact lenses employ afferent feedback via the ophthalmic division of the V cranial nerve to damp INS slow phases over a broadened range of gaze angles. This supports the proprioceptive hypothesis of INS improvement.
Kohske Takahashi; Katsumi Watanabe
Persisting effect of prior experience of change blindness Journal Article
In: Perception, vol. 37, no. 2, pp. 324–327, 2008.
Most cognitive scientists know that an airplane tends to lose its engine when the display is flickering. How does such prior experience influence visual search? We recorded eye movements made by vision researchers while they were actively performing a change-detection task. In selected trials, we presented Rensink's familiar 'airplane' display, but with changes occurring at locations other than the jet engine. The observers immediately noticed that there was no change in the location where the engine had changed in the previous change-blindness demonstration. Nevertheless, eye-movement analyses indicated that the observers were compelled to look at the location of the unchanged engine. These results demonstrate the powerful effect of prior experience on eye movements, even when the observers are aware of the futility of doing so.
Benjamin W. Tatler; Benjamin T. Vincent
Systematic tendencies in scene viewing Journal Article
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–18, 2008.
While many current models of scene perception debate the relative roles of low- and high-level factors in eye guidance, systematic tendencies in how the eyes move may be informative. We consider how each saccade and fixation is influenced by that which preceded or followed it, during free inspection of images of natural scenes. We find evidence to suggest periods of localized scanning separated by 'global' relocations to new regions of the scene. We also find evidence to support the existence of small amplitude 'corrective' saccades in natural image viewing. Our data reveal statistical dependencies between successive eye movements, which may be informative in furthering our understanding of eye guidance.
Benjamin W. Tatler; Nicholas J. Wade; Kathrin Kaulard
Examining art: Dissociating pattern and perceptual influences on oculomotor behaviour Journal Article
In: Spatial Vision, vol. 21, no. 1, pp. 165–184, 2008.
When observing art the viewer's understanding results from the interplay between the marks made on the surface by the artist and the viewer's perception and knowledge of it. Here we use a novel set of stimuli to dissociate the influences of the marks on the surface and the viewer's perceptual experience upon the manner in which the viewer inspects art. Our stimuli provide the opportunity to study situations in which (1) the same visual stimulus can give rise to two different perceptual experiences in the viewer, and (2) the visual stimuli differ but give rise to the same perceptual experience in the viewer. We find that oculomotor behaviour changes when the perceptual experience changes. Oculomotor behaviour also differs when the viewer's perceptual experience is the same but the visual stimulus is different. The methodology used and insights gained from this study offer a first step toward an experimental exploration of the relative influences of the artist's creation and viewer's perception when viewing art and also toward a better understanding of the principles of composition in portraiture.
T. Teichert; Steffen Klingenhoefer; T. Wachtler; Frank Bremmer
Depth perception during saccades Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–13, 2008.
A number of studies have investigated the localization of briefly flashed targets during saccades to understand how the brain perceptually compensates for changes in gaze direction. Typical version saccades, i.e., saccades between two points of the horopter, are not only associated with changes in gaze direction, but also with large transient changes of ocular vergence. These transient changes in vergence have to be compensated for just as changes in gaze direction. We investigated depth judgments of perisaccadically flashed stimuli relative to continuously present references and report several novel findings. First, disparity thresholds increased around saccade onset. Second, for horizontal saccades, depth judgments were prone to systematic errors: Stimuli flashed around saccade onset were perceived in a closer depth plane than persistently shown references with the same retinal disparity. Briefly before and after this period, flashed stimuli tended to be perceived in a farther depth plane. Third, depth judgments for upward and downward saccades differed substantially: For upward, but not for downward saccades we observed the same pattern of mislocalization as for horizontal saccades. Finally, unlike localization in the fronto-parallel plane, depth judgments did not critically depend on the presence of visual references. Current models fail to account for the observed pattern of mislocalization in depth.
Masahiko Terao; Junji Watanabe; Akihiro Yagi; Shin'ya Nishida
Reduction of stimulus visibility compresses apparent time intervals Journal Article
In: Nature Neuroscience, vol. 11, no. 5, pp. 541–542, 2008.
The neural mechanisms underlying visual estimation of subsecond durations remain unknown, but perisaccadic underestimation of interflash intervals may provide a clue as to the nature of these mechanisms. Here we found that simply reducing the flash visibility, particularly the visibility of transient signals, induced similar time underestimation by human observers. Our results suggest that weak transient responses fail to trigger the proper detection of temporal asynchrony, leading to increased perception of simultaneity and apparent time compression.
Paul Sauleau; Pierre Pollak; Paul Krack; Jean Hubert Courjon; Alain Vighetto; Alim Louis Benabid; Denis Pélisson; Caroline Tilikete
Subthalamic stimulation improves orienting gaze movements in Parkinson's disease Journal Article
In: Clinical Neurophysiology, vol. 119, no. 8, pp. 1857–1863, 2008.
Objective: To determine the effect of subthalamic stimulation on visually triggered eye and head movements in patients with Parkinson's disease (PD). Methods: We compared the gain and latency of visually triggered eye and head movements in 12 patients bilaterally implanted into the subthalamic nucleus (STN) for severe PD and six age-matched control subjects. Visually triggered movements of eye (head restrained), and of eye and head (head unrestrained) were recorded in the absence of dopaminergic medication. Bilateral stimulation was turned OFF and then turned ON with voltage and contact used in chronic setting. The latency was determined from the beginning of initial horizontal eye movements relative to the target onset, and the gain was defined as the ratio of the amplitude of the initial movement to the amplitude of the target movement. Results: Without stimulation, the initiation of the head movement was significantly delayed in patients and the gain of head movement was reduced. Our patients also presented significantly prolonged latencies and hypometry of visually triggered saccades in the head-fixed condition and of gaze in head-free condition. Bilateral STN stimulation with therapeutic parameters improved performance of orienting gaze, eye and head movements towards the controls' level. Conclusions: These results demonstrate that visually triggered saccades and orienting eye-head movements are impaired in the advanced stage of PD. In addition, subthalamic stimulation enhances amplitude and shortens latency of these movements. Significance: These results are likely explained by alteration of the information processed by the superior colliculus (SC), a pivotal visuomotor structure involved in both voluntary and reflexive saccades. Improvement of movements with stimulation of the STN may be related to its positive input either on the STN-Substantia Nigra-SC pathway or on the parietal cortex-SC pathway.
Christoph Scheepers; Frank Keller; Mirella Lapata
Evidence for serial coercion: A time course analysis using the visual-world paradigm Journal Article
In: Cognitive Psychology, vol. 56, no. 1, pp. 1–29, 2008.
Metonymic verbs like start or enjoy often occur with artifact-denoting complements (e.g., The artist started the picture) although semantically they require event-denoting complements (e.g., The artist started painting the picture). In case of artifact-denoting objects, the complement is assumed to be type shifted (or coerced) into an event to conform to the verb's semantic restrictions. Psycholinguistic research has provided evidence for this kind of enriched composition: readers experience processing difficulty when faced with metonymic constructions compared to non-metonymic controls. However, slower reading times for metonymic constructions could also be due to competition between multiple interpretations that are being entertained in parallel whenever a metonymic verb is encountered. Using the visual-world paradigm, we devised an experiment which enabled us to determine the time course of metonymic interpretation in relation to non-metonymic controls. The experiment provided evidence in favor of a non-competitive, serial coercion process.
Anne-Catherine Scherlen; Jean-Baptiste Bernard; Aurélie Calabrèse; Eric Castet
Page mode reading with simulated scotomas: Oculo-motor patterns Journal Article
In: Vision Research, vol. 48, no. 18, pp. 1870–1878, 2008.
This study investigated the relationship between reading speed and oculo-motor parameters when normally sighted observers had to read single sentences with an artificial macular scotoma. Using multiple regression analysis, our main result shows that two significant predictors, number of saccades per sentence followed by average fixation duration, account for 94% of reading speed variance: reading speed decreases when number of saccades and fixation duration increase. The number of letters per forward saccade (L/FS), which was measured directly in contrast to previous studies, is not a significant predictor. The results suggest that, independently of the size of saccades, some or all portions of a sentence are temporally integrated across an increasing number of fixations as reading speed is reduced.
Laura Schmalzl; Romina Palermo; Melissa J. Green; Ruth Brunsdon; Max Coltheart
Training of familiar face recognition and visual scan paths for faces in a child with congenital prosopagnosia Journal Article
In: Cognitive Neuropsychology, vol. 25, no. 5, pp. 704–729, 2008.
In the current report we describe a successful training study aimed at improving recognition of a set of familiar face photographs in K., a 4-year-old girl with congenital prosopagnosia (CP). A detailed assessment of K.'s face-processing skills showed a deficit in structural encoding, most pronounced in the processing of facial features within the face. In addition, eye movement recordings revealed that K.'s scan paths for faces were characterized by a large percentage of fixations directed to areas outside the internal core features (i.e., eyes, nose, and mouth), in particular by poor attendance to the eye region. Following multiple baseline assessments, training focused on teaching K. to reliably recognize a set of familiar face photographs by directing visual attention to specific characteristics of the internal features of each face. The training significantly improved K.'s ability to recognize the target faces, with her performance being flawless immediately after training as well as at a follow-up assessment 1 month later. In addition, eye movement recordings following training showed a significant change in K.'s scan paths, with a significant increase in the percentage of fixations directed to the internal features, particularly the eye region. Encouragingly, not only was the change in scan paths observed for the set of familiar trained faces, but it generalized to a set of faces that was not presented during training. In addition to documenting significant training effects, our study raises the intriguing question of whether abnormal scan paths for faces may be a common factor underlying face recognition impairments in childhood CP, an issue that has not been explored so far.
Michael Schneider; Angela Heine; Verena Thaler; Joke Torbeyns; Bert De Smedt; Lieven Verschaffel; Arthur M. Jacobs; Elsbeth Stern
A validation of eye movements as a measure of elementary school children's developing number sense Journal Article
In: Cognitive Development, vol. 23, no. 3, pp. 409–422, 2008.
The number line estimation task captures central aspects of children's developing number sense, that is, their intuitions for numbers and their interrelations. Previous research used children's answer patterns and verbal reports as evidence of how they solve this task. In the present study we investigated to what extent eye movements recorded during task solution reflect children's use of the number line. By means of a cross-sectional design with 66 children from Grades 1, 2, and 3, we show that eye-tracking data (a) reflect grade-related increase in estimation competence, (b) are correlated with the accuracy of manual answers, (c) relate, in Grade 2, to children's addition competence, (d) are systematically distributed over the number line, and (e) replicate previous findings concerning children's use of counting strategies and orientation-point strategies. These findings demonstrate the validity and utility of eye-tracking data for investigating children's developing number sense and estimation competence.
Werner X. Schneider; Ellen Matthias; Melissa L. -H. Võ
Transsaccadic scene memory revisited: A 'Theory of Visual Attention (TVA)' based approach to recognition memory and confidence for objects in naturalistic scenes Journal Article
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–13, 2008.
The study presented here introduces a new approach to the investigation of transsaccadic memory for objects in naturalistic scenes. Participants were tested with a whole-report task from which — based on the theory of visual attention (TVA) — processing efficiency parameters were derived, namely visual short-term memory storage capacity and visual processing speed. By combining these processing efficiency parameters with transsaccadic memory data from a previous study, we were able to take a closer look at the contribution of visual short-term memory capacity and processing speed to the establishment of visual long-term memory representations during scene viewing. Results indicate that especially the VSTM storage capacity plays a major role in the generation of transsaccadic visual representations of naturalistic scenes.
Alexander C. Schütz; Doris I. Braun; Dirk Kerzel; Karl R. Gegenfurtner
Improved visual sensitivity during smooth pursuit eye movements Journal Article
In: Nature Neuroscience, vol. 11, no. 10, pp. 1211–1216, 2008.
When we view the world around us, we constantly move our eyes. This brings objects of interest into the fovea and keeps them there, but visual sensitivity has been shown to deteriorate while the eyes are moving. Here we show that human sensitivity for some visual stimuli is improved during smooth pursuit eye movements. Detection thresholds for briefly flashed, colored stimuli were 16% lower during pursuit than during fixation. Similarly, detection thresholds for luminance-defined stimuli of high spatial frequency were lowered. These findings suggest that the pursuit-induced sensitivity increase may have its neuronal origin in the parvocellular retino-thalamic system. This implies that the visual system not only uses feedback connections to improve processing for locations and objects being attended to, but that a whole processing subsystem can be boosted. During pursuit, facilitation of the parvocellular system may reduce motion blur for stationary objects and increase sensitivity to speed changes of the tracked object.
Timo Stein; Ignacio Vallines; Werner X. Schneider
Primary visual cortex reflects behavioral performance in the attentional blink Journal Article
In: NeuroReport, vol. 19, no. 13, pp. 1277–1281, 2008.
When two masked targets are presented in a rapid sequence, attentional limitations are reflected in reduced identification accuracy for the second target (T2). We used functional magnetic resonance imaging to disentangle the distinct neural substrates of T2 processing during this attentional blink phenomenon. Spatially separating the two targets allows the retinotopic localization of the different stimuli's encoding sites in primary visual cortex (V1) and thus enables activation elicited by each target to be differentially measured in V1. The encoding location of the second target mirrored T2 identification accuracy in a retinotopically specific manner. These results are the first evidence for effects of behavioral performance on hemodynamic responses in V1 under conditions of the attentional blink.
Brian Sullivan; Jelena Jovancevic-Misic; Mary Hayhoe; Gwen Sterns
Use of multiple preferred retinal loci in Stargardt's disease during natural tasks: A case study Journal Article
In: Ophthalmic and Physiological Optics, vol. 28, no. 2, pp. 168–177, 2008.
Individuals with central visual field loss often use a preferred retinal locus (PRL) to compensate for their deficit. We present a case study examining the eye movements of a subject with Stargardt's disease causing bilateral central scotomas, while performing a set of natural tasks including: making a sandwich; building a model; reaching and grasping; and catching a ball. In general, the subject preferred to use PRLs in the lower left visual field. However, there was considerable variation in the location and extent of the PRLs used. Our results demonstrate that a well-defined PRL is not necessary to adequately perform this set of tasks and that many sites in the peripheral retina may be viable for PRLs, contingent on task and stimulus constraints.
Joshua M. Susskind; Daniel H. Lee; Andrée Cusi; Roman Feiman; Wojtek Grabski; Adam K. Anderson
Expressing fear enhances sensory acquisition Journal Article
In: Nature Neuroscience, vol. 11, no. 7, pp. 843–850, 2008.
It has been proposed that facial expression production originates in sensory regulation. Here we demonstrate that facial expressions of fear are configured to enhance sensory acquisition. A statistical model of expression appearance revealed that fear and disgust expressions have opposite shape and surface reflectance features. We hypothesized that this reflects a fundamental antagonism serving to augment versus diminish sensory exposure. In keeping with this hypothesis, when subjects posed expressions of fear, they had a subjectively larger visual field, faster eye movements during target localization and an increase in nasal volume and air velocity during inspiration. The opposite pattern was found for disgust. Fear may therefore work to enhance perception, whereas disgust dampens it. These convergent results provide support for the Darwinian hypothesis that facial expressions are not arbitrary configurations for social communication, but rather, expressions may have originated in altering the sensory interface with the physical world.
Areh Mikulić; Michael C. Dorris
Temporal and spatial allocation of motor preparation during a mixed-strategy game Journal Article
In: Journal of Neurophysiology, vol. 100, no. 4, pp. 2101–2108, 2008.
Adopting a mixed response strategy in competitive situations can prevent opponents from exploiting predictable play. What drives stochastic action selection is unclear given that choice patterns suggest that, on average, players are indifferent to available options during mixed-strategy equilibria. To gain insight into this stochastic selection process, we examined how motor preparation was allocated during a mixed-strategy game. If selection processes on each trial reflect a global indifference between options, then there should be no bias in motor preparation (unbiased preparation hypothesis). If, however, differences exist in the desirability of options on each trial then motor preparation should be biased toward the preferred option (biased preparation hypothesis). We tested between these alternatives by examining how saccade preparation was allocated as human subjects competed against an adaptive computer opponent in an oculomotor version of the game "matching pennies." Subjects were free to choose between two visual targets using a saccadic eye movement. Saccade preparation was probed by occasionally flashing a visual distractor at a range of times preceding target presentation. The probability that a distractor would evoke a saccade error, and when it failed to do so, the probability of choosing each of the subsequent targets quantified the temporal and spatial evolution of saccade preparation, respectively. Our results show that saccade preparation became increasingly biased as the time of target presentation approached. Specifically, the spatial locus to which saccade preparation was directed varied from trial to trial, and its time course depended on task timing.
William L. Miller; Vincenzo Maffei; Gianfranco Bosco; Marco Iosa; Myrka Zago; Emiliano Macaluso; Francesco Lacquaniti
Vestibular nuclei and cerebellum put visual gravitational motion in context Journal Article
In: Journal of Neurophysiology, vol. 99, no. 4, pp. 1969–1982, 2008.
Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion.
D. A. Mills; Teresa C. Frohman; Scott L. Davis; A. R. Salter; Samuel M. McClure; I. Beatty; A. Shah; S. Galetta; E. Eggenberger; D. S. Zee; Elliot M. Frohman
Break in binocular fusion during head turning in MS patients with INO Journal Article
In: Neurology, vol. 71, pp. 457–460, 2008.
Internuclear ophthalmoparesis (INO) is the most common eye movement abnormality observed in patients with multiple sclerosis (MS).1 While most MS patients with INO have no or little misalignment in the straight ahead position, significant disconjugacy occurs during horizontal saccades or with horizontal (yaw axis) head turning.2 A break in binocular fusion can produce a loss of stereopsis and depth perception, transient diplopia (perceived as a double image or visual blur), oscillopsia, and disorientation.2 The purpose of this investigation was to confirm the hypothesis that a break in binocular fusion occurs in MS patients with INO during head or body turning, and that the magnitude of disconjugacy will be directly correlated with the severity of this eye movement syndrome.
Don C. Mitchell; Xingjia Shen; Matthew J. Green; Timothy L. Hodgson
Accounting for regressive eye-movements in models of sentence processing: A reappraisal of the Selective Reanalysis hypothesis Journal Article
In: Journal of Memory and Language, vol. 59, no. 3, pp. 266–293, 2008.
When people read temporarily ambiguous sentences, there is often an increased prevalence of regressive eye-movements launched from the word that resolves the ambiguity. Traditionally, such regressions have been interpreted at least in part as reflecting readers' efforts to re-read and reconfigure earlier material, as exemplified by the Selective Reanalysis hypothesis [Frazier, L., & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178-210]. Within such frameworks it is assumed that the selection of saccadic landing-sites is linguistically supervised. As an alternative to this proposal, we consider the possibility (dubbed the Time Out hypothesis) that regression control is partly decoupled from linguistic operations and that landing-sites are instead selected on the basis of low-level spatial properties such as their proximity to the point from which the regressive saccade was launched. Two eye-tracking experiments were conducted to compare the explanatory potential of these two accounts. Experiment 1 manipulated the formatting of linguistically identical sentences and showed, contrary to purely linguistic supervision, that the landing site of the first regression from a critical word was reliably influenced by the physical layout of the text. Experiment 2 used a fixed physical format but manipulated the position in the display at which reanalysis-relevant material was located. Here the results showed a highly reliable linguistic influence on the overall distribution of regression landing sites (though with few effects being apparent on the very first regression). These results are interpreted as reflecting mutually exclusive forms of regression control with fixation sequences being influenced both by spatially constrained, partially decoupled supervision systems as well as by some kind of linguistic guidance. The findings are discussed in relation to existing computational models of eye-movements in reading.
Thomas Nyffeler; Dario Cazzoli; Pascal Wurtz; Mathias Lüthi; Roman Von Wartburg; Silvia Chaves; Anouk Déruaz; Christian W. Hess; René M. Müri
Neglect-like visual exploration behaviour after theta burst transcranial magnetic stimulation of the right posterior parietal cortex Journal Article
In: European Journal of Neuroscience, vol. 27, no. 7, pp. 1809–1813, 2008.
The right posterior parietal cortex (PPC) is critically involved in visual exploration behaviour, and damage to this area may lead to neglect of the left hemispace. We investigated whether neglect-like visual exploration behaviour could be induced in healthy subjects using theta burst repetitive transcranial magnetic stimulation (rTMS). To this end, one continuous train of theta burst rTMS was applied over the right PPC in 12 healthy subjects prior to a visual exploration task where colour photographs of real-life scenes were presented on a computer screen. In a control experiment, stimulation was also applied over the vertex. Eye movements were measured, and the distribution of visual fixations in the left and right halves of the screen was analysed. In comparison to the performance of 28 control subjects without stimulation, theta burst rTMS over the right PPC, but not the vertex, significantly decreased cumulative fixation duration in the left screen-half and significantly increased cumulative fixation duration in the right screen-half for a time period of 30 min. These results suggest that theta burst rTMS is a reliable method of inducing transient neglect-like visual exploration behaviour.
Hans P. Op De Beeck; Jennifer A. Deutsch; Wim Vanduffel; Nancy Kanwisher; James J. DiCarlo
A stable topography of selectivity for unfamiliar shape classes in monkey inferior temporal cortex Journal Article
In: Cerebral Cortex, vol. 18, no. 7, pp. 1676–1694, 2008.
The inferior temporal (IT) cortex in monkeys plays a central role in visual object recognition and learning. Previous studies have observed patches in IT cortex with strong selectivity for highly familiar object classes (e.g., faces), but the principles behind this functional organization are largely unknown due to the many properties that distinguish different object classes. To unconfound shape from meaning and memory, we scanned monkeys with functional magnetic resonance imaging while they viewed classes of initially novel objects. Our data revealed a topography of selectivity for these novel object classes across IT cortex. We found that this selectivity topography was highly reproducible and remarkably stable across a 3-month interval during which monkeys were extensively trained to discriminate among exemplars within one of the object classes. Furthermore, this selectivity topography was largely unaffected by changes in behavioral task and object retinal position, both of which preserve shape. In contrast, it was strongly influenced by changes in object shape. The topography was partially related to, but not explained by, the previously described pattern of face selectivity. Together, these results suggest that IT cortex contains a large-scale map of shape that is largely independent of meaning, familiarity, and behavioral task.
M. Niwa; J. Ditterich
Perceptual decisions between multiple directions of visual motion Journal Article
In: Journal of Neuroscience, vol. 28, no. 17, pp. 4435–4445, 2008.
Previous studies and models of perceptual decision making have largely focused on binary choices. However, we often have to choose from multiple alternatives. To study the neural mechanisms underlying multialternative decision making, we have asked human subjects to make perceptual decisions between multiple possible directions of visual motion. Using a multicomponent version of the random-dot stimulus, we were able to control experimentally how much sensory evidence we wanted to provide for each of the possible alternatives. We demonstrate that this task provides a rich quantitative dataset for multialternative decision making, spanning a wide range of accuracy levels and mean response times. We further present a computational model that can explain the structure of our behavioral dataset. It is based on the idea of a race between multiple integrators to a decision threshold. Each of these integrators accumulates net sensory evidence for a particular choice, provided by linear combinations of the activities of decision-relevant pools of sensory neurons.
Lauri Nummenmaa; Jussi Hirvonen; Riitta Parkkola; Jari K. Hietanen
Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy Journal Article
In: NeuroImage, vol. 43, no. 3, pp. 571–580, 2008.
Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other people's mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.
Inger Montfoort; Josef N. Geest; Harm P. Slijper; Chris I. Zeeuw; Maarten A. Frens
Adaptation of the cervico- and vestibulo-ocular reflex in whiplash injury patients Journal Article
In: Journal of Neurotrauma, vol. 25, pp. 687–693, 2008.
The aim of this study was to investigate the underlying mechanisms of the increased gains of the cervico-ocular reflex (COR) and the lack of synergy between the COR and the vestibulo-ocular reflex (VOR) that have been previously observed in patients with whiplash-associated disorders (WAD). Eye movements during COR or VOR stimulation were recorded in four different experiments. The effect of restricted neck motion and the relationship between muscle activity and COR gain was examined in healthy controls. The adaptive ability of the COR and the VOR was tested in WAD patients and healthy controls. Reduced neck mobility yielded an increase in COR gain. No correlation between COR gain and muscle activity was observed. Adaptation of both the COR and VOR was observed in healthy controls, but not in WAD patients. The increased COR gain of WAD patients may stem from a reduced neck mobility. The lack of adaptation of the two stabilization reflexes may result in a lack of synergy between them. These abnormalities may underlie several of the symptoms frequently observed in WAD, such as vertigo and dizziness.
Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven
Cue validity effects in response preparation: A pupillometric study Journal Article
In: Brain Research, vol. 1196, pp. 94–102, 2008.
This study examined the effects of cue validity and cue difficulty on response preparation to provide a test of the Grouping Model [Adam, J.J., Hommel, B. and Umiltà, C., 2003. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognit. Psychol. 46(3), 302-58; Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II): automatic and effortful processes in response cuing. Vis. Cogn. 12(8), 1444-1473]. We used the pupillary response to index the cognitive processing load during and after the preparatory interval (2 s). Twenty-two participants performed the finger-cuing tasks with valid (75%) and invalid (25%) cues. Results showed longer reaction times, more errors, and larger pupil dilations for invalid than valid cues. During the preparation interval, pupil dilation varied systematically with cue difficulty, with easy cues (specifying 2 fingers on 1 hand) showing less pupil dilation than difficult cues (specifying 2 fingers on 2 hands). After the preparation interval, this pattern of differential pupil dilation as a function of cue difficulty reversed for invalid cues, suggesting that cues which incorrectly specified fingers on one hand required more effortful reprogramming operations than cues which incorrectly specified fingers on two hands. These outcomes were consistent with predictions derived from the Grouping Model. Finally, all participants exhibited two distinct pupil dilation strategies: an "early" strategy in which the onset of the main pupil dilation was tied to onset of the cue, and a "late" strategy in which the onset of the main pupil dilation was tied to the onset of the target. Thus, whereas the early pupil dilation strategy showed a strong dilation during the preparation interval, the late pupil strategy showed a strong constriction. Interestingly, only the late onset pupil dilation strategy revealed the above reported sensitivity to cue difficulty, showing for the first time that the well-known pupil's sensitivity to task difficulty can also emerge when the pupil is constricting instead of dilating.
Sofie Moresi; Jos J. Adam; Jons Rijcken; Pascal W. M. Van Gerven; Harm Kuipers; Jelle Jolles
Pupil dilation in response preparation Journal Article
In: International Journal of Psychophysiology, vol. 67, no. 2, pp. 124–130, 2008.
This study examined changes in pupil size during response preparation in a finger-cuing task. Based on the Grouping Model of finger preparation [Adam, J.J., Hommel, B. and Umiltà, C., 2003b. Preparing for perception and action (I): the role of grouping in the response-cuing paradigm. Cognitive Psychology, 46(3), 302-358; Adam, J.J., Hommel, B. and Umiltà, C., 2005. Preparing for perception and action (II): automatic and effortful processes in response cuing. Visual Cognition, 12(8), 1444-1473], it was hypothesized that the selection and preparation of more difficult response sets would be accompanied by larger pupillary dilations. The results supported this prediction, thereby extending the validity of pupil size as a measure of cognitive load to the domain of response preparation.
Jane L. Morgan; Gus Elswijk; Antje S. Meyer
Extrafoveal processing of objects in a naming task: Evidence from word probe experiments Journal Article
In: Psychonomic Bulletin & Review, vol. 15, no. 3, pp. 561–565, 2008.
In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
Linda Mortensen; Antje S. Meyer; Glyn W. Humphreys
Speech planning during multiple-object naming: Effects of ageing Journal Article
In: Quarterly Journal of Experimental Psychology, vol. 61, no. 8, pp. 1217–1238, 2008.
Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
S. Moshel; Ari Z. Zivotofsky; L. Jin-Rong; Ralf Engbert; Jürgen Kurths; Reinhold Kliegl; Shlomo Havlin
Persistence and phase synchronisation properties of fixational eye movements Journal Article
In: The European Physical Journal Special Topics, vol. 161, pp. 207–223, 2008.
When we fixate our gaze on a stable object, our eyes move continuously with extremely small, involuntary, autonomous movements of which we are unaware even as they occur. One of the roles of these fixational eye movements is to prevent the adaptation of the visual system to continuous illumination and inhibit fading of the image. These random, small movements are restricted at long time scales so as to keep the target at the centre of the field of view. In addition, the synchronisation properties between both eyes are related to binocular coordination in order to provide stereopsis. We investigated the roles of different time scale behaviours, especially how they are expressed in the different spatial directions (vertical versus horizontal). We also tested the synchronisation between both eyes. Results show different scaling behaviour between horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling behaviour in both axes becomes similar. Our findings suggest that microsaccades enhance the persistence at short time scales mostly in the horizontal component and much less in the vertical component. We also applied the phase synchronisation decay method to study the synchronisation between six combinations of binocular fixational eye movement components. We found that the vertical-vertical components of the right and left eyes are significantly more synchronised than the horizontal-horizontal components. These differences may be due to the need to continuously move the eyes in the horizontal plane in order to match the stereoscopic image for different viewing distances.
Brad C. Motter; Diglio A. Simoni
Changes in the functional visual field during search with and without eye movements Journal Article
In: Vision Research, vol. 48, pp. 2382–2393, 2008.
The size of the functional visual field (FVF) is dynamic, changing with the context and attentive demand that each fixation brings as we move our eyes and head to explore the visual scene. Using performance measures of the FVF we show that during search conditions with eye movements, the FVF is small compared to the size of the FVF measured during search without eye movements. In all cases the size of the FVF is constrained by the density of distracting items. During search without eye movements the FVF expands with time; subjects have idiosyncratic spatial biases suggesting covert shifts of attention. For search within the constraints imposed by item density, the rate of item inspection is the same across all search conditions. Array set size effects are not apparent once stimulus density is taken into account, a result that is consistent with a spatial constraint for the FVF based on the cortical separation hypothesis.
Manon Mulckhuyse; Wieske Zoest; Jan Theeuwes
Capture of the eyes by relevant and irrelevant onsets Journal Article
In: Experimental Brain Research, vol. 186, no. 2, pp. 225–235, 2008.
During early visual processing the eyes can be captured by salient visual information in the environment. Whether a salient stimulus captures the eyes in a purely automatic, bottom-up fashion or whether capture is contingent on task demands is still under debate. In the first experiment, we manipulated the relevance of a salient onset distractor. The onset distractor could either be similar or dissimilar to the target. Error saccade latency distributions showed that early in time, oculomotor capture was driven purely bottom-up irrespective of distractor similarity. Later in time, top-down information became available resulting in contingent capture. In the second experiment, we manipulated the saliency information at the target location. A salient onset stimulus could be presented either at the target or at a non-target location. The latency distributions of error and correct saccades had a similar time-course as those observed in the first experiment. Initially, the distributions overlapped but later in time task-relevant information decelerated the oculomotor system. The present findings reveal the interaction between bottom-up and top-down processes in oculomotor behavior. We conclude that the task relevance of a salient event is not crucial for capture of the eyes to occur. Moreover, task-relevant information may integrate with saliency information to initiate saccades, but only later in time.
Jorge Otero-Millan; Xoana G. Troncoso; Stephen L. Macknik; Ignacio Serrano-Pedraza; Susana Martinez-Conde
Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–18, 2008.
Microsaccades are known to occur during prolonged visual fixation, but it is a matter of controversy whether they also happen during free-viewing. Here we set out to determine: 1) whether microsaccades occur during free visual exploration and visual search, 2) whether microsaccade dynamics vary as a function of visual stimulation and viewing task, and 3) whether saccades and microsaccades share characteristics that might argue in favor of a common saccade-microsaccade oculomotor generator. Human subjects viewed naturalistic stimuli while performing various viewing tasks, including visual exploration, visual search, and prolonged visual fixation. Their eye movements were simultaneously recorded with high precision. Our results show that microsaccades are produced during the fixation periods that occur during visual exploration and visual search. Microsaccade dynamics during free-viewing moreover varied as a function of visual stimulation and viewing task, with increasingly demanding tasks resulting in increased microsaccade production. Moreover, saccades and microsaccades had comparable spatiotemporal characteristics, including the presence of equivalent refractory periods between all pair-wise combinations of saccades and microsaccades. Thus our results indicate a microsaccade-saccade continuum and support the hypothesis of a common oculomotor generator for saccades and microsaccades.
Xiaochuan Pan; Kosuke Sawa; Ichiro Tsuda; Minoru Tsukada; Masamichi Sakagami
Reward prediction based on stimulus categorization in primate lateral prefrontal cortex Journal Article
In: Nature Neuroscience, vol. 11, no. 6, pp. 703–712, 2008.
To adapt to changeable or unfamiliar environments, it is important that animals develop strategies for goal-directed behaviors that meet the new challenges. We used a sequential paired-association task with asymmetric reward schedule to investigate how prefrontal neurons integrate multiple already-acquired associations to predict reward. Two types of reward-related neurons were observed in the lateral prefrontal cortex: one type predicted reward independent of physical properties of visual stimuli and the other encoded the reward value specific to a category of stimuli defined by the task requirements. Neurons of the latter type were able to predict reward on the basis of stimuli that had not yet been associated with reward, provided that another stimulus from the same category was paired with reward. The results suggest that prefrontal neurons can represent reward information on the basis of category and propagate this information to category members that have not been linked directly with any experience of reward.
Sebastian Pannasch; Jens R. Helmert; Katharina Roth; Ann-Katrin Herbold; Henrik Walter
Visual fixation durations and saccade amplitudes: Shifting relationship in a variety of conditions Journal Article
In: Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–19, 2008.
Is there any relationship between visual fixation durations and saccade amplitudes in free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3) and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in the durations of fixations and a decrease for saccadic amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance of the two modes of visual information processing.
Bob McMurray; Richard N. Aslin; Michael K. Tanenhaus; Michael J. Spivey; Dana Subik
Gradient sensitivity to within-category variation in words and syllables Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 6, pp. 1609–1631, 2008.
Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.
Bob McMurray; Meghan Clayards; Michael K. Tanenhaus; Richard N. Aslin
Tracking the time course of phonetic cue integration during spoken word recognition Journal Article
In: Psychonomic Bulletin & Review, vol. 15, no. 6, pp. 1064–1071, 2008.
Speech perception requires listeners to integrate multiple cues that each contribute to judgments about a phonetic category. Classic studies of trading relations assessed the weights attached to each cue but did not explore the time course of cue integration. Here, we provide the first direct evidence that asynchronous cues to voicing (/b/ vs. /p/) and manner (/b/ vs. /w/) contrasts become available to the listener at different times during spoken word recognition. Using the visual world paradigm, we show that the probability of eye movements to pictures of target and of competitor objects diverge at different points in time after the onset of the target word. These points of divergence correspond to the availability of early (voice onset time or formant transition slope) and late (vowel length) cues to voicing and manner contrasts. These results support a model of cue integration in which phonetic cues are used for lexical access as soon as they are available.
David Melcher
Dynamic, object-based remapping of visual features in trans-saccadic perception Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–17, 2008.
Saccadic eye movements can dramatically change the location in which an object is projected onto the retina. One mechanism that might potentially underlie the perception of stable objects, despite the occurrence of saccades, is the "remapping" of receptive fields around the time of saccadic eye movements. Here we examined two possible models of trans-saccadic remapping of visual features: (1) spatiotopic coordinates that remain constant across saccades or (2) an object-based remapping in retinal coordinates. We used form adaptation to test "object" and "space" based predictions for an adapter that changed spatial and/or retinal location due to eye movements, object motion or manual displacement using a computer mouse. The predictability and speed of the object motion was also manipulated. The main finding was that maximum transfer of the form aftereffect in retinal coordinates occurred when there was a saccade and when the object motion was attended and predictable. A small transfer was also found when observers moved the object across the screen using a computer mouse. The overall pattern of results is consistent with the theory of object-based remapping for salient stimuli. Thus, the active updating of the location and features of attended objects may play a role in perceptual stability.
Antje S. Meyer; Marc Ouellet; Christine Häcker
Parallel processing of objects in a naming task Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 34, no. 4, pp. 982–987, 2008.
The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
Ikuya Murakami; Rumi Hisakata
The effects of eccentricity and retinal illuminance on the illusory motion seen in a stationary luminance gradient Journal Article
In: Vision Research, vol. 48, no. 19, pp. 1940–1948, 2008.
Kitaoka recently reported a novel illusion named the Rotating Snakes [Kitaoka, A., & Ashida, H. (2003). Phenomenal characteristics of the peripheral drift illusion. Vision, 15, 261-262], in which a stationary pattern appears to rotate constantly. In the first experiment, we attempted to quantify the anecdote that this illusion is better perceived in the periphery. The stimulus was a ring composed of stepwise luminance patterns and was presented in the left visual field. With increasing eccentricity up to 10-14 deg, the cancellation velocity required to establish perceptual stationarity increased. In the next experiment, we examined the effect of retinal illuminance. Interestingly, the cancellation velocity decreased as retinal illuminance was decreased. We also estimated the human temporal impulse response at some retinal illuminances by using the double-pulse method to confirm that the shape of the impulse response actually changes from biphasic to monophasic, which indicates that the transient processing system has weaker activities at lower illuminances. We conclude that some transient temporal processing system is necessary for the illusion.
Chie Nakatani; Cees Van Leeuwen
A pragmatic approach to multi-modality and non-normality in fixation duration studies of cognitive processes Journal Article
In: Journal of Eye Movement Research, vol. 1, no. 2, pp. 1–12, 2008.
Interpreting eye-fixation durations in terms of cognitive processing load is complicated by the multimodality of their distribution. An important source of multimodality is the distinction between single and multiple fixations to the same object. Based on this distinction, we separated the log-transformed distribution of fixation durations made to an object in a non-reading task. We could reasonably conclude that the separated distributions belong to the same general logistic distribution, which has a finite population mean and variance. This allowed us to use the sample means as dependent variables in a parametric analysis. Six tasks were compared, which required different levels of post-perceptual processing. A no-task control condition was added to test for perceptual processing. Fixation durations differentiated task-specific perceptual, but not post-perceptual, processing demands.
Harold T. Nefs; J. M. Harris
Induced motion in depth and the effects of vergence eye movements Journal Article
In: Journal of Vision, vol. 8, no. 3, pp. 1–16, 2008.
Induced motion is the false impression that physically stationary objects move in the presence of other objects that really move. In this study, we investigated this motion illusion in the depth dimension. We raised three related questions, as follows: (1) What cues in the stimulus are responsible for this motion illusion in depth? (2) Is the size of this illusion affected by vergence eye movements? And (3) are the effects of eye movements different for motion in depth and for motion in the frontoparallel plane? To answer these questions, we measured the point of subjective stationarity. Observers viewed an inducer target that oscillated in depth and a test target that was located directly above it. The test target moved in phase or out of phase with the inducer, but with a smaller amplitude. Observers had to indicate whether the test target and the inducer target moved in phase or out of phase with one another. They were asked to keep their eyes either on the test target or on the inducer. For motion in depth, created by binocular disparity and retinal size change or by binocular disparity alone, we found that when the eyes followed the inducer, subjective stationarity occurred at approximately 40-45% of the inducer's amplitude. When the eyes were kept fixated on the test target, the bias decreased tenfold to around 4%. When size change was the only cue to motion in depth, there was no illusory motion. When the eyes were kept on an inducer moving in the frontoparallel plane, induced motion was of the same order as for induced motion in depth, namely, approximately 44%. When the induced motion was in the frontoparallel plane, we found that perceived stationarity occurred at approximately 23% of the inducer's amplitude when the eyes were kept on the test target.
Mark B. Neider; Gregory J. Zelinsky
Exploring set size effects in scenes: Identifying the objects of search Journal Article
In: Visual Cognition, vol. 16, no. 1, pp. 1–10, 2008.
Traditional search paradigms utilize simple displays, allowing a precise determination of set size. However, objects in realistic scenes are largely uncountable, and typically visually and semantically complex. Can traditional conceptions of set size be applied to search in realistic scenes? Observers searched quasirealistic scenes for a tank target hidden among tree distractors varying in number and density. Search efficiency improved as trees were added to the display, a reverse set size effect. Eye movement analyses revealed that observers fixated individual trees when the set size was small, and the open regions between trees when the set size was large. Rather than a set size consisting of objectively countable objects, we interpret these data as evidence for a restricted functional set size consisting of idiosyncratically defined objects of search. Observers exploit low-level perceptual grouping processes and high-level semantic scene constraints to dynamically create objects that are appropriate to a given search task.
Wolfgang Einhäuser; Ueli Rutishauser; Christof Koch
Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli Journal Article
In: Journal of Vision, vol. 8, no. 2, pp. 1–19, 2008.
In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven ("bottom-up") and task-dependent ("top-down") factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye-movements in human observers, while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free-viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability to either side of the stimulus. When the target always occurred in the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands do not only override sensory-driven saliency but also actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Whereas the bias was less persistent in free viewing, the overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast ("oddity") instead of the bull's-eye ("template"). In contrast to the other conditions, a slight sensory-driven free-viewing bias prevailed in this condition. In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency.
Wolfgang Einhäuser; Merrielle Spain; Pietro Perona
Objects predict fixations better than early saliency Journal Article
In: Journal of Vision, vol. 8, no. 14, pp. 1–26, 2008.
Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.
Wolfgang Einhäuser; James Stout; Christof Koch; Olivia Carter
Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry Journal Article
In: Proceedings of the National Academy of Sciences, vol. 105, no. 5, pp. 1704–1709, 2008.
During sustained viewing of an ambiguous stimulus, an individual's perceptual experience will generally switch between the different possible alternatives rather than stay fixed on one interpretation (perceptual rivalry). Here, we measured pupil diameter while subjects viewed different ambiguous visual and auditory stimuli. For all stimuli tested, pupil diameter increased just before the reported perceptual switch, and the relative amount of dilation before this switch was a significant predictor of the subsequent duration of perceptual stability. These results could not be explained by blink or eye-movement effects, the motor response, or stimulus-driven changes in retinal input. Because pupil dilation reflects levels of norepinephrine (NE) released from the locus coeruleus (LC), we interpret these results as suggesting that the LC-NE complex may play the same role in perceptual selection as in behavioral decision making.
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold
Visual selective attention among persons with schizophrenia: The distractor ratio effect Journal Article
In: Schizophrenia Research, vol. 105, pp. 61–67, 2008.
The current study investigated whether impaired visual attention among patients with schizophrenia can be accounted for by poor perceptual organization and impaired search selectivity. Twenty-three patients with schizophrenia and 22 healthy control participants completed a conjunctive visual search task where the relative frequency of the two types of distractors was manipulated. It has been shown that, when the total number of items in a display is fixed, search performance depends on the relative frequency of the types of distractors (i.e., as the ratio becomes more discrepant, search time decreases). This modulation of search efficiency reflects participants' ability to group items by their perceptual similarity and then search only the smaller group of items that share a feature with the target. Results show that patients modulate their response time normally as a function of the distractor ratio – that is, they benefit from the presence of a smaller distractor subset in the display. This suggests that patients with schizophrenia group items according to their perceptual similarity and flexibly deploy their attention to the smaller subset of distractors on each trial. These results demonstrate that search selectivity as a function of the relative frequency of distractors is unimpaired among patients with schizophrenia.
Ralf Engbert; Antje Nuthmann
Self-consistent estimation of mislocated fixations during reading Journal Article
In: PLoS ONE, vol. 3, no. 2, pp. e1534, 2008.
During reading, we generate saccadic eye movements to move words into the center of the visual field for word processing. However, due to systematic and random errors in the oculomotor system, distributions of within-word landing positions are rather broad and show overlapping tails, which suggests that a fraction of fixations is mislocated and falls on words to the left or right of the selected target word. Here we propose a new procedure for the self-consistent estimation of the likelihood of mislocated fixations in normal reading. Our approach is based on iterative computation of the proportions of several types of oculomotor errors, the underlying probabilities for word-targeting, and corrected distributions of landing positions. We found that the average fraction of mislocated fixations ranges from about 10% to more than 30% depending on word length. These results show that fixation probabilities are strongly affected by oculomotor errors.
Paola Escudero; Rachel Hayes-Harb; Holger Mitterer
Novel second-language words and asymmetric lexical access Journal Article
In: Journal of Phonetics, vol. 36, no. 2, pp. 345–360, 2008.
The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /ɛ/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /ɛ/ symmetrically, i.e., both /æ/ and /ɛ/ auditory tokens triggered looks to pictures containing both /æ/ and /ɛ/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /ɛ/ when presented with /ɛ/ target tokens, but looked at pictures of words containing both /æ/ and /ɛ/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words.