All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some from early 2024) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2011 |
Kohske Takahashi; Haruaki Fukuda; Hanako Ikeda; Hirokazu Doi; Katsumi Watanabe; Kazuhiro Ueda; Kazuyuki Shinohara Roles of the upper and lower bodies in direction discrimination of point-light walkers Journal Article In: Journal of Vision, vol. 11, no. 14, pp. 1–13, 2011. @article{Takahashi2011, We can easily recognize human movements from very limited visual information (biological motion perception). The present study investigated how upper and lower body areas contribute to direction discrimination of a point-light (PL) walker. Observers judged the direction that the PL walker was facing. The walker performed either normal walking or hakobi, a walking style used in traditional Japanese performing arts, in which the amount of the local motion of extremities is much smaller than that in normal walking. Either the upper, lower, or full body of the PL walker was presented. Discrimination performance was found to be better for the lower body than for the upper body. We also found that discrimination performance for the lower body was affected by walking style and/or the amount of local motion signals. Additional eye movement analyses indicated that the observers initially inspected the region corresponding to the upper body, and then the gaze shifted toward the lower body. This held true even when the upper body was absent. We conjectured that the upper body served to localize the PL walker and the lower body to discriminate walking direction. We concluded that the upper and lower bodies play different roles in direction discrimination of a PL walker. |
Debra Titone; Maya R. Libben; Julie Mercier; Veronica Whitford; Irina Pivneva Bilingual lexical access during L1 sentence reading: The effects of L2 knowledge, semantic constraint, and L1-L2 intermixing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 6, pp. 1412–1431, 2011. @article{Titone2011, Libben and Titone (2009) recently observed that cognate facilitation and interlingual homograph interference were attenuated by increased semantic constraint during bilingual second language (L2) reading, using eye movement measures. We now investigate whether cross-language activation also occurs during first language (L1) reading as a function of age of L2 acquisition and task demands (i.e., inclusion of L2 sentences). In Experiment 1, participants read high and low constraint English (L1) sentences containing interlingual homographs, cognates, or control words. In Experiment 2, we included French (L2) filler sentences to increase salience of the L2 during L1 reading. The results suggest that bilinguals reading in their L1 show nonselective activation to the extent that they acquired their L2 early in life. Similar to our previous work on L2 reading, high contextual constraint attenuated cross-language activation for cognates. The inclusion of French filler items promoted greater cross-language activation, especially for late stage reading measures. Thus, L1 bilingual reading is modulated by L2 knowledge, semantic constraint, and task demands. |
K. Torab; T. S. Davis; D. J. Warren; Paul A. House; R. A. Normann; Bradley Greger Multiple factors may influence the performance of a visual prosthesis based on intracortical microstimulation: Nonhuman primate behavioural experimentation Journal Article In: Journal of Neural Engineering, vol. 8, no. 3, pp. 1–13, 2011. @article{Torab2011, We hypothesize that a visual prosthesis capable of evoking high-resolution visual perceptions can be produced using high-electrode-count arrays of penetrating microelectrodes implanted into the primary visual cortex of a blind human subject. To explore this hypothesis, and as a prelude to human psychophysical experiments, we have conducted a set of experiments in primary visual cortex (V1) of non-human primates using chronically implanted Utah Electrode Arrays (UEAs). The electrical and recording properties of implanted electrodes, the high-resolution visuotopic organization of V1, and the stimulation levels required to evoke behavioural responses were measured. The impedances of stimulated electrodes were found to drop significantly immediately following stimulation sessions, but these post-stimulation impedances returned to pre-stimulation values by the next experimental session. Two months of periodic microstimulation at currents of up to 96 µA did not impair the mapping of receptive fields from local field potentials or multi-unit activity, or impact behavioural visual thresholds of light stimuli that excited regions of V1 that were implanted with UEAs. These results demonstrate that microstimulation at the levels used did not cause functional impairment of the electrode array or the neural tissue. However, microstimulation with current levels ranging from 18 to 76 µA (46 ± 19 µA, mean ± std) was able to elicit behavioural responses on eight out of 82 systematically stimulated electrodes. 
We suggest that the ability of microstimulation to evoke phosphenes and elicit a subsequent behavioural response may depend on several factors: the location of the electrode tips within the cortical layers of V1, distance of the electrode tips to neuronal somata, and the inability of nonhuman primates to recognize and respond to a generalized set of evoked percepts. |
Annie Tremblay Learning to parse liaison-initial words: An eye-tracking study Journal Article In: Bilingualism: Language and Cognition, vol. 14, no. 3, pp. 257–279, 2011. @article{Tremblay2011, This study investigates the processing of resyllabified words by native English speakers at three proficiency levels in French and by native French speakers. In particular, it examines non-native listeners' development of a parsing procedure for recognizing vowel-initial words in the context of liaison, a process that creates a misalignment of the syllable and word boundaries in French. The participants completed an eye-tracking experiment in which they identified liaison- and consonant-initial real and nonce words in auditory stimuli. The results show that the non-native listeners had little difficulty recognizing liaison-initial real words, and they recognized liaison-initial nonce words more rapidly than consonant-initial ones. By contrast, native listeners recognized consonant-initial real and nonce words more rapidly than liaison-initial ones. These results suggest that native and non-native listeners used different parsing procedures for recognizing liaison-initial words in the task, with the non-native listeners' ability to segment liaison-initial words being phonologically abstract rather than lexical. |
Leanne Trick; Lee Hogarth; Theodora Duka Prediction and uncertainty in human Pavlovian to instrumental transfer Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 3, pp. 757–765, 2011. @article{Trick2011, Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively. Participants then went on to separate instrumental training in which a key-press response canceled the aversive noise with a .5 probability on a variable interval schedule. Finally, in the transfer phase, the 3 Pavlovian stimuli were presented in this instrumental schedule and were no longer differentially predictive of the outcome. Observing times and gaze dwell time indexed attention to these stimuli in both training and transfer. Aware participants acquired veridical outcome expectancies in training–that is, A > B > C, and these expectancies persisted into transfer. Most important, the transfer effect accorded with these expectancies, A > B > C. By contrast, observing times accorded with uncertainty–that is, they showed B > A = C during training, and B < A = C in the transfer phase. Dwell time bias supported this association between attention and uncertainty, although these data showed a slightly more complicated pattern. Overall, the study suggests that transfer is linked to outcome prediction and is dissociated from attention to conditioned stimuli, which is linked to outcome uncertainty. |
Cara Tsang; Craig G. Chambers Appearances aren't everything: Shape classifiers and referential processing in Cantonese Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1065–1080, 2011. @article{Tsang2011, Cantonese shape classifiers encode perceptual information that is characteristic of their associated nouns, although certain nouns are exceptional. For example, the classifier tiu occurs primarily with nouns for long-narrow-flexible objects (e.g., scarves, snakes, and ropes) and also occurs with the noun for a (short, rigid) key. In 3 experiments, we explored how the semantic information encoded in shape classifiers influences language comprehension. When judging the fit between classifiers and depicted objects in an explicit ranking task, Cantonese speakers evaluated classifier-noun pairings solely in terms of grammatical well-formedness and showed no separate sensitivity to the shape features of objects. In an eye-tracking task (Experiment 2), we also found little sensitivity to shape classifier semantics during real-time comprehension. However, in a subsequent experiment in which referent objects lacked the prototypical features for their accompanying classifiers (Experiment 3), an influence of shape semantics was found in participants' incidental fixations to nontarget objects. We conclude that shape classifiers influence referential interpretation primarily through their grammatical constraints, consistent with the agreementlike nature of classifiers in general. The role of shape classifiers' semantics on processing is apparent only in specific circumstances. |
Luminita Tarita-Nistor; Michael H. Brent; Martin J. Steinbach; Esther G. González Fixation stability during binocular viewing in patients with age-related macular degeneration Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 3, pp. 1887–1893, 2011. @article{TaritaNistor2011, PURPOSE: The authors examined the fixation stability of patients with age-related macular degeneration (AMD) and large interocular acuity differences, testing them in monocular and binocular viewing conditions. The relationship between fixation stability and visual performance during monocular and binocular viewing was also studied. METHODS: Twenty patients with AMD participated. Their monocular and binocular distance acuities were measured with the ETDRS charts. Fixation stability of the better and worse eye were recorded monocularly with the MP-1 microperimeter (Nidek Technologies Srl., Vigonza, PD, Italy) and binocularly with an EyeLink eye tracker (SR Research Ltd., Mississauga, Ontario, Canada). Additional recordings of monocular fixations were obtained with the EyeLink in viewing conditions when one eye viewed the target while the fellow eye was covered by an infrared filter so it could not see the target. RESULTS: Fixation stability of the better eye did not change across viewing conditions. Fixation stability of the worse eye was 84% to 100% better in the binocular condition than in monocular conditions. Fixation stability of the worse eye was significantly larger (P < 0.05) than that of the better eye when recorded monocularly with the MP-1 microperimeter. This difference was dramatically reduced in the binocular condition but remained marginally significant (95% confidence interval, -0.351 to -0.006). For the better eye, there was a moderate relationship between fixation stability and visual acuity, both monocular and binocular, in all conditions in which this eye viewed the target. 
CONCLUSIONS: Fixational ocular motor control and visual acuity are driven by the better-seeing eye when patients with AMD and large interocular acuity differences perform the tasks binocularly. |
Alisdair J. G. Taylor; Samuel B. Hutton Error awareness and antisaccade performance Journal Article In: Experimental Brain Research, vol. 213, no. 1, pp. 27–34, 2011. @article{Taylor2011, In the antisaccade task, healthy participants often make errors by saccading towards the sudden onset target, despite instructions to saccade to the mirror image location. One interesting and relatively unexplored feature of antisaccade performance is that participants are typically unaware of a large proportion of the errors they make. Across two experiments, we explored the extent to which error awareness is altered by manipulations known to affect antisaccade error rate. In experiment 1, participants performed the antisaccade task under standard instructions, instructions to respond as quickly as possible, or instructions to delay responding. Delay instructions significantly reduced antisaccade error rate compared to the other instructions, but this reduction was driven by a decrease only in the number of errors that participants were aware of; the number of errors of which participants were unaware remained constant across instruction conditions. In experiment 2, participants performed antisaccades alone, or concurrently with two different distractor tasks: spatial tapping and random number generation. Both the dual task conditions increased the number of errors of which participants were aware, but again, unaware error rates remained unchanged. These results are discussed in the light of recent models of antisaccade performance and the role of conscious awareness in error correction. |
Masahiko Terao; Ikuya Murakami Compensation for equiluminant color motion during smooth pursuit eye movement Journal Article In: Journal of Vision, vol. 11, no. 6, pp. 1–12, 2011. @article{Terao2011, Motion perception is compromised at equiluminance. Because previous investigations have been primarily carried out under fixation conditions, it remains unknown whether and how equiluminant color motion comes into play in the velocity compensation for retinal image motion due to smooth pursuit eye movement. We measured the retinal image velocity required to reach subjective stationarity for a horizontally drifting sinusoidal grating in the presence of horizontal smooth pursuit. The grating was defined by luminance or chromatic modulation. When the subjective stationarity of the color motion was shifted toward environmental stationarity, compared with the subjective stationarity of luminance motion, that of color motion was farther from retinal stationarity, indicating that a slowing of color motion occurred before this factor contributed to the process by which retinal motion was integrated with a biological estimate of eye velocity during pursuit. The gain in the estimate of eye velocity per se was unchanged irrespective of whether the stimulus was defined by luminance or by color. Indeed, the subjective reduction in the speed of color motion during fixation was accounted for by the same amount of deterioration in speed. From these results, we conclude that the motion deterioration at equiluminance takes place prior to the velocity comparison. |
Katharine N. Thakkar; Jeffrey D. Schall; Leanne Boucher; Gordon D. Logan; Sohee Park Response inhibition and response monitoring in a saccadic countermanding task in schizophrenia Journal Article In: Biological Psychiatry, vol. 69, no. 1, pp. 55–62, 2011. @article{Thakkar2011, Background: Cognitive control deficits are pervasive in individuals with schizophrenia (SZ) and are reliable predictors of functional outcome, but the specificity of these deficits and their underlying neural mechanisms have not been fully elucidated. The objective of the present study was to determine the nature of response inhibition and response monitoring deficits in SZ and their relationship to symptoms and social and occupational functioning with a behavioral paradigm that provides a translational approach to investigating cognitive control. Methods: Seventeen patients with SZ and 16 demographically matched healthy control subjects participated in a saccadic countermanding task. Performance on this task is approximated as a race between movement generation and inhibition processes; this race model provides an estimate of the time needed to cancel a planned movement. Response monitoring can be assessed by reaction time adjustments on the basis of trial history. Results: Saccadic reaction time was normal, but patients required more time to inhibit a planned saccade. The latency of the inhibitory process was associated with the severity of negative symptoms and poorer occupational functioning. Both groups slowed down significantly after correctly cancelled and erroneously noncancelled stop signal trials, but patients slowed down more than control subjects after correctly inhibited saccades. Conclusions: These results suggest that SZ is associated with a difficulty in inhibiting planned movements and an inflated response adjustment effect after inhibiting a saccade. 
Furthermore, behavioral results are consistent with potential abnormalities in frontal and supplementary eye fields in patients with SZ. |
Lore Thaler; Melvyn A. Goodale Reaction times for allocentric movements are 35 ms slower than reaction times for target-directed movements Journal Article In: Experimental Brain Research, vol. 211, no. 2, pp. 313–328, 2011. @article{Thaler2011, Many movements that people perform every day are directed at visual targets, e.g., when we press an elevator button. However, many other movements are not target-directed, but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing or copying. Here, we show a reaction time difference between these two types of movements in four separate experiments. In Exp. 1, subjects moved their eyes freely and used direct hand movements. In Exp. 2, subjects moved their eyes freely and their movements were tool-mediated (computer mouse). In Exp. 3, subjects fixated a central target and the visual field in which visual information was presented was manipulated. Experiment 4 was identical to Exp. 3 except for the fact that visual information about targets disappeared before movement onset. In all four experiments, reaction times in the allocentric task were approximately 35 ms slower than they were in the target-directed task. We suggest that this difference in reaction time between the two tasks reflects the fact that allocentric, but not target-directed, movements recruit the ventral stream, in particular lateral occipital cortex, which increases processing time. We also observed an advantage for movements made in the lower visual field as measured by movement variability, whether those movements were allocentric or target-directed. This latter result, we argue, reflects the role of the dorsal visual stream in the online control of movements in both kinds of tasks. |
Mervyn G. Thomas; Moira Crosier; Susan Lindsay; Anil Kumar; Shery Thomas; Masasuke Araki; Chris J. Talbot; Rebecca J. McLean; Mylvaganam Surendran; Katie Taylor; Bart P. Leroy; Anthony T. Moore; David G. Hunter; Richard W. Hertle; Patrick Tarpey; Andrea Langmann; Susanne Lindner; Martina Brandner; Irene Gottlob The clinical and molecular genetic features of idiopathic infantile periodic alternating nystagmus Journal Article In: Brain, vol. 134, no. 3, pp. 892–902, 2011. @article{Thomas2011, Periodic alternating nystagmus consists of involuntary oscillations of the eyes with cyclical changes of nystagmus direction. It can occur during infancy (e.g. idiopathic infantile periodic alternating nystagmus) or later in life. Acquired forms are often associated with cerebellar dysfunction arising due to instability of the optokinetic-vestibular systems. Idiopathic infantile periodic alternating nystagmus can be familial or occur in isolation; however, very little is known about the clinical characteristics, genetic aetiology and neural substrates involved. Five loci (NYS1-5) have been identified for idiopathic infantile nystagmus; three are autosomal (NYS2, NYS3 and NYS4) and two are X-chromosomal (NYS1 and NYS5). We previously identified the FRMD7 gene on chromosome Xq26 (NYS1 locus); mutations of FRMD7 are causative of idiopathic infantile nystagmus influencing neuronal outgrowth and development. It is unclear whether the periodic alternating nystagmus phenotype is linked to NYS1, NYS5 (Xp11.4-p11.3) or a separate locus. From a cohort of 31 X-linked families and 14 singletons (70 patients) with idiopathic infantile nystagmus we identified 10 families and one singleton (21 patients) with periodic alternating nystagmus of which we describe clinical phenotype, genetic aetiology and neural substrates involved. Periodic alternating nystagmus was not detected clinically but only on eye movement recordings. The cycle duration varied from 90 to 280 s. 
Optokinetic reflex was not detectable horizontally. Mutations of the FRMD7 gene were found in all 10 families and the singleton (including three novel mutations). Periodic alternating nystagmus was predominantly associated with missense mutations within the FERM domain. There was significant sibship clustering of the phenotype although in some families not all affected members had periodic alternating nystagmus. In situ hybridization studies during mid-late human embryonic stages in normal tissue showed restricted FRMD7 expression in neuronal tissue with strong hybridization signals within the afferent arms of the vestibulo-ocular reflex consisting of the otic vesicle, cranial nerve VIII and vestibular ganglia. Similarly within the afferent arm of the optokinetic reflex we showed expression in the developing neural retina and ventricular zone of the optic stalk. Strong FRMD7 expression was seen in rhombomeres 1 to 4, which give rise to the cerebellum and the common integrator site for both these reflexes (vestibular nuclei). Based on the expression and phenotypic data, we hypothesize that periodic alternating nystagmus arises from instability of the optokinetic-vestibular systems. This study shows for the first time that mutations in FRMD7 can cause idiopathic infantile periodic alternating nystagmus and may affect neuronal circuits that have been implicated in acquired forms. |
Mervyn G. Thomas; Irene Gottlob; Rebecca J. McLean; Gail Maconachie; Anil Kumar; Frank A. Proudlock Reading strategies in infantile nystagmus syndrome Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 11, pp. 8156–8165, 2011. @article{Thomas2011a, PURPOSE: The adaptive strategies adopted by individuals with infantile nystagmus syndrome (INS) during reading are not clearly understood. Eye movement recordings were used to identify ocular motor strategies used by patients with INS during reading. METHODS: Eye movements were recorded at 500 Hz in 25 volunteers with INS and 7 controls when reading paragraphs of text centered at horizontal gaze angles of -20°, -10°, 0°, 10°, and 20°. At each location, reading speeds were measured, along with logMAR visual acuity and nystagmus during gaze-holding. Adaptive strategies were identified from slow and quick-phase patterns in the nystagmus waveform. RESULTS: Median reading speeds were 204.3 words per minute in individuals with INS and 273.6 words per minute in controls. Adaptive strategies included (1) suppression of corrective quick phases allowing involuntary slow phases to achieve the desired goal, (2) voluntarily changing the character of the involuntary slow phases using quick phases, and (3) correction of involuntary slow phases using quick phases. Several individuals with INS read more rapidly than healthy control volunteers. CONCLUSIONS: These findings demonstrate that volunteers with INS learn to manipulate their nystagmus using a range of strategies to acquire visual information from the text. These strategies include taking advantage of the stereotypical and periodic nature of involuntary eye movements to allow the involuntary eye movements to achieve the desired goal. The versatility of these adaptations yields reading speeds in those with nystagmus that are often much better than might be expected, given the degree of foveal and ocular motor deficits. |
Hua Shu; Wei Zhou; Ming Yan; Reinhold Kliegl Font size modulates saccade-target selection in Chinese reading Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 2, pp. 482–490, 2011. @article{Shu2011, In alphabetic writing systems, saccade amplitude (a close correlate of reading speed) is independent of font size, presumably because an increase in the angular size of letters is compensated for by a decrease of visual acuity with eccentricity. We propose that this invariance may (also) be due to the presence of spaces between words, guiding the eyes across a large range of font sizes. Here, we test whether saccade amplitude is also invariant against manipulations of font size during reading Chinese, a character-based writing system without spaces as explicit word boundaries for saccade-target selection. In contrast to word-spaced alphabetic writing systems, saccade amplitude decreased significantly with increased font size, leading to an increase in the number of fixations at the beginning of words and in the number of refixations. These results are consistent with a model which assumes that word beginning (rather than word center) is the default saccade target if the length of the parafoveal word is not available. |
Alisha Siebold; Wieske van Zoest; Mieke Donk Oculomotor evidence for top-down control following the initial saccade Journal Article In: PLoS ONE, vol. 6, no. 9, pp. e23552, 2011. @article{Siebold2011, The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements with increasing response times. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus-salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter. |
Sara Ann Simpson; Mathias Abegg; Jason J. S. Barton Rapid adaptation of visual search in simulated hemianopia Journal Article In: Cerebral Cortex, vol. 21, no. 7, pp. 1593–1601, 2011. @article{Simpson2011, Patients with homonymous hemianopia have altered visual search patterns, but it is unclear how rapidly this develops and whether it reflects a strategic adaptation to altered perception or plastic changes to tissue damage. To study the temporal dynamics of adaptation alone, we used a gaze-contingent display to simulate left or right hemianopia in 10 healthy individuals as they performed 25 visual search trials. Visual search was slower and less accurate in hemianopic than in full-field viewing. With full-field viewing, there were improvements in search speed, fixation density, and number of fixations over the first 9 trials, then stable performance. With hemianopic viewing, there was a rapid shift of fixation into the blind field over the first 5-7 trials, followed by continuing gradual improvements in completion time, number of fixations, and fixation density over all 25 trials. We conclude that in the first minutes after onset of hemianopia, there is a biphasic pattern of adaptation to altered perception: an early rapid qualitative change that shifts visual search into the blind side, followed by more gradual gains in the efficiency of using this new strategy, a pattern that has parallels in other studies of motor learning. |
Chris R. Sims; Robert A. Jacobs; David C. Knill Adaptive allocation of vision under competing task demands Journal Article In: Journal of Neuroscience, vol. 31, no. 3, pp. 928–943, 2011. @article{Sims2011, Human behavior in natural tasks consists of an intricately coordinated dance of cognitive, perceptual, and motor activities. Although much research has progressed in understanding the nature of cognitive, perceptual, or motor processing in isolation or in highly constrained settings, few studies have sought to examine how these systems are coordinated in the context of executing complex behavior. Previous research has suggested that, in the course of visually guided reaching movements, the eye and hand are yoked, or linked in a nonadaptive manner. In this work, we report an experiment that manipulated the demands that a task placed on the motor and visual systems, and then examined in detail the resulting changes in visuomotor coordination. We develop an ideal actor model that predicts the optimal coordination of vision and motor control in our task. On the basis of the predictions of our model, we demonstrate that human performance in our experiment reflects an adaptive response to the varying costs imposed by our experimental manipulations. Our results stand in contrast to previous theories that have assumed a fixed control mechanism for coordinating vision and motor control in reaching behavior. |
Petra Sinn; Ralf Engbert Saccadic facilitation by modulation of microsaccades in natural backgrounds Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 4, pp. 1029–1033, 2011. @article{Sinn2011, Saccades move objects of interest into the center of the visual field for high-acuity visual analysis. White, Stritzke, and Gegenfurtner (Current Biology, 18, 124-128, 2008) have shown that saccadic latencies in the context of a structured background are much shorter than those with an unstructured background at equal levels of visibility. This effect has been explained by possible preactivation of the saccadic circuitry whenever a structured background acts as a mask for potential saccade targets. Here, we show that background textures modulate rates of microsaccades during visual fixation. First, after a display change, structured backgrounds induce a stronger decrease of microsaccade rates than do uniform backgrounds. Second, we demonstrate that the occurrence of a microsaccade in a critical time window can delay a subsequent saccadic response. Taken together, our findings suggest that microsaccades contribute to the saccadic facilitation effect, due to a modulation of microsaccade rates by properties of the background. |
Anna Siyanova-Chanturia; Kathy Conklin; Norbert Schmitt Adding more fuel to the fire: An eye-tracking study of idiom processing by native and non-native speakers Journal Article In: Second Language Research, vol. 27, no. 2, pp. 251–272, 2011. @article{SiyanovaChanturia2011, Using eye-tracking, we investigate on-line processing of idioms in a biasing story context by native and non-native speakers of English. The stimuli are idioms used figuratively ("at the end of the day"–"eventually"), literally ("at the end of the day"–"in the evening"), and novel phrases ("at the end of the war"). Native speaker results indicate a processing advantage for idioms over novel phrases, as evidenced by fewer and shorter fixations. Further, no processing advantage is found for figurative idiom uses over literal ones in a full idiom analysis or in a recognition point analysis. Contrary to native speaker results, non-native findings suggest that L2 speakers process idioms at a similar speed to novel phrases. Further, figurative uses are processed more slowly than literal ones. Importantly, the recognition point analysis allows us to establish where non-natives slow down when processing the figurative meaning. |
Anna Siyanova-Chanturia; Kathy Conklin; Walter J. B. van Heuven Seeing a phrase "time and again" matters: The role of phrasal frequency in the processing of multiword sequences Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 3, pp. 776–784, 2011. @article{SiyanovaChanturia2011a, Are speakers sensitive to the frequency with which phrases occur in language? The authors report an eye-tracking study that investigates this by examining the processing of multiword sequences that differ in phrasal frequency by native and proficient nonnative English speakers. Participants read sentences containing 3-word binomial phrases (bride and groom) and their reversed forms (groom and bride), which are identical in syntax and meaning but that differ in phrasal frequency. Mixed-effects modeling revealed that native speakers and nonnative speakers, across a range of proficiencies, are sensitive to the frequency with which phrases occur in English. Results also indicate that native speakers and higher proficiency nonnatives are sensitive to whether a phrase occurs in a particular configuration (binomial vs. reversed) in English, highlighting the contribution of entrenchment of a particular phrase in memory. |
Timothy J. Slattery; Bernhard Angele; Keith Rayner Eye movements and display change detection during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 6, pp. 1924–1938, 2011. @article{Slattery2011, In the boundary change paradigm (Rayner, 1975), when a reader's eyes cross an invisible boundary location, a preview word is replaced by a target word. Readers are generally unaware of such changes due to saccadic suppression. However, some readers detect changes on a few trials and a small percentage of them detect many changes. Two experiments are reported in which we combined eye movement data with signal detection analyses to investigate display change detection. On each trial, readers had to indicate if they saw a display change in addition to reading for meaning. On half the trials the display change occurred during the saccade (immediate condition); on the other half, it was slowed by 15-25 ms (delay condition) to increase the likelihood that a change would be detected. Sentences were presented in an alternating case fashion allowing us to investigate the influence of both letter identity and case. In the immediate condition, change detection was higher when letters changed than when case changed corroborating findings that word processing utilizes abstract (case independent) letter identities. However, in the delay condition (where d' was much higher than the immediate condition), detection was equal for letter and case changes. The results of both experiments indicate that sensitivity to display changes was related to how close the eyes were to the invalid preview on the fixation prior to the display change, as well as the timing of the completion of this change relative to the start of the post-change fixation. |
Timothy J. Slattery; Elizabeth R. Schotter; Raymond W. Berry; Keith Rayner Parafoveal and foveal processing of abbreviations during eye fixations in reading: Making a case for case Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 4, pp. 1022–1031, 2011. @article{Slattery2011a, The processing of abbreviations in reading was examined with an eye movement experiment. Abbreviations were of 2 distinct types: acronyms (abbreviations that can be read with the normal grapheme-phoneme correspondence [GPC] rules, such as NASA) and initialisms (abbreviations in which the GPCs are letter names, such as NCAA). Parafoveal and foveal processing of these abbreviations was assessed with the use of the boundary change paradigm (K. Rayner, 1975). Using this paradigm, previews of the abbreviations were either identical to the abbreviation (NASA or NCAA), orthographically legal (NUSO or NOBA), or illegal (NRSB or NRBA). The abbreviations were presented as capital letter strings within normal, predominantly lowercase sentences and also sentences in all capital letters such that the abbreviations would not be visually distinct. The results indicate that acronyms and initialisms undergo different processing during reading and that readers can modulate their processing based on low-level visual cues (distinct capitalization) in parafoveal vision. In particular, readers may be biased to process capitalized letter strings as initialisms in parafoveal vision when the rest of the sentence is normal, lowercase letters. |
Elke Smeets; Anita Jansen; Anne Roefs Bias for the (un)attractive self: On the role of attention in causing body (dis)satisfaction Journal Article In: Health Psychology, vol. 30, no. 3, pp. 360–367, 2011. @article{Smeets2011, Objective: Body dissatisfaction plays a key role in the maintenance of eating disorders, and selective attention might be crucial for the origin of body dissatisfaction. A. Jansen, C. Nederkoorn, and S. Mulkens (2005) showed that eating disorder patients attend relatively more to their own unattractive body parts, whereas healthy controls attend relatively more to their own attractive body parts. In 2 studies, we investigated whether this bias in selective attention is causal to body dissatisfaction and whether an experimentally induced bias for attractive body parts might lead to increased body satisfaction in women who are highly dissatisfied with their bodies. Design: We used a between-subjects design in which participants were trained to attend to either their self-defined unattractive body parts or their self-defined attractive body parts by use of an eye tracker. Main Outcome Measures: State body and weight satisfaction. Results: Inducing a temporary attentional bias for self-defined unattractive body parts led to a significant decrease in body satisfaction and teaching body-dissatisfied women to attend to their own attractive body parts led to a significant increase in body satisfaction. Conclusion: Selective attention for unattractive body parts can play a role in the development of body dissatisfaction, and changing the way one looks may be a new way for improving body dissatisfaction in women. |
Nicholas D. Smith; David P. Crabb; David F. Garway-Heath An exploratory study of visual search performance in glaucoma Journal Article In: Ophthalmic and Physiological Optics, vol. 31, no. 3, pp. 225–232, 2011. @article{Smith2011, PURPOSE: Visual search plays an integral role in many daily activities. This study aimed to determine whether patients with glaucoma are slower than visually healthy age-matched individuals when searching for items in computer displayed images. METHODS: Forty participants were recruited for the study: 20 patients with a clinical diagnosis of glaucoma and 20 age-similar visually healthy control subjects. All participants had visual acuity of 6/12 or better. Participants were presented with 20 images with Landolt C symbols and 15 photographic images of everyday scenes on a computer. The time taken by each participant to locate a specified item in each image was recorded. Average search times were calculated across participants and compared between groups. RESULTS: All the patients had visual field defects in both eyes. On average, the patients also differed from control subjects by binocular contrast sensitivity measurements (p = 0.01) and visual acuity (p = 0.003). The patients (mean age = 67 years, S.D.: 10 years) and controls (mean age: 67 years, S.D.: 11 years) were age similar (p = 0.40). The median search time for patients finding target items in photographs of everyday scenes was 15.2 s (interquartile range 9.4-20.6 s) and this was significantly slower than the median time (10.0 s; interquartile range 7.2-10.3 s) taken by the controls (p = 0.007). There was no statistical evidence for a difference in median search times between groups in the Landolt C search task (p = 0.24). CONCLUSION: Some individuals with glaucomatous visual field defects in both eyes find it especially difficult to locate objects in photographs of everyday scenes when compared to visually healthy individuals of a similar age. |
Adrian Staub The effect of lexical predictability on distributions of eye fixation durations Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 2, pp. 371–376, 2011. @article{Staub2011, A word's predictability in context has a well-established effect on fixation durations in reading. To investigate how this effect is manifested in distributional terms, an experiment was carried out in which subjects read each of 50 target words twice, once in a high-predictability context and once in a low-predictability context. The ex-Gaussian distribution was fit to each subject's first-fixation durations and single-fixation durations. For both measures, the μ parameter increased when a word was unpredictable, while the τ parameter was not significantly affected, indicating that a predictability manipulation shifts the distribution of fixation durations but does not affect the degree of skew. Vincentile plots showed that the mean ex-Gaussian parameters described the typical distribution shapes extremely well. These results suggest that the predictability and frequency effects are functionally distinct, since a frequency manipulation has been shown to influence both μ and τ. The results may also be seen as consistent with the finding from single-word recognition paradigms that semantic priming affects only μ. |
Adrian Staub Word recognition and syntactic attachment in reading: Evidence for a staged architecture Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 3, pp. 407–433, 2011. @article{Staub2011a, In 3 experiments, the author examined how readers' eye movements are influenced by joint manipulations of a word's frequency and the syntactic fit of the word in its context. In the critical conditions of the first 2 experiments, a high- or low-frequency verb was used to disambiguate a garden-path sentence, while in the last experiment, a high- or low-frequency verb constituted a phrase structure violation. The frequency manipulation always influenced the early eye movement measures of first-fixation duration and gaze duration. The context manipulation had a delayed effect in Experiment 1, influencing only the probability of a regressive eye movement from later in the sentence. However, the context manipulation influenced the same early eye movement measures as the frequency effect in Experiments 2 and 3, though there was no statistical interaction between the effects of these variables. The context manipulation also influenced the probability of a regressive eye movement from the verb, though the frequency manipulation did not. These results are shown to confirm predictions emerging from the serial, staged architecture for lexical and integrative processing of the E-Z Reader 10 model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). It is argued, more generally, that the results provide an important constraint on how the relationship between visual word recognition and syntactic attachment is treated in processing models. |
Maria Staudte; Matthew W. Crocker Investigating joint attention mechanisms through spoken human-robot interaction Journal Article In: Cognition, vol. 120, no. 2, pp. 268–291, 2011. @article{Staudte2011, Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human-robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker's referential intentions which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similar to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms. |
Genevieve Z. Steiner; Robert J. Barry Pupillary responses and event-related potentials as indices of the orienting reflex Journal Article In: Psychophysiology, vol. 48, no. 12, pp. 1648–1655, 2011. @article{Steiner2011, This study examined skin conductance responses, the late positive complex of the event-related potential, and pupillary dilation responses as autonomic and central correlates of the orienting reflex (OR) in the context of indifferent and significant stimuli. In particular, we aimed to clarify the inconsistencies surrounding the pupillary dilation response as an OR index. An auditory dishabituation paradigm was employed, and physiological measures were recorded from 24 participants. Response decrement to a repeated stimulus, response recovery to a change stimulus, and subsequent dishabituation were assessed. Findings confirmed expectations that the skin conductance response and the late positive complex are indices of the OR. The pupillary dilation response, however, demonstrated an unexpected sensitivity to stimulus novelty only, while the prestimulus measure of tonic pupil diameter showed the significance effect that was expected of the phasic measure. Together, these findings argue against the suggestion that the pupillary dilation response is an OR index. The diverse results obtained from this experiment contribute to our understanding of the OR, and provide impetus for further research with a variety of paradigm manipulations. |
Mike Stieff; Mary Hegarty; Ghislain Deslongchamps Identifying representational competence with multi-representational displays Journal Article In: Cognition and Instruction, vol. 29, no. 1, pp. 123–145, 2011. @article{Stieff2011, Increasingly, multi-representational educational technologies are being deployed in science classrooms to support science learning and the development of representational competence. Several studies have indicated that students experience significant challenges working with these multi-representational displays and prefer to use only one representation while problem solving. Here, we examine the use of one such display, a multi-representational molecular mechanics animation, by organic chemistry undergraduates in a problem-solving interview. Using both protocol analysis and eye fixation data, our analysis indicates that students rely mainly on two visual-spatial representations in the display and do not make use of two accompanying mathematical representations. Moreover, we explore how eye fixation data complement verbal protocols by providing information about how students allocate their attention to different locations of a multi-representational display with and without concurrent verbal utterances. Our analysis indicates that verbal protocols and eye movement data are highly correlated, suggesting that eye fixations and verbalizations reflect similar cognitive processes in such studies. |
Viola S. Stormer; Shu-Chen Li; Hauke R. Heekeren; Ulman Lindenberger Feature-based interference from unattended visual field during attentional tracking in younger and older adults Journal Article In: Journal of Vision, vol. 11, no. 2, pp. 1–12, 2011. @article{Stormer2011, The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing adults showed poor tracking performance overall. |
Michael J. Stroud; Tamaryn Menneer; Kyle R. Cave; Nick Donnelly; Keith Rayner Search for multiple targets of different colours: Misguided eye movements reveal a reduction of colour selectivity Journal Article In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 971–982, 2011. @article{Stroud2011, Searching for two targets simultaneously is often less efficient than conducting two separate searches. Eye movements were tracked to understand the source of this dual-target cost. Findings are discussed in the context of security screening. In both single-target and dual-target search, displays contained one target at most. Stimuli were abstract shapes modeled after guns and other threat items. With these targets and distractors, color information was more helpful in guiding search than shape information. When the two targets had different colors, distractors with colors different from either target were fixated more often in dual-target search than in single-target searches. Thus a dual-target cost arose from a reduction in color selectivity, reflecting limitations in the ability to represent two target features simultaneously and use them to guide search. Because of these limitations, performance in security searches may improve if each image was searched by two screeners, each specializing in a different category of threat item. |
Tim J. Smith; John M. Henderson Looking back at Waldo: Oculomotor inhibition of return does not prevent return fixations Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 1–11, 2011. @article{Smith2011a, Inhibition of Return (IOR) is a difficulty in processing stimuli presented at recently attended locations. IOR is widely believed to facilitate foraging of a visual scene by decreasing the probability that gaze will return to previously fixated locations. However, there is a lack of clear evidence in support of the foraging facilitator hypothesis during scene search. The original R. M. Klein and W. J. MacInnes (1999) Where's Waldo study reported a forward bias in the distribution of fixations that was taken as evidence for the foraging facilitator hypothesis. The present study was designed to replicate R. M. Klein and W. J. MacInnes (1999) but include detailed analysis of fixation distributions in order to test the precise predictions of the foraging facilitator hypothesis. The results indicate that latencies of saccades returning to 1-back (and possibly 2-back) locations during visual search are elevated. However, there is no evidence that the probability of returning to these locations is significantly less than control locations. Eye movement behavior during search of visual scenes does not support the view that IOR facilitates foraging. |
Tim J. Smith; John M. Henderson Does oculomotor inhibition of return influence fixation probability during scene search? Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 8, pp. 2384–2398, 2011. @article{Smith2011b, Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This "foraging" hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1º of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search. |
Grayden J. F. Solman; J. Allan Cheyne; Daniel Smilek Memory load affects visual search processes without influencing search efficiency Journal Article In: Vision Research, vol. 51, no. 10, pp. 1185–1191, 2011. @article{Solman2011, Participants' eye-movements were monitored while they searched for a target among a varying number of distractors either with or without a concurrent memory load. Consistent with previous findings, adding a memory load slowed response times without affecting search slopes; a finding normally taken to imply that memory load affects pre- and/or post-search processes but not the search process itself. However, when overall response times were decomposed using eye-movement data into pre-search (e.g., initial encoding), search, and post-search (e.g., response selection) phases, analysis revealed that adding a memory load affected all phases, including the search phase. In addition, we report that fixations selected under load were more likely to be distant from search items, and more likely to be close to previously inspected locations. Thus, memory load affects the search process without affecting search slopes. These results challenge standard interpretations of search slopes and main effects in visual search. |
Joo-Hyun Song; Robert D. Rafal; Robert M. McPeek Deficits in reach target selection during inactivation of the midbrain superior colliculus Journal Article In: Proceedings of the National Academy of Sciences, vol. 108, no. 51, pp. E1433–E1440, 2011. @article{Song2011, Purposive action requires the selection of a single movement goal from multiple possibilities. Neural structures involved in movement planning and execution often exhibit activity related to target selection. A key question is whether this activity is specific to the type of movement produced by the structure, perhaps consisting of a competition among effector-specific movement plans, or whether it constitutes a more abstract, effector-independent selection signal. Here, we show that temporary focal inactivation of the primate superior colliculus (SC), an area involved in eye-movement target selection and execution, causes striking target selection deficits for reaching movements, which cannot be readily explained as a simple impairment in visual perception or motor execution. This indicates that target selection activity in the SC does not simply represent a competition among eye-movement goals and, instead, suggests that the SC contributes to a more general purpose priority map that influences target selection for other actions, such as reaches. |
Miriam Spering; Marc Pomplun; Marisa Carrasco Tracking without perceiving: A dissociation between eye movements and motion perception Journal Article In: Psychological Science, vol. 22, no. 2, pp. 216–225, 2011. @article{Spering2011, Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. |
Miriam Spering; Alexander C. Schütz; Doris I. Braun; Karl R. Gegenfurtner Keep your eyes on the ball: Smooth pursuit eye movements enhance prediction of visual motion Journal Article In: Journal of Neurophysiology, vol. 105, no. 4, pp. 1756–1767, 2011. @article{Spering2011a, Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction. |
Andreas Sprenger; Peter Trillenberg; Jonas Pohlmann; Kirsten Herold; Rebekka Lencer; Christoph Helmchen The role of prediction and anticipation on age-related effects on smooth pursuit eye movements Journal Article In: Annals of the New York Academy of Sciences, vol. 1233, pp. 168–176, 2011. @article{Sprenger2011, Externally guided sensory-motor processes deteriorate with increasing age. Internally guided, for example, predictive, behavior usually helps to overcome sensory-motor delays. We studied whether predictive components of visuomotor transformation decline with age. We investigated smooth pursuit eye movements (SPEM) of 45 healthy subjects with paradigms of different degrees of predictability with respect to target motion onset, type (smoothed triangular, ramp stimulation), and direction by blanking the target at various intervals of the ramp stimulation. Using repetitive trials of SPEM stimulation, we could dissociate anticipatory and predictive components of extraretinal smooth pursuit behavior. The main results suggest that basic motor parameters decline with increasing age, whereas both anticipation and prediction of target motion did not change with age. We suggest that the elderly maintain their capability of using prediction in the immediate control of motor behavior, which might be a way to compensate for age-related delays in sensory-motor transformation, even in the absence of sensory signals. |
Aarlenne Zein Khan; Joo-Hyun Song; Robert M. McPeek The eye dominates in guiding attention during simultaneous eye and hand movements Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 1–14, 2011. @article{Khan2011a, Prior to the onset of a saccade or a reach, attention is directed to the goal of the upcoming movement. However, it remains unknown whether attentional resources are shared across effectors for simultaneous eye and hand movements. Using a 4-AFC shape discrimination task, we investigated attentional allocation during the planning of a saccade alone, reach alone, or combined saccade and reach to one of five peripheral locations. Target discrimination was better when the probe appeared at the goal of the impending movement than when it appeared elsewhere. However, discrimination performance at the movement goal was not better for combined eye-hand movements compared to either effector alone, suggesting a shared limited attentional resource rather than separate pools of effector-specific attention. To test which effector dominates in guiding attention, we then separated eye and hand movement goals in two conditions: (1) cued reach/fixed saccade–subjects made saccades to the same peripheral location throughout the block, while the reach goal was cued and (2) cued saccade/fixed reach–subjects made reaches to the same location, while the saccade goal was cued. For both conditions, discrimination performance was consistently better at the eye goal than the hand goal. This indicates that shared attentional resources are guided predominantly by the eye during the planning of eye and hand movements. |
Manizeh Khan; Meredyth Daneman How readers spontaneously interpret man-suffix words: Evidence from eye movements Journal Article In: Journal of Psycholinguistic Research, vol. 40, no. 5, pp. 351–366, 2011. @article{Khan2011, This study investigated whether readers are more likely to assign a male referent to man-suffix terms (e.g. chairman) than to gender-neutral alternatives (e.g., chairperson) during reading, and whether this bias differs as a function of age. Younger and older adults' eye movements were monitored while reading passages containing phrases such as "The chairman/chairperson familiarized herself with…" On-line eye fixation data provided strong evidence that man-suffix words were more likely to evoke the expectation of a male referent in both age groups. Younger readers demonstrated inflated processing times when first encountering herself after chairman relative to chairperson, and they tended to make more regressive fixations to chairman. Older readers did not show the effect when initially encountering herself, but they spent disproportionately longer looking back to chairman and herself. The study provides empirical support for copy-editing policies that mandate the use of explicitly gender-neutral suffix terms in place of man-suffix terms. |
M. M. Kibbe; Eileen Kowler Visual search for category sets: Tradeoffs between exploration and memory Journal Article In: Journal of Vision, vol. 11, no. 3, pp. 1–21, 2011. @article{Kibbe2011, Limitations of working memory force a reliance on motor exploration to retrieve forgotten features of the visual array. A category search task was devised to study tradeoffs between exploration and memory in the face of significant cognitive and motor demands. The task required search through arrays of hidden, multi-featured objects to find three belonging to the same category. Location contents were revealed briefly by either a: (1) mouseclick, or (2) saccadic eye movement with or without delays between saccade offset and object appearance. As the complexity of the category rule increased, search favored exploration, with more visits and revisits needed to find the set. As motor costs increased (mouseclick search or oculomotor search with delays) search favored reliance on memory. Application of the model of J. Epelboim and P. Suppes (2001) to the revisits produced an estimate of immediate memory span (M) of about 4-6 objects. Variation in estimates of M across category rules suggested that search was also driven by strategies of transforming the category rule into concrete perceptual hypotheses. The results show that tradeoffs between memory and exploration in a cognitively demanding task are determined by continual and effective monitoring of perceptual load, cognitive demand, decision strategies and motor effort. |
Tim C. Kietzmann; Stephan Geuter; Peter König Overt visual attention as a causal factor of perceptual awareness Journal Article In: PLoS ONE, vol. 6, no. 7, pp. e22614, 2011. @article{Kietzmann2011} |
Yosuke Kita; Atsuko Gunji; Yuki Inoue; Takaaki Goto; Kotoe Sakihara; Makiko Kaga; Masumi Inagaki; Toru Hosokawa Self-face recognition in children with autism spectrum disorders: A near-infrared spectroscopy study Journal Article In: Brain and Development, vol. 33, no. 6, pp. 494–503, 2011. @article{Kita2011, It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness. |
Albrecht W. Inhoff; Matthew S. Solomon; Ralph Radach; Bradley A. Seymour Temporal dynamics of the eye-voice span and eye movement control during oral reading Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 5, pp. 543–558, 2011. @article{Inhoff2011, The distance between eye movements and articulation during oral reading, commonly referred to as the eye-voice span, has been a classic issue of experimental reading research since Buswell (1921). To examine the influence of the span on eye movement control, synchronised recordings of eye position and speech production were obtained during fluent oral reading. The viewing of a word almost always preceded its articulation, and the interval between the onset of a word's fixation and the onset of its articulation was approximately 500 ms. The identification and articulation of a word were closely coupled, and the fixation-speech interval was regulated through immediate adjustments of word viewing duration, unless the interval was relatively long. In this case, the lag between identification and articulation was often reduced through a regression that moved the eyes back in the text. These results indicate that models of eye movement control during oral reading need to include a mechanism that maintains a close linkage between the identification and articulation of words through continuous oculomotor adjustments. |
Lisa Irmen; Eva Schumann Processing grammatical gender of role nouns: Further evidence from eye movements Journal Article In: Journal of Cognitive Psychology, vol. 23, no. 8, pp. 998–1014, 2011. @article{Irmen2011, Two eye-tracking experiments investigated the effects of masculine versus feminine grammatical gender on the processing of role nouns and on establishing coreference relations. Participants read sentences with the basic structure "My [kinship term] is a [role noun] [prepositional phrase]", such as "My brother is a singer in a band". Role nouns were either masculine or feminine. Kinship terms were lexically male or female and in this way specified referent gender, i.e., the sex of the person referred to. Experiment 1 tested a fully crossed design including items with an incorrect combination of lexically male kinship term and feminine role name. Experiment 2 tested only correct combinations of grammatical and lexical/referential gender to control for possible effects of the incorrect items of Experiment 1. In early stages of processing, feminine role nouns, but not masculine ones, were fixated longer when grammatical and referential gender were contradictory. In later stages of sentence wrap-up there were longer fixations for sentences with masculine than for those with feminine role nouns. Results of both experiments indicate that, for feminine role nouns, cues to referent gender are integrated immediately, whereas a late integration obtains for masculine forms. |
David E. Irwin Where does attention go when you blink? Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 5, pp. 1374–1384, 2011. @article{Irwin2011, Many studies have shown that covert visual attention precedes saccadic eye movements to locations in space. The present research investigated whether the allocation of attention is similarly affected by eye blinks. Subjects completed a partial-report task under blink and no-blink conditions. Experiment 1 showed that blinking facilitated report of the bottom row of the stimulus array: Accuracy for the bottom row increased and mislocation errors decreased under blink, as compared with no-blink, conditions, indicating that blinking influenced the allocation of visual attention. Experiment 2 showed that this was true even when subjects were biased to attend elsewhere. These results indicate that attention moves downward before a blink in an involuntary fashion. The eyes also move downward during blinks, so attention may precede blink-induced eye movements just as it precedes saccades and other types of eye movements. |
L. Issen; Krystel R. Huxlin; David C. Knill Spatial integration of optic flow information in direction of heading judgments Journal Article In: Journal of Vision, vol. 11, no. 6, pp. 1–16, 2011. @article{Issen2011, While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world. |
Stephanie Jainta The pupil reflects motor preparation for saccades – even before the eye starts to move Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 97, 2011. @article{Jainta2011, The eye produces saccadic eye movements whose reaction times are perhaps the shortest in humans. Saccade latencies reflect ongoing cortical processing and, generally, shorter latencies are supposed to reflect advanced motor preparation. The dilation of the eye's pupil is reported to reflect cortical processing as well. Eight participants made saccades in a gap and overlap paradigm (in pure and mixed blocks), which we used in order to produce a variety of different saccade latencies. Saccades and pupil size were measured with the EyeLink II. The pattern in pupil dilation resembled that of a gap effect: for gap blocks, pupil dilations were larger compared to overlap blocks; mixing gap and overlap trials reduced the pupil dilation for gap trials thereby inducing a switching cost. Furthermore, saccade latencies across all tasks predicted the magnitude of pupil dilations post hoc: the longer the saccade latency the smaller the pupil dilation before the eye actually began to move. In accordance with observations for manual responses, we conclude that pupil dilations prior to saccade execution reflect advanced motor preparations and therefore provide valid indicator qualities for ongoing cortical processes. |
Stephanie Jainta; Anne Dehnert; Sven P. Heinrich; Wolfgang Jaschinski Binocular coordination during reading of blurred and nonblurred text Journal Article In: Investigative Ophthalmology & Visual Science, vol. 52, no. 13, pp. 9416–9424, 2011. @article{Jainta2011a, PURPOSE: Reading a text requires vergence angle adjustments, so that the images in the two eyes fall on corresponding retinal areas. Vergence adjustments bring the two retinal images into Panum's fusional area and therefore, small remaining errors or regulations do not lead to double vision. The present study evaluated dynamic and static aspects of the binocular coordination when upcoming text was blurred. METHODS: Binocular eye movements and accommodation responses were simultaneously measured for 20 participants while reading single, nonblurred sentences and while the text was blurred as if it were seen by a person in whom the combination of refraction and accommodation deviated from the stimulus plane by 0.5 D. RESULTS: Text comprehension did not change, even though fixation times increased for reading blurred sentences. The disconjugacy during saccades was also not affected by blurred text presentations, but the vergence adjustment during fixations was reduced. Further, for blurred text, the overall vergence angle shifted in the exo direction, and this shift correlated with the individual heterophoria. Accommodation measures showed that the lag of accommodation was slightly larger for reading blurred sentences and that the shift in vergence angle was larger when the individual lag of accommodation was also larger. CONCLUSIONS: The results suggest that reading comprehension is robust against changes in binocular coordination that result from moderate text degradation; nevertheless, these changes are likely to be linked to the development of fatigue and visual strain in near reading conditions. |
Yi Ting Huang; Peter C. Gordon Distinguishing the time course of lexical and discourse processes through context, coreference, and quantified expressions Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 4, pp. 966–978, 2011. @article{Huang2011, How does prior context influence lexical and discourse-level processing during real-time language comprehension? Experiment 1 examined whether the referential ambiguity introduced by a repeated, anaphoric expression had an immediate or delayed effect on lexical and discourse processing, using an eye-tracking-while-reading task. Eye movements indicated facilitated recognition of repeated expressions, suggesting that prior context can rapidly influence lexical processing. However, context effects at the discourse level affected later processing, appearing in longer regression-path durations 2 words after the anaphor and in greater rereading times of the antecedent expression. Experiments 2 and 3 explored the nature of this delay by examining the role of the preceding context in activating relevant representations. Offline and online interpretations confirmed that relevant referents were activated following the critical context. Nevertheless, their initial unavailability during comprehension suggests a robust temporal division between lexical and discourse-level processing. |
Yu-feng Huang; Feng-yang Kuo An eye-tracking investigation of internet consumers' decision deliberateness Journal Article In: Internet Research, vol. 21, no. 5, pp. 541–561, 2011. @article{Huang2011a, Purpose – Because presentation formats, i.e. table v. graph, in shopping web sites may promote or inhibit deliberate consumer decision making, it is important to understand the effects of information presentation on deliberateness. This paper seeks to empirically test whether the table format enhances deliberate decision making, while the web map weakens the process. In addition, deliberateness can be influenced by the decision orientation, i.e. emotionally charged or accuracy oriented. Thus, the paper further examines the effect of presentations across these two decision orientations. Design/methodology/approach – Objective and detailed description of the decision process is used to examine the effects. A two (decision orientation: positive emotion v. accuracy) by two (presentation: map v. table) eye-tracking experiment is designed. Deliberateness is quantified with the information processing pattern summarized from eye movement data. Participants are required to make preferential choices from simple decision tasks. Findings – The results confirm that the table strengthens while the map weakens deliberateness. In addition, this effect is mostly evident across the two decision orientations. An explorative factor analysis further reveals that there are two major attention distribution functions (global v. local) underlying the decision process. Research limitations/implications – Only simple decision tasks are used in the present study and therefore complex tasks should be introduced to examine the effects in the future. Practical implications – For consumers, they should become aware that the table facilitates while the map diminishes deliberateness. For web businesses, they may try to strengthen the impulsivity in a web map filled with emotional stimuli. Originality/value – This research is one of the first attempts to investigate the joint effects of presentations and decision orientations on decision deliberateness in the internet domain. The eye movement data are also valuable because previous studies seldom provided such detailed description of the decision process. |
Anke Huckauf; Mario H. Urbina Object selection in gaze controlled systems: What you don't look at is what you get Journal Article In: ACM Transactions on Applied Perception, vol. 8, no. 2, pp. 1–14, 2011. @article{Huckauf2011, Controlling computers using eye movements can provide a fast and efficient alternative to the computer mouse. However, implementing object selection in gaze-controlled systems is still a challenge. Dwell times or fixations on a certain object typically used to elicit the selection of this object show several disadvantages. We studied deviations of critical thresholds by an individual and task-specific adaptation method. This demonstrated an enormous variability of optimal dwell times. We developed an alternative approach using antisaccades for selection. For selection by antisaccades, highlighted objects are copied to one side of the object. The object is selected by fixating the side opposite that copy, which requires inhibiting the automatic gaze shift toward the newly appearing copy. Both techniques were compared in a selection task. Two experiments revealed superior performance in terms of errors for the individually adapted dwell times. Antisaccades provide an alternative approach to dwell time selection, but they did not show an improvement over dwell time. We discuss potential improvements in the antisaccade implementation with which antisaccades might become a serious alternative to dwell times for object selection in gaze-controlled systems. |
V. C. Huddy; Timothy L. Hodgson; M. A. Ron; Thomas R. E. Barnes; Eileen M. Joyce Abnormal negative feedback processing in first episode schizophrenia: Evidence from an oculomotor rule switching task Journal Article In: Psychological Medicine, vol. 41, no. 9, pp. 1805–1814, 2011. @article{Huddy2011, Background. Previous studies have shown that patients with schizophrenia are impaired on executive tasks, where positive and negative feedback is used to update task rules or switch attention. However, research to date using saccadic tasks has not revealed clear deficits in task switching in these patients. The present study used an oculomotor 'rule switching' task to investigate the use of negative feedback when switching between task rules in people with schizophrenia. Method. A total of 50 patients with first episode schizophrenia and 25 healthy controls performed a task in which the association between a centrally presented visual cue and the direction of a saccade could change from trial to trial. Rule changes were heralded by an unexpected negative feedback, indicating that the cue-response mapping had reversed. Results. Schizophrenia patients were found to make increased errors following a rule switch, but these were almost entirely the result of executing saccades away from the location at which the negative feedback had been presented on the preceding trial. This impairment in negative feedback processing was independent of IQ. Conclusions. The results not only confirm the existence of a basic deficit in stimulus–response rule switching in schizophrenia, but also suggest that this arises from aberrant processing of response outcomes, resulting in a failure to appropriately update rules. The findings are discussed in the context of neurological and pharmacological abnormalities in the conditions that may disrupt prediction error signalling in schizophrenia. |
Lynn Huestegge; Jos J. Adam Oculomotor interference during manual response preparation: Evidence from the response-cueing paradigm Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 3, pp. 702–707, 2011. @article{Huestegge2011, Preparation provided by visual location cues is known to speed up behavior. However, the role of concurrent saccades in response to visual cues remains unclear. In this study, participants performed a spatial precueing task by pressing one of four response keys with one of four fingers (two of each hand) while eye movements were monitored. Prior to the stimulus, we presented a neutral cue (baseline), a hand cue (corresponding to left vs. right positions), or a finger cue (corresponding to inner vs. outer positions). Participants either remained fixated on a central fixation point or moved their eyes freely. The results demonstrated that saccades during the cueing interval altered the pattern of cueing effects. Finger cueing trials in which saccades were spatially incompatible (vs. compatible) with the subsequently required manual response exhibited slower manual RTs. We propose that interference between saccades and manual responses affects manual motor preparation. |
Lynn Huestegge; Andrea M. Philipp Effects of spatial compatibility on integration processes in graph comprehension Journal Article In: Attention, Perception, and Psychophysics, vol. 73, no. 6, pp. 1903–1915, 2011. @article{Huestegge2011a, A precondition for efficiently understanding and memorizing graphs is the integration of all relevant graph elements and their meaning. In the present study, we analyzed integration processes by manipulating the spatial compatibility between elements in the data region and the legend. In Experiment 1, participants judged whether bar graphs depicting either statistical main effects or interactions correspond to previously presented statements. In Experiments 2 and 3, the same was tested with line graphs of varying complexity. In Experiment 4, participants memorized line graphs for a subsequent validation task. Throughout the experiments, eye movements were recorded. The results indicated that data-legend compatibility reduced the time needed to understand graphs, as well as the time needed to retrieve relevant graph information from memory. These advantages went hand in hand with a decrease of gaze transitions between the data region and the legend, indicating that data-legend compatibility decreases the difficulty of integration processes. |
Falk Huettig; Gerry T. M. Altmann Looking at anything that is green when hearing "frog": How object surface colour and stored object colour knowledge influence language-mediated overt attention Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 1, pp. 122–145, 2011. @article{Huettig2011, Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment. |
Falk Huettig; James M. McQueen The nature of the visual environment induces implicit biases during language-mediated visual search Journal Article In: Memory & Cognition, vol. 39, no. 6, pp. 1068–1084, 2011. @article{Huettig2011a, Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search. |
Amelia R. Hunt; P. Cavanagh Remapped visual masking Journal Article In: Journal of Vision, vol. 11, no. 1, pp. 1–8, 2011. @article{Hunt2011, Cells in saccade control areas respond if a saccade is about to bring a target into their receptive fields (J. R. Duhamel, C. L. Colby, & M. E. Goldberg, 1992). This remapping process should shift the retinal location from which attention selects target information (P. Cavanagh, A. R. Hunt, S. R. Afraz, & M. Rolfs, 2010). We examined this attention shift in a masking experiment where target and mask were presented just before an eye movement. In a control condition with no eye movement, masks interfered with target identification only when they spatially overlapped. Just before a saccade, however, a mask overlapping the target had less effect, whereas a mask placed in the target's remapped location was quite effective. The remapped location is the retinal position the target will have after the upcoming saccade, which corresponds to neither the retinotopic nor spatiotopic location of the target before the saccade. Both effects are consistent with a pre-saccadic shift in the location from which attention selects target information. In the case of retinally aligned target and mask, the shift of attention away from the target location reduces masking, but when the mask appears at the target's remapped location, attention's shift to that location brings in mask information that interferes with the target identification. |
Marc Hurwitz; Derick Valadao; James Danckert Static versus dynamic judgments of spatial extent Journal Article In: Experimental Brain Research, vol. 209, no. 2, pp. 271–286, 2011. @article{Hurwitz2011, Research exploring how scanning affects judgments of spatial extent has produced conflicting results. We conducted four experiments on line bisection judgments measuring ocular and pointing behavior, with line length, position, speed, acceleration, and direction of scanning manipulated. Ocular and pointing judgments produced distinct patterns. For static judgments (i.e., no scanning), the eyes were sensitive to position and line length with pointing much less sensitive to these factors. For dynamic judgments (i.e., scanning the line), bisection biases were influenced by the speed of scanning but not acceleration, while both ocular and pointing results varied with scan direction. We suggest that static and dynamic probes of spatial judgments are different. Furthermore, the substantial differences seen between static and dynamic bisection suggest the two invoke different neural processes for computing spatial extent for ocular and pointing judgments. |
Samuel B. Hutton; S. Nolte The effect of gaze cues on attention to print advertisements Journal Article In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 887–892, 2011. @article{Hutton2011, Print advertisements often employ images of humans whose gaze may be focussed on an object or region within the advertisement. Gaze cues are powerful factors in determining the focus of our attention, but there have been no systematic studies exploring the impact of gaze cues on attention to print advertisements. We tracked participants' eyes whilst they read an on-screen magazine containing advertisements in which the model either looked at the product being advertised or towards the viewer. When the model's gaze was directed at the product, participants spent longer looking at the product, the brand logo and the rest of the advertisement compared to when the model's gaze was directed towards the viewer. These results demonstrate that the focus of reader's attention can be readily manipulated by gaze cues provided by models in advertisements, and that these influences go beyond simply drawing attention to the cued area of the advertisement. |
Alex D. Hwang; Hsueh-Cheng Wang; Marc Pomplun Semantic guidance of eye movements in real-world scenes Journal Article In: Vision Research, vol. 51, no. 10, pp. 1192–1205, 2011. @article{Hwang2011, The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. |
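The Hwang et al. abstract above describes building semantic saliency maps from LSA similarity between object labels. A minimal sketch of that similarity step, assuming hypothetical object labels and made-up low-dimensional LSA vectors (the authors' actual LabelMe data and LSA space are not reproduced here):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two LSA term vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_saliency(lsa_vectors, target_label):
    """Assign each labeled scene object its LSA similarity to the
    currently fixated object (or the search target)."""
    target = lsa_vectors[target_label]
    return {label: cosine_similarity(vec, target)
            for label, vec in lsa_vectors.items() if label != target_label}

# Hypothetical 3-D LSA vectors for a few object labels (illustration only).
vectors = {
    "car":   np.array([0.9, 0.1, 0.0]),
    "road":  np.array([0.8, 0.3, 0.1]),
    "cloud": np.array([0.1, 0.2, 0.9]),
}
saliency = semantic_saliency(vectors, "car")
# Objects semantically closer to "car" (here, "road") receive higher values,
# so the map predicts gaze transitions toward semantically similar objects.
```

In the study these per-object values are spread over each object's image region to form a map, which is then evaluated with an ROC analysis against observed gaze transitions.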
Jukka Hyönä; Raymond Bertram Optimal viewing position effects in reading Finnish Journal Article In: Vision Research, vol. 51, no. 11, pp. 1279–1287, 2011. @article{Hyoenae2011, The present study examined effects of the initial landing position in words on eye behavior during reading of long and short Finnish compound words. The study replicated OVP and IOVP effects previously found in French, German and English - languages structurally distinct from Finnish, suggesting that the effects generalize across structurally different alphabetic languages. The results are consistent with the view that the landing position effects appear at the prelexical stage of word processing, as landing position effects were not modulated by word frequency. Moreover, the OVP effects are in line with a visuomotor explanation making recourse to visual acuity constraints. |
Zoï Kapoula; Qing Yang; Norman Sabbah; Marine Vernet Different effects of double-pulse TMS of the posterior parietal cortex on reflexive and voluntary saccades Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 114, 2011. @article{Kapoula2011, Gap and overlap tasks are widely used to promote automatic versus controlled saccades. This study examines the hypothesis that the right posterior parietal cortex (PPC) is differently involved in the two tasks. Twelve healthy students participated in the experiment. We used double-pulse transcranial magnetic stimulation (dTMS) on the right PPC, the first pulse delivered at the target onset and the second 65 or 80 ms later. Each subject performed several blocks of gap or overlap task with or without dTMS. Eye movements were recorded with an EyeLink device. The results show an increase of latency of saccades after dTMS of the right PPC for both tasks but for different time windows (0-80 ms for the gap task, 0-65 ms for the overlap task). Moreover, for rightward saccades the coefficient of variation of latency increased in the gap task but decreased in the overlap task. Finally, in the gap task and for leftward saccades only, dTMS at 0-80 ms decreased the amplitude and the speed of saccades. Although the study is preliminary and needs further investigation in detail, the results support the hypothesis that the right PPC is involved differently in the initiation of saccades for the two tasks: in the gap task the PPC controls saccade triggering, while in the overlap task it could be a relay to the Frontal Eye Fields, which are known to control voluntary saccades, e.g., memory-guided saccades and perhaps the controlled saccades in the overlap task. The results have theoretical and clinical significance, as gap-overlap tasks are easy to perform even in advanced age and in patients with neurodegenerative diseases. |
Kai Kaspar; Peter König Viewing behavior and the impact of low-level image properties across repeated presentations of complex scenes Journal Article In: Journal of Vision, vol. 11, no. 13, pp. 1–29, 2011. @article{Kaspar2011, Studies on bottom-up mechanisms in human overt attention support the significance of basic image features for fixation behavior on visual scenes. In this context, a decisive question has been neglected so far: How stable is the impact of basic image features on overt attention across repeated image observation? To answer this question, two eye-tracking studies were conducted in which 79 subjects were repeatedly exposed to several types of visual scenes differing in gist and complexity. Upon repeated presentations, viewing behavior changed significantly. Subjects neither performed independent scanning eye movements nor scrutinized complementary image regions; rather, they viewed largely overlapping image regions, although this overlap decreased significantly over time. Importantly, subjects did not substantially uncouple their scanning pathways from basic image features. In contrast, the effect of image type on feature–fixation correlations was much larger than the effect of memory-mediated scene familiarity. Moreover, feature–fixation correlations were moderated by actual saccade length, and this phenomenon remained constant across repeated viewings. We also demonstrated that this saccade length effect was not an exclusive within-subject phenomenon. We conclude that the present results bridge a substantial gap in attention research and are important for future research on, and modeling of, human overt attention. Additionally, we advise considering interindividual differences in viewing behavior. |
Kai Kaspar; Peter König Overt attention and context factors: The impact of repeated presentations, image type, and individual motivation Journal Article In: PLoS ONE, vol. 6, no. 7, pp. e21719, 2011. @article{Kaspar2011a, The present study investigated the dynamics of the attention focus during observation of different categories of complex scenes while simultaneously considering individuals' memory and motivational state. We repeatedly presented four types of complex visual scenes in a pseudo-randomized order and recorded eye movements. Subjects were divided into groups according to their motivational disposition in terms of action orientation and individual rating of scene interest. Statistical analysis of eye-tracking data revealed that the attention focus successively became more local, as expressed by increasing fixation duration; decreasing saccade length, saccade frequency, and single subjects' fixation distribution over images; and increasing inter-subject variance of fixation distributions. The validity of these results was supported by verbal reports. This general tendency was weaker for the group of subjects who rated the image set as interesting compared to the other group. Additionally, effects were partly mediated by subjects' motivational disposition. Finally, we found a generally strong impact of image type on eye movement parameters. We conclude that motivational tendencies linked to personality, as well as individual preferences, significantly affected viewing behaviour. Hence, it is important and fruitful to consider inter-individual differences at the level of motivation and personality traits within investigations of attention processes. We demonstrate that future studies on memory's impact on overt attention will have to deal appropriately with several aspects that have been outside the research focus until now. |
David J. Kelly; Rachael R. Jack; Sébastien Miellet; Emanuele De Luca; Kay Foreman; Roberto Caldara Social experience does not abolish cultural diversity in eye movements Journal Article In: Frontiers in Psychology, vol. 2, pp. 95, 2011. @article{Kelly2011, Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed "Eastern" eye movement strategies, while approximately 25% of participants displayed "Western" strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that "culture" alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. |
David J. Kelly; Shaoying Liu; Helen Rodger; Sébastien Miellet; Liezhong Ge; Roberto Caldara Developing cultural differences in face processing Journal Article In: Developmental Science, vol. 14, no. 5, pp. 1176–1184, 2011. @article{Kelly2011a, Perception and eye movements are affected by culture. Adults from Eastern societies (e.g. China) display a disposition to process information holistically, whereas individuals from Western societies (e.g. Britain) process information analytically. Recently, this pattern of cultural differences has been extended to face processing. Adults from Eastern cultures fixate centrally towards the nose when learning and recognizing faces, whereas adults from Western societies spread fixations across the eye and mouth regions. Although light has been shed on how adults can fixate different areas yet achieve comparable recognition accuracy, the reason why such divergent strategies exist is less certain. Although some argue that culture shapes strategies across development, little direct evidence exists to support this claim. Additionally, it has long been claimed that face recognition in early childhood relies largely upon external rather than internal face features, yet recent studies have challenged this theory. To address these issues, we tested children aged 7-12 years from the UK and China with an old/new face recognition paradigm while simultaneously recording their eye movements. Both populations displayed patterns of fixations that were consistent with adults from their respective cultural groups, which 'strengthened' across development as qualified by a pattern classifier analysis. Altogether, these observations suggest that cultural forces may indeed be responsible for shaping eye movements from early childhood. Furthermore, fixations made by both cultural groups almost exclusively landed on internal face regions, suggesting that these features, and not external features, are universally used to achieve face recognition in childhood. |
Daniel P. Kennedy; Ralph Adolphs Impaired fixation to eyes following amygdala damage arises from abnormal bottom-up attention Journal Article In: Neuropsychologia, vol. 49, no. 4, pp. 589–595, 2011. @article{Kennedy2011, SM is a patient with complete bilateral amygdala lesions who fails to fixate the eyes in faces and is consequently impaired in recognizing fear (Adolphs et al., 2005). Here we first replicated earlier findings of SM's reduced gaze to the eyes when seen in whole faces. Examination of the time course of fixations revealed that SM's reduced eye contact is particularly pronounced in the first fixation to the face, and less abnormal in subsequent fixations. In a second set of experiments, we used a gaze-contingent presentation of faces with real-time eye tracking, wherein only a small region of the face is made visible at the center of gaze. In essence, viewers explore the face by moving a small searchlight over it with their gaze. Under such viewing conditions, SM's fixations to the eye region of faces became entirely normalized. We suggest that this effect arises from the absence of bottom-up effects due to the facial features, allowing gaze location to be driven entirely by top-down control. Together with SM's failure to fixate the eyes in whole faces primarily at the very first saccade, the findings suggest that the saliency of the eyes normally attracts our gaze in an amygdala-dependent manner. Impaired eye gaze is also a prominent feature of several psychiatric illnesses in which the amygdala has been hypothesized to be dysfunctional, and our findings and experimental manipulation may hold promise for interventions in such populations, including autism and fragile X syndrome. |
Donatas Jonikaitis; Heiner Deubel Independent allocation of attention to eye and hand targets in coordinated eye-hand movements Journal Article In: Psychological Science, vol. 22, no. 3, pp. 339–347, 2011. @article{Jonikaitis2011, When reaching for objects, people frequently look where they reach. This raises the question of whether the targets for the eye and hand in concurrent eye and hand movements are selected by a unitary attentional system or by independent mechanisms. We used the deployment of visual attention as an index of the selection of movement targets and asked observers to reach and look to either the same location or separate locations. Results show that during the preparation of coordinated movements, attention is allocated in parallel to the targets of a saccade and a reaching movement. Attentional allocations for the two movements interact synergistically when both are directed to a common goal. Delaying the eye movement delays the attentional shift to the saccade target while leaving attentional deployment to the reach target unaffected. Our findings demonstrate that attentional resources are allocated independently to the targets of eye and hand movements and suggest that the goals for these effectors are selected by separate attentional mechanisms. |
Barbara J. Juhasz; Rachel N. Berkowitz Effects of morphological families on English compound word recognition: A multitask investigation Journal Article In: Language and Cognitive Processes, vol. 26, no. 4-6, pp. 653–682, 2011. @article{Juhasz2011, Three experiments examined the influence of first lexeme morphological family size on English compound word recognition. Concatenated compound words whose first lexemes were from large morphological families were responded to faster in word naming and lexical decision than compounds from small morphological families. In addition, an eye movement experiment showed that gaze durations were shorter on compounds from large morphological families during sentence reading. This was mainly due to more refixations on compounds from small morphological families. Posthoc analyses and re-analysis of past studies suggested that compounds with a larger number of higher frequency family members (HFFM) are read more slowly than compounds with fewer HFFM. Thus, while morphological family size is generally facilitative, the presence of HFFM has an inhibitory effect on eye movement behaviour. The time-course of these effects is discussed. |
Barbara J. Juhasz; Margaret M. Gullick; Leah W. Shesler The effects of age-of-acquisition on ambiguity resolution: Evidence from eye movements Journal Article In: Journal of Eye Movement Research, vol. 4, no. 1, pp. 1–14, 2011. @article{Juhasz2011a, Words that are rated as acquired earlier in life receive shorter fixation durations than later acquired words, even when word frequency is adequately controlled (Juhasz & Rayner, 2003; 2006). Some theories posit that age-of-acquisition (AoA) affects the semantic representation of words (e.g., Steyvers & Tenenbaum, 2005), while others suggest that AoA should have an influence at multiple levels in the mental lexicon (e.g. Ellis & Lambon Ralph, 2000). In past studies, early and late AoA words have differed from each other in orthography, phonology, and meaning, making it difficult to localize the influence of AoA. Two experiments are reported which examined the locus of AoA effects in reading. Both experiments used balanced ambiguous words which have two equally frequent meanings acquired at different times (e.g. pot, tick). In Experiment 1, sentence context supporting either the early- or late-acquired meaning was presented prior to the ambiguous word; in Experiment 2, disambiguating context was presented after the ambiguous word. When prior context disambiguated the ambiguous word, meaning AoA influenced the processing of the target word. However, when disambiguating sentence context followed the ambiguous word, meaning frequency was the more important variable and no effect of meaning AoA was observed. These results, when combined with the past results of Juhasz and Rayner (2003; 2006), suggest that AoA influences access to multiple levels of representation in the mental lexicon. The results also have implications for theories of lexical ambiguity resolution, as they suggest that variables other than meaning frequency and context can influence resolution of noun-noun ambiguities. |
Johanna K. Kaakinen; Jukka Hyönä; Minna Viljanen Influence of a psychological perspective on scene viewing and memory for scenes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 7, pp. 1372–1387, 2011. @article{Kaakinen2011, In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing. |
Elsi Kaiser Focusing on pronouns: Consequences of subjecthood, pronominalisation, and contrastive focus Journal Article In: Language and Cognitive Processes, vol. 26, no. 10, pp. 1625–1666, 2011. @article{Kaiser2011, We report two visual-world eye-tracking experiments that investigated the effects of subjecthood, pronominalisation, and contrastive focus on the interpretation of pronouns in subsequent discourse. By probing the effects of these factors on real-time pronoun interpretation, we aim to contribute to our understanding of how topicality-related factors (subjecthood, givenness) interact with contrastive focus effects, and to investigate whether the seemingly mixed results obtained in prior work on topicality and focusing could be related to effects of subjecthood. Our results indicate that structural and semantic prominence (specifically, agentive subjects) influence pronoun interpretation even when separated from information-structural notions, and thus need to be taken into account when investigating topicality and focusing. We discuss how our results allow us to reconcile the distinct findings of prior studies. More generally, this research contributes to our understanding of how the language comprehension system integrates different kinds of information during real-time reference resolution. |
Joke P. Kalisvaart; Sumientra M. Rampersad; Jeroen Goossens Binocular onset rivalry at the time of saccades and stimulus jumps Journal Article In: PLoS ONE, vol. 6, no. 6, pp. e20017, 2011. @article{Kalisvaart2011, Recent studies suggest that binocular rivalry at stimulus onset, so called onset rivalry, differs from rivalry during sustained viewing. These observations raise the interesting question whether there is a relation between onset rivalry and rivalry in the presence of eye movements. We therefore studied binocular rivalry when stimuli jumped from one visual hemifield to the other, either through a saccade or through a passive stimulus displacement, and we compared rivalry after such displacements with onset and sustained rivalry. We presented opponent motion, orthogonal gratings and face/house stimuli through a stereoscope. For all three stimulus types we found that subjects showed a strong preference for stimuli in one eye or one hemifield (Experiment 1), and that these subject-specific biases did not persist during sustained viewing (Experiment 2). These results confirm and extend previous findings obtained with gratings. The results from the main experiment (Experiment 3) showed that after a passive stimulus jump, switching probability was low when the preferred eye was dominant before a stimulus jump, but when the non-preferred eye was dominant beforehand, switching probability was comparatively high. The results thus showed that dominance after a stimulus jump was tightly related to eye dominance at stimulus onset. In the saccade condition, however, these subject-specific biases were systematically reduced, indicating that the influence of saccades can be understood from a systematic attenuation of the subjects' onset rivalry biases. Taken together, our findings demonstrate a relation between onset rivalry and rivalry after retinal shifts and involvement of extra-retinal signals in binocular rivalry. |
Sakari Kallio; Jukka Hyönä; Antti Revonsuo; Pilleriin Sikka; Lauri Nummenmaa The existence of a hypnotic state revealed by eye movements Journal Article In: PLoS ONE, vol. 6, no. 10, pp. e26374, 2011. @article{Kallio2011, Hypnosis has had a long and controversial history in psychology, psychiatry and neurology, but the basic nature of hypnotic phenomena still remains unclear. Different theoretical approaches disagree as to whether or not hypnosis may involve an altered mental state. So far, a hypnotic state has never been convincingly demonstrated, if the criteria for the state are that it involves some objectively measurable and replicable behavioural or physiological phenomena that cannot be faked or simulated by non-hypnotized control subjects. We present a detailed case study of a highly hypnotizable subject who reliably shows a range of changes in both automatic and volitional eye movements when given a hypnotic induction. These changes correspond well with the phenomenon referred to as the "trance stare" in the hypnosis literature. Our results show that this 'trance stare' is associated with large and objective changes in the optokinetic reflex, the pupillary reflex and programming a saccade to a single target. Control subjects could not imitate these changes voluntarily. For the majority of people, hypnotic induction brings about states resembling normal focused attention or mental imagery. Our data nevertheless highlight that in some cases hypnosis may involve a special state, which qualitatively differs from the normal state of consciousness. |
Sunjeev K. Kamboj; Rachel Massey-Chase; Lydia Rodney; Ravi K. Das; Basil Almahdi; H. Valerie Curran; Celia J. A. Morgan In: Psychopharmacology, vol. 217, no. 1, pp. 25–37, 2011. @article{Kamboj2011, Rationale: The effects of D-cycloserine (DCS) in animal models of anxiety disorders and addiction indicate a role for N-methyl-D-aspartate (NMDA) receptors in extinction learning. Exposure/response prevention treatments for anxiety disorders in humans are enhanced by DCS, suggesting a promising co-therapy regime mediated by NMDA receptors. Exposure/response prevention may also be effective in problematic drinkers, and DCS might enhance habituation to cues in these individuals. Since heavy drinkers show ostensible conditioned responses to alcohol cues, habituation following exposure/response prevention should be evident in these drinkers, with DCS enhancing this effect. Objectives: The objective of this study is to investigate the effect of DCS on exposure/response prevention in heavy drinkers. Methods: In a randomised, double-blind, placebo-controlled study, heavy social drinkers recruited from the community received either DCS (125 mg; n=19) or placebo (n=17) 1 h prior to each of two sessions of exposure/response prevention. Cue reactivity and attentional bias were assessed during these two sessions and at a third follow-up session. Between-session drinking behaviour was recorded. Results: Robust cue reactivity and attentional bias to alcohol cues were evident, as expected of heavy drinkers. Within- and between-session habituation of cue reactivity, as well as a reduction in attentional bias to alcohol cues over time, was found. However, there was no evidence of greater habituation in the DCS group. Subtle stimulant effects (increased subjective contentedness and euphoria) unrelated to exposure/response prevention were found following DCS. Conclusions: DCS does not appear to enhance habituation of alcohol cue reactivity in heavy non-dependent drinkers. Its utility in enhancing treatments based on exposure/response prevention in dependent drinkers or drug users remains open. |
Marcus L. Johnson; Matthew W. Lowder; Peter C. Gordon The sentence-composition effect: Processing of complex noun phrases versus unusual noun phrases Journal Article In: Journal of Experimental Psychology: General, vol. 140, no. 4, pp. 707–724, 2011. @article{Johnson2011, In 2 experiments, the authors used an eye-tracking-while-reading methodology to examine how different configurations of common versus unusual noun phrases (NPs) influenced the difference in processing difficulty between sentences containing object- and subject-extracted relative clauses. Results showed that processing difficulty was reduced when the head NP was unusual relative to the embedded NP, as manipulated by lexical frequency. When both NPs were common or both were unusual, results showed strong effects of both commonness and sentence structure, but no interaction. In contrast, when 1 NP was common and the other was unusual, results showed the critical interaction. These results provide evidence for a sentence-composition effect analogous to the list-composition effect that has been well documented in memory research, in which the pattern of recall for common versus unusual items differs depending on whether items are studied in a pure or mixed list context. This work represents an important step in integrating the list-memory and sentence-processing literatures and provides additional support for the usefulness of studying complex sentence processing from the perspective of memory-based models. |
Jacob Jolij; H. Steven Scholte; Simon van Gaal; Timothy L. Hodgson; Victor A. F. Lamme Act quickly, decide later: Long-latency visual processing underlies perceptual decisions but not reflexive behavior Journal Article In: Journal of Cognitive Neuroscience, vol. 23, no. 12, pp. 3734–3745, 2011. @article{Jolij2011, Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response. |
Clare N. Jonas; Alisdair J. G. Taylor; Samuel B. Hutton; Peter H. Weiss; Jamie Ward Visuo-spatial representations of the alphabet in synaesthetes and non-synaesthetes Journal Article In: Journal of Neuropsychology, vol. 5, no. 2, pp. 302–322, 2011. @article{Jonas2011, Visuo-spatial representations of the alphabet (so-called 'alphabet forms') may be as common as other types of sequence–space synaesthesia, but little is known about them or the way they relate to implicit spatial associations in the general population. In the first study, we describe the characteristics of a large sample of alphabet forms visualized by synaesthetes. They most often run from left to right and have salient features (e.g., bends, breaks) at particular points in the sequence that correspond to chunks in the 'Alphabet Song' and at the alphabet mid-point. The Alphabet Song chunking suggests that the visuo-spatial characteristics are derived, at least in part, from those of the verbal sequence learned earlier in life. However, these synaesthetes are no faster at locating points in the sequence (e.g., what comes before/after letter X?) than controls. They tend to be more spatially consistent (measured by eye tracking) and letters can act as attentional cues to left/right space in synaesthetes with alphabet forms (measured by saccades), but not in non-synaesthetes. This attentional cueing suggests dissociation between numbers (which reliably act as attentional cues in synaesthetes and non-synaesthetes) and letters (which act as attentional cues in synaesthetes only). |
L. Elliot Hong; Gunvant K. Thaker; Robert P. McMahon; Ann Summerfelt; Jill RachBeisel; Rebecca L. Fuller; Ikwunga Wonodi; Robert W. Buchanan; Carol Myers; Stephen J. Heishman; Jeff Yang; Adrienne Nye In: Archives of General Psychiatry, vol. 68, no. 12, pp. 1195–1206, 2011. @article{Hong2011, CONTEXT: The administration of nicotine transiently improves many neurobiological and cognitive functions in schizophrenia and schizoaffective disorder. It is not yet clear which nicotinic acetylcholine receptor (nAChR) subtype or subtypes are responsible for these seemingly pervasive nicotinic effects in schizophrenia and schizoaffective disorder. OBJECTIVE: Because α4β2 is a key nAChR subtype for nicotinic actions, we investigated the effect of varenicline tartrate, a relatively specific α4β2 partial agonist and antagonist, on key biomarkers that are associated with schizophrenia and have previously been shown to be responsive to nicotinic challenge in humans. DESIGN: A double-blind, parallel, randomized, placebo-controlled trial of patients with schizophrenia or schizoaffective disorder to examine the effects of varenicline on biomarkers at 2 weeks (short-term treatment) and 8 weeks (long-term treatment), using a slow titration and moderate dosing strategy to retain α4β2-specific effects while minimizing adverse effects. SETTING: Outpatient clinics. PARTICIPANTS: A total of 69 smoking and nonsmoking patients; 64 patients completed week 2, and 59 patients completed week 8. INTERVENTION: Varenicline. MAIN OUTCOME MEASURES: Prepulse inhibition, sensory gating, antisaccade, spatial working memory, eye tracking, processing speed, and sustained attention. RESULTS: A moderate dose of varenicline (1) significantly reduced the P50 sensory gating deficit in nonsmokers after long-term treatment (P = .006), (2) reduced startle reactivity (P = .02) regardless of baseline smoking status, and (3) improved executive function by reducing the antisaccadic error rate (P = .03) regardless of smoking status. A moderate dose of varenicline had no significant effect on spatial working memory, predictive and maintenance pursuit measures, processing speed, or sustained attention by Conners' Continuous Performance Test. Clinically, there was no evidence of exacerbation of psychiatric symptoms, psychosis, depression, or suicidality using a gradual titration (1-mg daily dose). CONCLUSIONS: Moderate-dose treatment with varenicline has a unique treatment profile on core schizophrenia-related biomarkers. Further development is warranted for specific nAChR compounds and dosing and duration strategies to target subgroups of schizophrenic patients with specific biological deficits. |
Lorelei R. Howard; Dharshan Kumaran; Hauður F. Ólafsdóttir; Hugo J. Spiers Double dissociation between hippocampal and parahippocampal responses to object-background context and scene novelty Journal Article In: Journal of Neuroscience, vol. 31, no. 14, pp. 5253–5261, 2011. @article{Howard2011, Several recent models of medial temporal lobe (MTL) function have proposed that the parahippocampal cortex processes context information, the perirhinal cortex processes item information, and the hippocampus binds together items and contexts. While evidence for a clear functional distinction between the perirhinal cortex and other regions within the MTL has been well supported, it has been less clear whether such a dissociation exists between the hippocampus and parahippocampal cortex. In the current study, we use a novel approach applying a functional magnetic resonance imaging adaptation paradigm to address these issues. During scanning, human subjects performed an incidental target detection task while viewing trial-unique sequentially presented pairs of natural scenes, each containing a single prominent object. We observed a striking double dissociation between the hippocampus and parahippocampal cortex, with the former showing a selective sensitivity to changes in the spatial relationship between objects and their background context and the latter engaged only by scene novelty. Our findings provide compelling support for the hypothesis that rapid item-context binding is a function of the hippocampus, rather than the parahippocampal cortex, with the former acting to detect relational novelty of this nature through its function as a match-mismatch detector. |
Yanbo Hu; Robin Walker The neural basis of parallel saccade programming: An fMRI study Journal Article In: Journal of Cognitive Neuroscience, vol. 23, no. 11, pp. 3669–3680, 2011. @article{Hu2011, The neural basis of parallel saccade programming was examined in an event-related fMRI study using a variation of the double-step saccade paradigm. Two double-step conditions were used: one enabled the second saccade to be partially programmed in parallel with the first saccade while in a second condition both saccades had to be prepared serially. The inter-saccadic interval, observed in the parallel programming (PP) condition, was significantly reduced compared with latency in the serial programming (SP) condition and also to the latency of single saccades in control conditions. The fMRI analysis revealed greater activity (BOLD response) in the frontal and parietal eye fields for the PP condition compared with the SP double-step condition and when compared with the single-saccade control conditions. By contrast, activity in the supplementary eye fields was greater for the double-step condition than the single-step condition but did not distinguish between the PP and SP requirements. The role of the frontal eye fields in PP may be related to the advanced temporal preparation and increased salience of the second saccade goal that may mediate activity in other downstream structures, such as the superior colliculus. The parietal lobes may be involved in the preparation for spatial remapping, which is required in double-step conditions. The supplementary eye fields appear to have a more general role in planning saccade sequences that may be related to error monitoring and the control over the execution of the correct sequence of responses. |
2010 |
Sven Hohenstein; Jochen Laubrock; Reinhold Kliegl Semantic preview benefit in eye movements during reading: A parafoveal fast-priming study Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 36, no. 5, pp. 1150–1170, 2010. @article{Hohenstein2010, Eye movements in reading are sensitive to foveal and parafoveal word features. Whereas the influence of orthographic or phonological parafoveal information on gaze control is undisputed, there has been no reliable evidence for early parafoveal extraction of semantic information in alphabetic script. Using a novel combination of the gaze-contingent fast-priming and boundary paradigms, we demonstrate semantic preview benefit when a semantically related parafoveal word was available during the initial 125 ms of a fixation on the pretarget word (Experiments 1 and 2). When the target location was made more salient, significant parafoveal semantic priming occurred only at 80 ms (Experiment 3). Finally, with short primes only (20, 40, 60 ms), effects were not significant but were numerically in the expected direction for 40 and 60 ms (Experiment 4). In all experiments, fixation durations on the target word increased with prime duration. The evidence for extraction of semantic information from the parafoveal word favors an explanation in terms of parallel word processing in reading. |
S. Lee Hong; Melissa R. Beck Uncertainty compensation in human attention: Evidence from response times and fixation durations Journal Article In: PLoS ONE, vol. 5, no. 7, pp. e11461, 2010. @article{Hong2010, BACKGROUND: Uncertainty and predictability have remained at the center of the study of human attention. Yet, studies have only examined whether response times (RT) or fixations were longer or shorter under levels of stimulus uncertainty. To date, no study has examined patterns of stimuli and responses through a unifying framework of uncertainty. METHODOLOGY/PRINCIPAL FINDINGS: We asked 29 college students to generate repeated responses to a continuous series of visual stimuli presented on a computer monitor. Subjects produced these responses by pressing on a keypad as soon as a target was detected (regardless of position) while the durations of their visual fixations were recorded. We manipulated the level of stimulus uncertainty in space and time by changing the number of potential stimulus locations and time intervals between stimulus presentations. To allow the analyses to be conducted using uncertainty as a common description of stimulus and response, we calculated the entropy of the RT and fixation durations. We tested the hypothesis of uncertainty compensation across space and time by fitting the RT and fixation duration entropy values to a quadratic surface. The quadratic surface accounted for 80% of the variance in the entropy values of both RT and fixation durations. RT entropy increased as a function of spatial and temporal uncertainty of the stimulus, alongside a symmetric, compensatory decrease in the entropy of fixation durations as the level of spatial and temporal uncertainty of the stimuli was increased. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that greater uncertainty in the stimulus leads to greater uncertainty in the response, and that the effects of spatial and temporal uncertainties are compensatory. We also observed a compensatory relationship across the entropies of fixation duration and RT, suggesting that a more predictable visual search strategy leads to more uncertain response patterns and vice versa. |
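The entropy analysis described in the Hong and Beck abstract can be illustrated with a short sketch (hypothetical code, not the authors' implementation): Shannon entropy is computed from binned response-time or fixation-duration samples, and the resulting entropy values are fit to a quadratic surface over the spatial and temporal uncertainty manipulations.

```python
import numpy as np

def shannon_entropy(samples, bins=20):
    """Shannon entropy (bits) of a sample distribution via histogram binning."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Illustration: more dispersed reaction times carry more entropy
# when measured against a shared set of bin edges (here 0-600 ms).
rng = np.random.default_rng(0)
edges = np.linspace(0, 600, 41)
narrow = shannon_entropy(rng.normal(300, 10, 5000), bins=edges)
wide = shannon_entropy(rng.normal(300, 80, 5000), bins=edges)
assert wide > narrow
```

The quadratic surface here is fit by ordinary least squares over the six polynomial terms; the paper's specific uncertainty levels and entropy estimator are not reproduced.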
Tien Ho-Phuoc; Nathalie Guyader; Anne Guérin-Dugué A functional and statistical bottom-up saliency model to reveal the relative contributions of low-level visual guiding factors Journal Article In: Cognitive Computation, vol. 2, no. 4, pp. 344–359, 2010. @article{HoPhuoc2010, When looking at a scene, we frequently move our eyes to place consecutive interesting regions on the fovea, the retina centre. At each fixation, only this specific foveal region is analysed in detail by the visual system. The visual attention mechanisms control eye movements and depend on two types of factor: bottom-up and top-down factors. Bottom-up factors include different visual features such as colour, luminance, edges, and orientations. In this paper, we evaluate quantitatively the relative contribution of basic low-level features as candidate guiding factors to visual attention and hence to eye movements. We also study how these visual features can be combined in a bottom-up saliency model. Our work consists of three interactive parts: a functional saliency model, a statistical model and eye movement data recorded during free viewing of natural scenes. The functional saliency model, inspired by the primate visual system, decomposes a visual scene into different feature maps. The statistical model indicates which features best explain the recorded eye movements. We show an essential role of high frequency luminance and an important contribution of central fixation bias. The relative contribution of features, calculated by the statistical model, is then used to combine the different feature maps into a saliency map. Finally, the comparison between the saliency model and experimental data confirmed the influence of these contributions. |
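The final stage of the Ho-Phuoc et al. model, combining feature maps into a saliency map using statistically estimated relative contributions, can be sketched as follows (a minimal illustration with assumed map names and weights, not the authors' model):

```python
import numpy as np

def combine_feature_maps(feature_maps, weights):
    """Weighted sum of per-feature conspicuity maps into one saliency map.

    feature_maps: dict name -> 2-D array (same shape), each normalized to [0, 1].
    weights: dict name -> relative contribution (e.g., regression coefficients).
    """
    names = sorted(feature_maps)
    sal = sum(weights[n] * feature_maps[n] for n in names)
    total = sum(weights[n] for n in names)
    return sal / total  # renormalize so the map stays in [0, 1]

rng = np.random.default_rng(1)
shape = (32, 32)
maps = {f: rng.random(shape) for f in ("luminance_hf", "orientation", "color")}
# Hypothetical weights: high-frequency luminance dominates, echoing the
# paper's finding of its essential role (values are illustrative only).
w = {"luminance_hf": 0.6, "orientation": 0.25, "color": 0.15}
saliency = combine_feature_maps(maps, w)
assert saliency.shape == shape and saliency.min() >= 0 and saliency.max() <= 1
```

In the paper the weights come from a statistical model of recorded fixations; here they are placeholders showing only how the linear combination would be applied.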
Tako A. Horsley; Bram Orobio De Castro; Menno Van Der Schoot In the eye of the beholder: Eye-tracking assessment of social information processing in aggressive behavior Journal Article In: Journal of Abnormal Child Psychology, vol. 38, no. 5, pp. 587–599, 2010. @article{Horsley2010, According to social information processing theories, aggressive children are hypersensitive to cues of hostility and threat in other people's behavior. However, even though there is ample evidence that aggressive children over-interpret others' behaviors as hostile, it is unclear whether this hostile attribution tendency actually results from overattending to hostile and threatening cues. Since encoding is posited to consist of rapid automatic processes, it is hard to assess with the self-report measures that have been used so far. Therefore, we used a novel approach to investigate visual encoding of social information. The eye movements of thirty 10- to 13-year-old children with lower levels and thirty children with higher levels of aggressive behavior were monitored in real time with an eye tracker, as the children viewed ten different cartoon series of ambiguous provocation situations. In addition, participants answered questions concerning encoding and interpretation. Aggressive children did not attend more to hostile cues, nor attend less to non-hostile cues, than non-aggressive children. On the contrary, aggressive children looked longer at non-hostile cues, but nonetheless attributed more hostile intent than their non-aggressive peers. These findings contradict the traditional bottom-up processing hypothesis that aggressive behavior is related to a failure to attend to non-hostile cues. The findings seem best explained by top-down information processing, where aggressive children's pre-existing hostile intent schemata (1) direct attention towards schema-inconsistent non-hostile cues, (2) prevent further processing and recall of such schema-inconsistent information, and (3) lead to hostile intent attribution and aggressive responding, disregarding the schema-inconsistent non-hostile information. |
Po-Jang Hsieh; Peter U. Tse BOLD signal in both ipsilateral and contralateral retinotopic cortex modulates with perceptual fading Journal Article In: PLoS ONE, vol. 5, no. 3, pp. e9638, 2010. @article{Hsieh2010, Under conditions of visual fixation, perceptual fading occurs when a stationary object, though present in the world and continually casting light upon the retina, vanishes from visual consciousness. The neural correlates of the consciousness of such an object will presumably modulate in activity with the onset and cessation of perceptual fading. METHOD: In order to localize the neural correlates of perceptual fading, a green disk, individually set to be equiluminant with the orange background, was presented in one of the four visual quadrants; subjects indicated with a button press whether or not the disk was subjectively visible as it perceptually faded in and out. RESULTS: Blood oxygen-level dependent (BOLD) signal in V1 and ventral retinotopic areas V2v and V3v decreases when the disk subjectively disappears, and increases when it subjectively reappears. This effect occurs in early visual areas both ipsilaterally and contralaterally to the fading figure. That is, it occurs regardless of whether the fading stimulus is presented inside or outside of the corresponding portion of visual field. In addition, we find that the microsaccade rate rises before and after perceptual transitions from not seeing to seeing the disk, and decreases before perceptual transitions from seeing to not seeing the disk. These BOLD signal changes could be driven by a global process that operates across contralateral and ipsilateral visual cortex or by a confounding factor, such as microsaccade rate. |
Gesche M. Huebner; Karl R. Gegenfurtner Effects of viewing time, fixations, and viewing strategies on visual memory for briefly presented natural objects Journal Article In: Quarterly Journal of Experimental Psychology, vol. 63, no. 7, pp. 1398–1413, 2010. @article{Huebner2010, We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component, for example visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same for all viewing lengths. Memory performance was not affected by this design manipulation, thus also supporting the idea that the number and duration of fixations are not closely coupled to memory performance. |
Lynn Huestegge Effects of vowel length on gaze durations in silent and oral reading Journal Article In: Journal of Eye Movement Research, vol. 3, no. 5, pp. 1–18, 2010. @article{Huestegge2010a, Vowel length is known to affect reaction times in single word reading. Eye movement studies involving silent sentence reading showed that phonological information of a word can be acquired even before it is fixated. However, it remained an open question whether vowel length directly influences oculomotor control in sentence reading. In the present eye tracking study, subjects read sentences that included target words of varying vowel length and frequency. In Experiment 1, subjects read silently for comprehension, whereas Experiment 2 involved oral reading. Experiments 3 and 4 additionally included an articulatory suppression task and a foot tapping task. Results indicated that in conditions that did not require additional articulation (Experiments 1 and 4) gaze durations were increased for words with long vowels compared to words with short vowels. Conditions that required simultaneous articulation (Experiments 2 and 3) did not yield a vowel length effect. The results point to an influence of phonetic properties on oculomotor control during silent reading around the time of the completion of lexical access. |
Lynn Huestegge; Diana Bocianski Effects of syntactic context on eye movements during reading Journal Article In: Advances in Cognitive Psychology, vol. 6, no. 6, pp. 79–87, 2010. @article{Huestegge2010b, Previous research has demonstrated that properties of a currently fixated word and of adjacent words influence eye movement control in reading. In contrast to such local effects, little is known about the global effects on eye movement control, for example global adjustments caused by processing difficulty of previous sentences. In the present study, participants read text passages in which voice (active vs. passive) and sentence structure (embedded vs. non-embedded) were manipulated. These passages were followed by identical target sentences. The results revealed effects of previous sentence structure on gaze durations in the target sentence, implying that syntactic properties of previously read sentences may lead to a global adjustment of eye movement control. |
Lynn Huestegge; Iring Koch Crossmodal action selection: Evidence from dual-task compatibility Journal Article In: Memory & Cognition, vol. 38, no. 4, pp. 493–501, 2010. @article{Huestegge2010, Response-related mechanisms of multitasking were studied by analyzing simultaneous processing of responses in different modalities (i.e., crossmodal action). Participants responded to a single auditory stimulus with a saccade, a manual response (single-task conditions), or both (dual-task condition). We used a spatially incompatible stimulus-response mapping for one task, but not for the other. Critically, inverting these mappings varied temporal task overlap in dual-task conditions while keeping spatial incompatibility across responses constant. Unlike previous paradigms, temporal task overlap was manipulated without utilizing sequential stimulus presentation, which might induce strategic serial processing. The results revealed dual-task costs, but these were not affected by an increase of temporal task overlap. This finding is evidence for parallel response selection in multitasking. We propose that crossmodal action is processed by a central mapping-selection mechanism in working memory and that the dual-task costs are mainly caused by mapping-related crosstalk. |
Lynn Huestegge; Iring Koch Fixation disengagement enhances peripheral perceptual processing: Evidence for a perceptual gap effect Journal Article In: Experimental Brain Research, vol. 201, no. 4, pp. 631–640, 2010. @article{Huestegge2010c, Temporal gaps between the offset of a central fixation stimulus and the onset of an eccentric target typically reduce saccade latencies (saccadic gap effect). Here, we test whether temporal gaps also affect perceptual performance in peripheral vision. In Experiment 1, subjects executed saccades to briefly presented peripheral target letters and reported letter identity afterwards. A central fixation stimulus either remained visible throughout the trial (overlap) or disappeared 200 ms before letter onset (gap). Experiment 2 tested perceptual performance without saccade execution, whereas Experiment 3 tested saccade execution without perceptual demands. Peripheral letter perception performance was enhanced in gap as compared to overlap conditions (perceptual gap effect) irrespective of concurrent oculomotor demands. Furthermore, the saccadic gap effect was modulated by concurrent perceptual demands. Experiment 4 ruled out a general warning explanation of the perceptual gap effect. These findings extend recent theories assuming a strong coupling between the preparation of goal-directed saccades and shifts of visual attention from the spatial to the temporal domain. |
Lynn Huestegge; Hanns Jürgen Kunert; Ralph Radach Long-term effects of cannabis on eye movement control in reading Journal Article In: Psychopharmacology, vol. 209, no. 1, pp. 77–84, 2010. @article{Huestegge2010d, INTRODUCTION: Cannabis is known to produce substantial acute effects on human cognition and visuomotor skills. Many recent studies additionally revealed rather long-lasting effects on basic oculomotor control, especially after chronic use. However, it is still unknown to what extent these deficits play a role in everyday tasks that strongly rely on an efficient saccade system, such as reading. MATERIALS AND METHODS: In the present study, eye movements during sentence reading of 20 healthy long-term cannabis users (without acute tetrahydrocannabinol-intoxication) and 20 control participants were compared. Analyses focused on both spatial and temporal parameters of oculomotor control during reading. RESULTS: Long-term cannabis users exhibited increased fixation durations, more revisiting of previously inspected text, and a substantial prolongation of word viewing times, which were highly inflated for longer and less frequent words. DISCUSSION: The results indicate that relatively subtle performance deficits on the level of basic oculomotor control scale up as task complexity and cognitive demands increase. |
Lynn Huestegge; Eva Maria Skottke; Sina Anders; Jochen Müsseler; Günter Debus The development of hazard perception: Dissociation of visual orientation and hazard processing Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 13, no. 1, pp. 1–8, 2010. @article{Huestegge2010e, Eye movements are a key behavior for visual information processing in traffic situations and for vehicle control. Previous research showed that effective ways of eye guidance are related to better hazard perception skills. Furthermore, hazard perception is reported to be faster for experienced drivers as compared to novice drivers. However, little is known about whether this difference can be attributed to the development of visual orientation, or hazard processing. In the present study, we compared eye movements of 20 inexperienced and 20 experienced drivers in a hazard perception task. We separately measured (a) the interval between the onset of a static hazard scene and the first fixation on a potential hazard, and (b) the interval between the first fixation on a potential hazard and the final response. While overall RT was faster for experienced compared to inexperienced drivers, the scanning patterns revealed that this difference was due to faster processing after the initial fixation on the hazard, whereas scene scanning times until the initial fixation on the hazard did not differ between groups. |
Falk Huettig; Jidong Chen; Melissa Bowerman; Asifa Majid In: Journal of Cognition and Culture, vol. 10, no. 1-2, pp. 39–58, 2010. @article{Huettig2010a, In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers – a grammatical category in the language – on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category – even when the classifier is not explicitly present (cf., Huettig and Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifiermatch objects (e.g., chair) than to distractor objects, but when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing. |
Falk Huettig; Robert J. Hartsuiker Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception Journal Article In: Language and Cognitive Processes, vol. 25, no. 3, pp. 347–374, 2010. @article{Huettig2010, Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioural consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception. |
Jukka Hyönä; Raymond Bertram Do frequency characteristics of nonfixated words influence the processing of fixated words during reading? Journal Article In: European Journal of Cognitive Psychology, vol. 16, no. 1-2, pp. 104–127, 2010. @article{Hyoenae2010, Are readers capable of lexically processing more than one word at a time? In five eye movement experiments, we examined to what extent lexical characteristics of the nonfixated word to the right of fixation influenced readers' eye behavior on the fixated word. In three experiments, we varied the frequency of the initial constituent of two-noun compounds, while in two experiments the whole-word frequency was manipulated. The results showed that frequency characteristics of the parafoveal word sometimes affected eye behavior prior to fixating it, but the direction of effects was not consistent and the effects were not replicated across all experiments. Follow-up regression analyses suggested that foveal and parafoveal word length as well as the frequency of the word-initial trigram of the parafoveal word may modulate the parafoveal-on-foveal effects. It is concluded that low-frequency words or lexemes may under certain circumstances serve as a magnet to attract an early eye movement to them. However, further corroborative evidence is clearly needed. |
Aarlenne Zein Khan; Stephen J. Heinen; Robert M. McPeek Attentional cueing at the saccade goal, not at the target location, facilitates saccades Journal Article In: Journal of Neuroscience, vol. 30, no. 16, pp. 5481–5488, 2010. @article{Khan2010, Presenting a behaviorally irrelevant cue shortly before a target at the same location decreases the latencies of saccades to the target, a phenomenon known as exogenous attention facilitation. It remains unclear whether exogenous attention interacts with early, sensory stages or later, motor planning stages of saccade production. To distinguish between these alternatives, we used a saccadic adaptation paradigm to dissociate the location of the visual target from the saccade goal. Three male and four female human subjects performed both control trials, in which saccades were made to one of two target eccentricities, and adaptation trials, in which the target was shifted from one location to the other during the saccade. This manipulation adapted saccades so that they eventually were directed to the shifted location. In both conditions, a behaviorally irrelevant cue was flashed 66.7 ms before target appearance at a randomly selected one of seven positions that included the two target locations. In control trials, saccade latencies were shortest when the cue was presented at the target location and increased with cue-target distance. In contrast, adapted saccade latencies were shortest when the cue was presented at the adapted saccade goal, and not at the visual target location. The dynamics of adapted saccades were also altered, consistent with prior adaptation studies, except when the cue was flashed at the saccade goal. Overall, the results suggest that attentional cueing facilitates saccade planning rather than visual processing of the target. |
Paul S. Khayat; Robert Niebergall; Julio C. Martinez-Trujillo Attention differentially modulates similar neuronal responses evoked by varying contrast and direction stimuli in area MT Journal Article In: Journal of Neuroscience, vol. 30, no. 6, pp. 2188–2197, 2010. @article{Khayat2010, The effects of attention on the responses of visual neurons have been described as a scaling or additive modulation independent of stimulus features and contrast, or as a contrast-dependent modulation. We explored these alternatives by recording neuronal responses in macaque area MT to moving stimuli that evoked similar firing rates but varied in contrast and direction. We presented two identical pairs of stimuli, one inside the neurons' receptive field and the other outside, in the opposite hemifield. One stimulus of each pair always had high contrast and moved in the recorded cell's antipreferred direction (AP pattern), while the other (test pattern) could either move in the cell's preferred direction and vary in contrast, or have the same contrast as the AP pattern and vary in direction. For different stimulus pairs evoking similar responses, switching attention between the two AP patterns, or directing attention from a fixation spot to the AP pattern inside or outside the receptive field, produced a stronger suppression of responses to varying contrast pairs, reaching a maximum (approximately 20%) at intermediate contrast. For invariable contrast pairs, switching attention from the fixation spot to the AP pattern produced a modulation that ranged from 10% suppression when the test pattern moved in the cell's preferred direction to 14% enhancement when it moved in a direction 90 degrees away from that direction. Our results are incompatible with a scaling or additive modulation of MT neurons' response by attention, but support models where spatial and feature-based attention modulate input signals into the area normalization circuit. |
Paul S. Khayat; Robert Niebergall; Julio C. Martinez-Trujillo Frequency-dependent attentional modulation of local field potential signals in macaque area MT Journal Article In: Journal of Neuroscience, vol. 30, no. 20, pp. 7037–7048, 2010. @article{Khayat2010a, Visual attention modulates neuronal responses in primate motion processing area MT. However, whether it modulates the strength of local field potentials (LFP-power) within this area remains unexplored, as well as how this modulation relates to that of the neurons' response. We investigated these issues by simultaneously recording LFPs and neuronal responses evoked by moving random dot patterns of varying direction and contrast in area MT of two male monkeys (Macaca mulatta) during different behavioral conditions. We found that: (1) LFP-power in the gamma (30-120 Hz), but not in the delta (2-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta(1) (12-20 Hz), and beta(2) (20-30 Hz) frequency bands, was tuned for motion direction and contrast, similarly to the neurons' response, (2) shifting attention into a neuron's receptive field (RF) decreased LFP-power in the bands below 30 Hz (except the band), whereas shifting attention to a stimulus motion direction outside the RF had no effect in these bands, (3) LFP-power in the gamma band, however, exhibited both spatial- and motion direction-dependent attentional modulation (increase or decrease), which was highly correlated with the modulation of the neurons' response. These results demonstrate that in area MT, shifting attention into the RFs of neurons in the vicinity of the recording electrode, or to the direction of a moving stimulus located far away from these RFs, distinctively modulates LFP-power in the various frequency bands. They further suggest differences in the neural mechanisms underlying these types of attentional modulation of visual processing. |