All EyeLink Publications
Listed below are all 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024). You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search by individual author name. Eye-tracking studies grouped by research area can be found on the Solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2013 |
Irene Sperandio; Shaleeza Kaderali; Philippe A. Chouinard; Jared Frey; Melvyn A. Goodale Perceived size change induced by nonvisual signals in darkness: The relative contribution of vergence and proprioception Journal Article In: Journal of Neuroscience, vol. 33, no. 43, pp. 16915–16923, 2013. @article{Sperandio2013, Most of the time, the human visual system computes perceived size by scaling the size of an object on the retina with its perceived distance. There are instances, however, in which size-distance scaling is not based on visual inputs but on extraretinal cues. In the Taylor illusion, the perceived afterimage that is projected on an observer's hand will change in size depending on how far the limb is positioned from the eyes, even in complete darkness. In the dark, distance cues might derive from hand position signals either by an efference copy of the motor command to the moving hand or by proprioceptive input. Alternatively, there have been reports that vergence signals from the eyes might also be important. We performed a series of behavioral and eye-tracking experiments to tease apart how these different sources of distance information contribute to the Taylor illusion. We demonstrate that, with no visual information, perceived size changes mainly as a function of the vergence angle of the eyes, underscoring its importance in size-distance scaling. Interestingly, the strength of this relationship decreased when a mismatch between vergence and proprioception was introduced, indicating that proprioceptive feedback from the arm also affected size perception. By using afterimages, we provide strong evidence that the human visual system can benefit from sensory signals that originate from the hand when visual information about distance is unavailable. |
Miriam Spering; Elisa C. Dias; Jamie L. Sanchez; Alexander C. Schutz; Daniel C. Javitt Efference copy failure during smooth pursuit eye movements in schizophrenia Journal Article In: Journal of Neuroscience, vol. 33, no. 29, pp. 11779–11787, 2013. @article{Spering2013, Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target ("ball") would hit/miss a stationary vertical line segment ("goal"). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder. |
Patti Spinner; Susan M. Gass; Jennifer Behney Ecological validity in eye-tracking: An empirical study Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 389–415, 2013. @article{Spinner2013, Eye-trackers are becoming increasingly widespread as a tool to investigate second language (L2) acquisition. Unfortunately, clear standards for methodology—including font size, font type, and placement of interest areas—are not yet available. Although many researchers stress the need for ecological validity—that is, the simulation of natural reading conditions—it may not be prudent to use such a design to investigate new directions in eye-tracking research, and particularly in research involving small lexical items such as articles. In this study, we examine whether two different screen layouts can lead to different results in an eye-tracking study on the L2 acquisition of Italian gender. The results of an experiment with an ecologically valid design are strikingly different than the results of an experiment with a design tailored to track eye movements to articles. We conclude that differences in screen layout can have significant effects on results and that it is crucial that researchers report screen layout information. |
Andreas Sprenger; Monique Friedrich; Matthias Nagel; Christiane S. Schmidt; Steffen Moritz; Rebekka Lencer Advanced analysis of free visual exploration patterns in schizophrenia Journal Article In: Frontiers in Psychology, vol. 4, pp. 737, 2013. @article{Sprenger2013, Background: Visual scanpath analyses provide important information about attention allocation and attention shifting during visual exploration of social situations. This study investigated whether patients with schizophrenia simply show restricted free visual exploration behavior reflected by reduced saccade frequency and increased fixation duration or whether patients use qualitatively different exploration strategies than healthy controls. Methods: Scanpaths of 32 patients with schizophrenia and age-matched 33 healthy controls were assessed while participants freely explored six photos of daily life situations (20 s/photo) evaluated for cognitive complexity and emotional strain. Using fixation and saccade parameters, we compared temporal changes in exploration behavior, cluster analyses, attentional landscapes, and analyses of scanpath similarities between both groups. Results: We found fewer fixation clusters, longer fixation durations within a cluster, fewer changes between clusters, and a greater increase of fixation duration over time in patients compared to controls. Scanpath patterns and attentional landscapes in patients also differed significantly from those of controls. Generally, cognitive complexity and emotional strain had significant effects on visual exploration behavior. This effect was similar in both groups as were physical properties of fixation locations. Conclusions: Longer attention allocation to a given feature in a scene and less attention shifts in patients suggest a more focal processing mode compared to a more ambient exploration strategy in controls. 
These visual exploration alterations were present in patients independently of cognitive complexity, emotional strain or physical properties of visual cues implying that they represent a rather general deficit. Despite this impairment, patients were able to adapt their scanning behavior to changes in cognitive complexity and emotional strain similar to controls. |
Andreas Sprenger; Peter Trillenberg; Matthias Nagel; John A. Sweeney; Rebekka Lencer Enhanced top-down control during pursuit eye tracking in schizophrenia Journal Article In: European Archives of Psychiatry and Clinical Neuroscience, vol. 263, no. 3, pp. 223–231, 2013. @article{Sprenger2013a, Alterations in sensorimotor processing and predictive mechanisms have both been proposed as the primary cause of eye tracking deficits in schizophrenia. 20 schizophrenia patients and 20 healthy controls were assessed on blocks of predictably moving visual targets at constant speeds of 10, 15 or 30 degrees /s. To assess internal drive to the eye movement system based on predictions about the ongoing target movement, targets were blanked off for either 666 or 1,000 ms during the ongoing pursuit movement in additional conditions. Main parameters of interest were eye deceleration after extinction of the visual target and residual eye velocity during blanking intervals. Eye deceleration after target extinction, reflecting persistence of predictive signals, was slower in patients than in controls, implying greater rather than diminished utilization of predictive mechanisms for pursuit in schizophrenia. Further, residual gain was not impaired in patients indicating a basic integrity of internal predictive models. Pursuit velocity gain in patients was reduced in all conditions with visible targets replicating previous findings about a sensorimotor transformation deficit in schizophrenia. A pattern of slower eye deceleration and unimpaired residual gain during blanking intervals implies greater adherence to top-down predictive models for pursuit tracking in schizophrenia. This suggests that predictive modeling is relatively intact in schizophrenia and that the primary cause of abnormal visual pursuit is impaired sensorimotor transformation of the retinal error signal needed for the maintenance of accurate visually driven pursuit. 
This implies that disruption in extrastriate and sensorimotor systems rather than frontostriatal predictive mechanisms may underlie this widely reported endophenotype for schizophrenia. |
Dominic Thompson; S. P. Ling; Andriy Myachykov; Fernanda Ferreira; Christoph Scheepers Patient-related constraints on get- and be-passive uses in English: Evidence from paraphrasing Journal Article In: Frontiers in Psychology, vol. 4, pp. 848, 2013. @article{Thompson2013, In English, transitive events can be described in various ways. The main possibilities are active-voice and passive-voice, which are assumed to have distinct semantic and pragmatic functions. Within the passive, there are two further options, namely be-passive or get-passive. While these two forms are generally understood to differ, there is little agreement on precisely how and why. The passive Patient is frequently cited as playing a role, though again agreement on the specifics is rare. Here we present three paraphrasing experiments investigating Patient-related constraints on the selection of active vs. passive voice, and be- vs. get-passive, respectively. Participants either had to re-tell short stories in their own words (Experiments 1 and 2) or had to answer specific questions about the Patient in those short stories (Experiment 3). We found that a given Agent in a story promotes the use of active-voice, while a given Patient promotes be-passives specifically. Meanwhile, get-passive use increases when the Patient is marked as important. We argue that the three forms of transitive description are functionally and semantically distinct, and can be arranged along two dimensions: Patient Prominence and Patient Importance. We claim that active-voice has a near-complementary relationship with the be-passive, driven by which protagonist is given. Since both get and be are passive, they share the features of a Patient-subject and an optional Agent by-phrase; however, get specifically responds to a Patient being marked as important. Each of these descriptions has its own set of features that differentiate it from the others. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Selection of visual information for lightness judgements by eye movements Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–8, 2013. @article{Toscani2013, When judging the lightness of objects, the visual system has to take into account many factors such as shading, scene geometry, occlusions or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated it using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance. |
Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner Optimal sampling of visual information for lightness judgments Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 27, pp. 11163–11168, 2013. @article{Toscani2013a, The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. |
Joseph C. Toscano; Nathaniel D. Anderson; Bob McMurray Reconsidering the role of temporal order in spoken word recognition Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 5, pp. 981–987, 2013. @article{Toscano2013, Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes. |
R. Blythe Towal; Milica Mormann; Christof Koch Simultaneous modeling of visual saliency and value computation improves predictions of economic choice Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 40, pp. E3858–E3867, 2013. @article{Towal2013, Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. |
David J. Townsend Aspectual coercion in eye movements Journal Article In: Journal of Psycholinguistic Research, vol. 42, no. 3, pp. 281–306, 2013. @article{Townsend2013, Comprehension includes interpreting sentences in terms of aspectual categories such as processes (Harry climbed) and culminations (Harry reached the top). Adding a verbal modifier such as for many years to a culmination coerces its interpretation from one to many culminations. Previous studies have found that coercion increases lexical decision and meaning judgment time, but not eye fixation time. This study recorded eye movements as participants read sentences in which a coercive adverb increased the interpretation of multiple events. Adverbs appeared at the end of a clause and line; the post-adverb region appeared at the beginning of the next line; follow-up questions occasionally asked about aspectual meaning; and clause type varied systematically. Coercive adverbs increased eye fixation time in the post-adverb region and in the adverb and post-adverb regions combined. Factors that influence the appearance of aspectual coercion may include world knowledge, follow-up questions, and the location and ambiguity of adverbs. |
Annie Tremblay; Elsa Spinelli Segmenting liaison-initial words: The role of predictive dependencies Journal Article In: Language and Cognitive Processes, vol. 28, no. 8, pp. 1093–1113, 2013. @article{Tremblay2013, Listeners use several cues to segment speech into words. However, it is unclear how these cues work together. This study examines the relative weight of distributional and (natural) acoustic-phonetic cues in French listeners' recognition of temporarily ambiguous vowel-initial words in liaison contexts (e.g., parfait [t]abri "perfect shelter") and corresponding consonant-initial words (e.g., parfait tableau "perfect painting"). Participants completed a visual-world eye-tracking experiment in which they heard adjective-noun sequences where the pivotal consonant was /t/ (more frequent as word-initial consonant and thus expected advantage for consonant-initial words), /z/ (more frequent as liaison consonant and thus expected advantage for liaison-initial words), or /n/ (roughly as frequent as word-initial and liaison consonants and thus no expected advantage). The results for /t/ and /z/ were as expected, but those for /n/ showed an advantage for consonant-initial words over liaison-initial ones. These results are consistent with speech segmentation theories in which distributional information supersedes acoustic-phonetic information, but they also suggest a privileged status for consonant-initial words when the input does not strongly favour liaison-initial words. |
Alison M. Trude; Annie Tremblay; Sarah Brown-Schmidt Limitations on adaptation to foreign accents Journal Article In: Journal of Memory and Language, vol. 69, no. 3, pp. 349–367, 2013. @article{Trude2013, Although foreign accents can be highly dissimilar to native speech, existing research suggests that listeners readily adapt to foreign accents after minimal exposure. However, listeners often report difficulty understanding non-native accents, and the time-course and specificity of adaptation remain unclear. Across five experiments, we examined whether listeners could use a newly learned feature of a foreign accent to eliminate lexical competitors during on-line speech perception. Participants heard the speech of a native English speaker and a native speaker of Québec French who, in English, pronounces /i/ as [ɪ] (e.g., weak as wick) before all consonants except voiced fricatives. We examined whether listeners could learn to eliminate a shifted /i/-competitor (e.g., weak) when hearing the accented talker produce an unshifted word (e.g., wheeze). In four experiments, adaptation was strikingly limited, though improvement across the course of the experiment and with stimulus variations indicates learning was possible. In a fifth experiment, adaptation was not improved when a native English talker produced the critical vowel shift, demonstrating that the limitation is not simply due to the fact that the accented talker was non-native. These findings suggest that although listeners can arrive at the correct interpretation of a foreign accent, this process can pose significant difficulty. |
George T. Gitchel; Paul A. Wetzel; Mark S. Baron Slowed saccades and increased square wave jerks in essential tremor Journal Article In: Tremor and Other Hyperkinetic Movements, vol. 3, 2013. @article{Gitchel2013, BACKGROUND: Eye movements in essential tremor (ET) are poorly described and may present useful information on the underlying pathophysiology of the disorder. METHODS: Sixty patients with ET, including 15 de novo untreated patients, and 60 age-matched controls constitute the study population. A video-based eye tracker was used to assess binocular eye position. Oculomotor function was assessed while subjects followed random horizontally and vertically step-displaced targets. RESULTS: For all reflexive saccades, latencies were increased in ET subjects by a mean of 16.3% (p<0.01). Saccades showed reduced peak velocities with a lengthy, wavering velocity plateau, followed by slowed decelerations. For larger 30°+ saccades, peak velocities were decreased by a mean of 25.2% (p<0.01) and durations increased by 31.8% (p<0.01). The frequency of square wave jerks (SWJs) in patients was more than triple that of controls (p<0.0001). Despite frequent interruptions by SWJs, fixations were otherwise stable and indistinguishable from controls (root mean square [RMS] velocity |
Mackenzie G. Glaholt; Keith Rayner; Eyal M. Reingold Spatial frequency filtering and the direct control of fixation durations during scene viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1761–1773, 2013. @article{Glaholt2013, The present study employed a saccade-contingent change paradigm to investigate the effect of spatial frequency filtering on fixation durations during scene viewing. Subjects viewed grayscale scenes while encoding them for a later memory test. During randomly chosen saccades, the scene was replaced with an alternate version that remained throughout the critical fixation that followed. In Experiment 1, during the critical fixation, the scene could be changed to high-pass and low-pass spatial frequency filtered versions. Under both conditions, fixation durations increased, and the low-pass condition produced a greater effect than the high-pass condition. In subsequent experiments, we manipulated the familiarity of scene information during the critical fixation by flipping the filtered scenes upside down or horizontally. Under these conditions, we observed lengthening of fixation durations but no difference between the high-pass and low-pass conditions, suggesting that the filtering effect is related to the mismatch between information extracted within the critical fixation and the ongoing scene representation in memory. We also conducted control experiments that tested the effect of changes to scene orientation (Experiment 2a) and the addition of color to a grayscale scene (Experiment 2b). Fixation distribution analysis suggested two effects on the distribution of fixation durations: a fast-acting effect that was sensitive to all transsaccadic changes tested and a later effect in the tail of the distribution that was likely tied to the processing of scene information. These findings are discussed in the context of theories of oculomotor control during scene viewing. |
Fiona C. Glen; Nicholas D. Smith; David P. Crabb Saccadic eye movements and face recognition performance in patients with central glaucomatous visual field defects Journal Article In: Vision Research, vol. 82, pp. 42–51, 2013. @article{Glen2013, Patients with more advanced glaucoma are likely to experience problems with everyday visual tasks such as face recognition. However, some patients still perform well at face recognition despite their visual field (VF) defects. This study investigated whether certain eye movement patterns are associated with better performance in the Cambridge Face Memory Test. For patients with bilateral VF defects in their central 10° of VF, making larger saccades appeared to be associated with better face recognition performance (rho = 0.60 |
Aline Godfroid; Frank Boers; Alex Housen An eye for words: Gauging the role of attention in incidental L2 vocabulary acquisition by means of eye-tracking Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 3, pp. 483–517, 2013. @article{Godfroid2013, This eye-tracking study tests the hypothesis that more attention leads to more learning, following claims that attention to new language elements in the input results in their initial representation in long-term memory (i.e., intake; Robinson, 2003; Schmidt, 1990, 2001). Twenty-eight advanced learners of English read English texts that contained 12 targets for incidental word learning. The target was a known word (control condition), a matched pseudoword, or that pseudoword preceded or followed by the known word (with the latter being a cue to the pseudoword's meaning). Participants' eye-fixation durations on the targets during reading served as a measure of the amount of attention paid (see Rayner, 2009). Results indicate that participants spent more time processing the unknown pseudowords than their matched controls. The longer participants looked at a pseudoword during reading, the more likely they were to recognize that word in an unannounced vocabulary posttest. Finally, the known, appositive cues were fixated longer when they followed the pseudowords than when they preceded them; however, their presence did not lead to higher retention of the pseudowords. We discuss how eye-tracking may add to existing methodologies for studying attention and noticing (Schmidt, 1990) in SLA. |
Aline Godfroid; Maren S. Uggen Attention to irregular verbs by beginning learners of German: An eye-movement study Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 291–322, 2013. @article{Godfroid2013a, This study focuses on beginning second language learners' attention to irregular verb morphology, an area of grammar that many adults find difficult to acquire (e.g., DeKeyser, 2005; Larsen-Freeman, 2010). We measured beginning learners' eye movements during sentence processing to investigate whether or not they actually attend to irregular verb features and, if so, whether the amount of attention that they pay to these features predicts their acquisition. On the assumption that attention facilitates learning (e.g., Gass, 1997; Robinson, 2003; Schmidt, 2001), we expected more attention (i.e., longer fixations or more frequent comparisons between verb forms) to lead to more learning of the irregular verbs. Forty beginning learners of German read 12 German sentence pairs with stem-changing verbs and 12 German sentence pairs with regular verbs while an EyeLink 1000 recorded their eye movements. The stem-changing verbs consisted of six a → ä changing verbs and six e → i(e) changing verbs. Each verb appeared in a baseline sentence in the first-person singular, which has no stem change, and a critical sentence in the second- or third-person singular, which have a stem change for the irregular but not the regular verbs, on the same screen. Productive pre- and posttests measured the effects of exposure on learning. Results indicate that learners looked longer overall at stem-changing verbs than regular verbs, revealing a late effect of verb irregularity on reading times. Longer total times had a modest, favorable effect on the subsequent production of the stem vowel. Finally, the production of only the a → ä verbs—not the e → i(e) verbs—benefited from direct visual comparisons during reading, possibly because of the umlaut in the former. We interpret the results with reference to recent theory and research on attention, noticing, and language learning and provide a more nuanced and empirically based understanding of the noticing construct. |
Hayward J. Godwin; Valerie Benson; Denis Drieghe Using interrupted visual displays to explore the capacity, time course, and format of fixation plans during visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 6, pp. 1700–1712, 2013. @article{Godwin2013, We assessed fixation planning in visual search in two experiments by tracking participants' eye movements while they each searched for a simple target (a T shape) among a set of distractors (L shapes). On some trials, the search display was briefly interrupted with a blank screen and, following a randomly determined period of elapsed time, the search display was reinstated. In Experiment 1, we found that search continued during the interruption but fixation durations were increased and the accuracy of saccadic targeting was impaired. An MLM demonstrated that acuity played a role in determining whether fixated missing objects were processed during the interruption and that fixation planning was uninfluenced by the length of time available prior to the interruption. In Experiment 2, to check that fixations in the interruption periods were not random, half the distractors were blue (the target was blue as well) and half were yellow. All of the findings from Experiment 1 were replicated and the majority of fixations in the interruption period landed upon blue distractors. Results are discussed in relation to the capacity, time course, and format of fixation plans in visual search. |
Hayward J. Godwin; Stuart Hyde; Dominic Taunton; James Calver; James I. R. Blake; Simon P. Liversedge The influence of expertise on maritime driving behaviour Journal Article In: Applied Cognitive Psychology, vol. 27, no. 4, pp. 483–492, 2013. @article{Godwin2013a, We compared expert and novice behaviour in a group of participants as they engaged in a simulated maritime driving task. We varied the difficulty of the driving task by controlling the severity of the sea state in which they were driving their craft. Increases in sea severity increased the size of the upcoming waves while also increasing the length of the waves. Expert participants drove their craft at a higher speed than novices and decreased their fixation durations as wave severity increased. Furthermore, the expert participants increased the horizontal spread of their fixation positions as wave severity increased to a greater degree than novices. Conversely, novice participants showed evidence of a greater vertical spread of fixations than experts. By connecting our findings with previous research investigating eye movement behaviour and road driving, we suggest that novice or inexperienced drivers show inflexibility in adaptation to changing driving conditions. |
Hiroaki Gomi; Naotoshi Abekawa; Shinsuke Shimojo The hand sees visual periphery better than the eye: Motor-dependent visual motion analyses Journal Article In: Journal of Neuroscience, vol. 33, no. 42, pp. 16502–16509, 2013. @article{Gomi2013, Information pertaining to visual motion is used in the brain not only for conscious perception but also for various kinds of motor controls. In contrast to the increasing amount of evidence supporting the dissociation of visual processing for action versus perception, it is less clear whether the analysis of visual input is shared for characterizing various motor outputs, which require different kinds of interactions with environments. Here we show that, in human visuomotor control, motion analysis for quick hand control is distinct from that for quick eye control in terms of spatiotemporal analysis and spatial integration. The amplitudes of implicit and quick hand and eye responses induced by visual motion stimuli differently varied with stimulus size and pattern smoothness (e.g., spatial frequency). Surprisingly, the hand response did not decrease even when the visual motion with a coarse pattern was mostly occluded over the visual center, whereas the eye response markedly decreased. Since these contrasts cannot be ascribed to any difference in motor dynamics, they clearly indicate different spatial integration of visual motion for the individual motor systems. Going against the overly unified hierarchical view of visual analysis, our data suggest that visual motion analyses are separately tailored from early levels to individual motor modalities. Namely, the hand and eyes see the external world differently. |
Claudia C. Gonzalez; Melanie R. Burke The brain uses efference copy information to optimise spatial memory Journal Article In: Experimental Brain Research, vol. 224, no. 2, pp. 189–197, 2013. @article{Gonzalez2013a, Does a motor response to a target improve the subsequent recall of the target position or can we simply use peripheral position information to guide an accurate response? We suggest that a motor plan of the hand can be enhanced with actual motor and efference copy feedback (GoGo trials), which is absent in the passive observation of a stimulus (NoGo trials). To investigate this effect during eye and hand coordination movements, we presented stimuli in two formats (memory guided or visually guided) under three modality conditions (eyes only, hands only (with eyes fixated), or eyes and hand together). We found that during coordinated movements, both the eye and hand response times were facilitated when efference feedback of the movement was provided. Furthermore, both eye and hand movements to remembered locations were significantly more accurate in the GoGo than in the NoGo trial types. These results reveal that an efference copy of a motor plan enhances memory for a location that is not only observed in eye movements, but also translated downstream into a hand movement. These results have significant implications on how we plan, code and guide behavioural responses, and how we can optimise accuracy and timing to a given target. |
Esther G. González; Linda Lillakas; Alexander Lam; Brenda L. Gallie; Martin J. Steinbach Horizontal saccade dynamics after childhood monocular enucleation Journal Article In: Investigative Ophthalmology & Visual Science, vol. 54, no. 10, pp. 6463–6471, 2013. @article{Gonzalez2013, PURPOSE: We investigated the effects of monocularity on oculomotor control by examining the characteristics of the horizontal saccades of people with one eye, and comparing them to those of a group of age-matched controls who viewed the stimuli monocularly and binocularly. METHODS: Participants were tested in a no-gap, no-overlap saccadic task using a video-based remote eye tracker. One group consisted of unilaterally eye enucleated participants (N = 15; mean age, 31.27 years), the other of age-matched people with normal binocular vision (N = 18; mean age, 30.17 years). RESULTS: The horizontal saccade dynamics of enucleated people are similar to those of people with normal binocularity when they view monocularly and, with the exception of latency, when they view binocularly. The data show that the monocular saccades of control and enucleated observers have longer latencies than the binocular saccades of the control group, the saccades of the enucleated observers are as accurate as those of the controls viewing monocularly or binocularly, smaller saccades are more accurate than the larger ones, and abducting saccades are faster than adducting saccades. CONCLUSIONS: Our data suggest that the true monocularity produced by early enucleation does not result in slower visual processing in the afferent (sensory) pathway, or in deficits in the efferent (motor) pathways of the saccadic system. Possible mechanisms to account for the effects of monocular vision on saccades are discussed. |
Craig Hedge; Ute Leonards Using eye movements to explore switch costs in working memory Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–19, 2013. @article{Hedge2013, Updating object locations in working memory (WM) is faster when the same object is updated twice in a row compared to updating another object. In analogy to repetition priming effects in perceptual attention, this object-switch cost in WM is thought of as being due to the necessity to shift attention internally from one object to another. However, evidence for this hypothesis is only indirect. Here, we used eye tracking and a classic model of perceptual attention to get a more direct handle on the different processes underlying switch costs in spatial WM. Eye-movement data revealed three different contributors to switch costs. First, overt attention was attracted initially towards locations of the previously updated object. Second, longer fixation periods preceded eye movements between locations of different objects as compared to (previous and new) locations of the same object, most likely due to disengaging and reorienting focal attention between objects. Third, longer dwell times at the to-be-updated location preceded manual responses for switch updates as compared to repeats, probably indicating increased uncertainty between competing sources of activity after the actual attention shift. Results can easily be interpreted with existing (perceptual) attention models that propose competitive activation in an attention map for target objects. |
Sarah R. Heilbronner; Michael L. Platt Causal evidence of performance monitoring by neurons in posterior cingulate cortex during learning Journal Article In: Neuron, vol. 80, no. 6, pp. 1384–1391, 2013. @article{Heilbronner2013, The posterior cingulate cortex (CGp) is a major hub of the default mode network (DMN), a set of cortical areas with high resting activity that declines during task performance. This relationship suggests that DMN activity contributes to mental processes that are antagonistic to performance. Alternatively, DMN may detect conditions under which performance is poor and marshal cognitive resources for improvement. To test this idea, we recorded activity of CGp neurons in monkeys performing a learning task while varying reward size and novelty. We found that CGp neurons responded to errors, and this activity was magnified by small reward and novel stimuli. Inactivating CGp with muscimol impaired new learning when rewards were small but had no effect when rewards were large; inactivation did not affect performance on well-learned associations. Thus, CGp, and by extension the DMN, may support learning, and possibly other cognitive processes, by monitoring performance and motivating exploration. |
Jennifer J. Heisz; Molly M. Pottruff; David I. Shore Females scan more than males: A potential mechanism for sex differences in recognition memory Journal Article In: Psychological Science, vol. 24, no. 7, pp. 1157–1163, 2013. @article{Heisz2013, Recognition-memory tests reveal individual differences in episodic memory; however, by themselves, these tests provide little information regarding the stage (or stages) in memory processing at which differences are manifested. We used eye-tracking technology, together with a recognition paradigm, to achieve a more detailed analysis of visual processing during encoding and retrieval. Although this approach may be useful for assessing differences in memory across many different populations, we focused on sex differences in face memory. Females outperformed males on recognition-memory tests, and this advantage was directly related to females' scanning behavior at encoding. Moreover, additional exposures to the faces reduced sex differences in face recognition, which suggests that males may be able to improve their recognition memory by extracting more information at encoding through increased scanning. A strategy of increased scanning at encoding may prove to be a simple way to enhance memory performance in other populations with memory impairment. |
Christoph Helmchen; Stefan Glasauer; Andreas Sprenger Inverse eye position dependency of downbeat nystagmus in midline medullary lesion Journal Article In: Journal of Neurology, vol. 260, no. 11, pp. 2908–2910, 2013. @article{Helmchen2013, Downbeat nystagmus (DBN) is a persisting ocular motor sign most often found in vestibulo-cerebellar midline lesions. DBN typically increases on lateral and downward gaze and decreases with upward gaze [3], and thereby obeys Alexander's law which states that the slow phase velocity (SPV) is higher with gaze in the fast phase direction of nystagmus compared with gaze in the slow phase direction [11]. DBN may reverse into upward beating nystagmus on upward gaze or in the supine position [3, 4]. Only exceptionally, DBN may occur in midline pontomedullary [12], pontine [6] or medullary [1, 8] brainstem lesions. Current pathophysiological models on DBN are based on a dysfunction of the cerebellar flocculus and an impaired cerebellar negative feedback loop to the brainstem neural integrator, which accounts for the typical gaze-dependent increase of DBN with lateral and downward gaze [5]. Contrary to this concept, we here report about a patient with a confined lesion in the midline medullary brainstem presenting with an inverse eye position dependency: DBN increased with upgaze and decreased with downgaze. This may shed some new light on the impact of floccular afferents, i.e., the paramedian medullary tract (PMT) cells, on the eye position dependency of DBN and it may have some new clinical implications. |
John M. Henderson; Steven G. Luke; Joseph Schmidt; John E. Richards Co-registration of eye movements and event-related potentials in connected-text paragraph reading Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 28, 2013. @article{Henderson2013, Eyetracking during reading has provided a critical source of on-line behavioral data informing basic theory in language processing. Similarly, event-related potentials (ERPs) have provided an important on-line measure of the neural correlates of language processing. Recently there has been strong interest in co-registering eyetracking and ERPs from simultaneous recording to capitalize on the strengths of both techniques, but a challenge has been devising approaches for controlling artifacts produced by eye movements in the EEG waveform. In this paper we describe our approach to correcting for eye movements in EEG and demonstrate its applicability to reading. The method is based on independent components analysis, and uses three criteria for identifying components tied to saccades: (1) component loadings on the surface of the head are consistent with eye movements; (2) source analysis localizes component activity to the eyes, and (3) the temporal activation of the component occurred at the time of the eye movement and differed for right and left eye movements. We demonstrate this method's applicability to reading by comparing ERPs time-locked to fixation onset in two reading conditions. In the text-reading condition, participants read paragraphs of text. In the pseudo-reading control condition, participants moved their eyes through spatially similar pseudo-text that preserved word locations, word shapes, and paragraph spatial structure, but eliminated meaning. The corrected EEG, time-locked to fixation onsets, showed effects of reading condition in early ERP components. The results indicate that co-registration of eyetracking and EEG in connected-text paragraph reading is possible, and has the potential to become an important tool for investigating the cognitive and neural bases of on-line language processing in reading. |
John M. Henderson; Antje Nuthmann; Steven G. Luke Eye movement control during scene viewing: Immediate effects of scene luminance on fixation durations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 2, pp. 318–322, 2013. @article{Henderson2013a, Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using a novel scene degradation paradigm based on a saccade-contingent display change method, scenes were reduced in luminance during saccades ending in critical fixations. Results from two experiments showed that the durations of the critical fixations were immediately affected by scene luminance, with a monotonic relationship between luminance reduction and fixation duration. The results are the first to demonstrate that fixation durations in scene viewing are immediately influenced by the ease of processing of the image currently in view. These results are consistent with the CRISP (a timer-Controlled Random-Walk with Inhibition for Saccade Planning) computational model of saccade generation in scenes, proposing that difficulty in moment-by-moment visual and cognitive processing of the scene modulates fixation durations. |
John M. Henderson; Svetlana V. Shinkareva; Jing Wang; Steven G. Luke; Jennifer Olejarczyk Predicting cognitive state from eye movements Journal Article In: PLoS ONE, vol. 8, no. 5, pp. e64937, 2013. @article{Henderson2013b, In human vision, acuity and color sensitivity are greatest at the center of fixation and fall off rapidly as visual eccentricity increases. Humans exploit the high resolution of central vision by actively moving their eyes three to four times each second. Here we demonstrate that it is possible to classify the task that a person is engaged in from their eye movements using multivariate pattern classification. The results have important theoretical implications for computational and neural models of eye movement control. They also have important practical implications for using passively recorded eye movements to infer the cognitive state of a viewer, information that can be used as input for intelligent human-computer interfaces and related applications. |
James P. Herman; C. Phillip Cloud; Josh Wallman End-point variability is not noise in saccade adaptation Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e59731, 2013. @article{Herman2013, When each of many saccades is made to overshoot its target, amplitude gradually decreases in a form of motor learning called saccade adaptation. Overshoot is induced experimentally by a secondary, backwards intrasaccadic target step (ISS) triggered by the primary saccade. Surprisingly, however, no study has compared the effectiveness of different sizes of ISS in driving adaptation by systematically varying ISS amplitude across different sessions. Additionally, very few studies have examined the feasibility of adaptation with relatively small ISSs. In order to best understand saccade adaptation at a fundamental level, we addressed these two points in an experiment using a range of small, fixed ISS values (from 0° to 1° after a 10° primary target step). We found that significant adaptation occurred across subjects with an ISS as small as 0.25°. Interestingly, though only adaptation in response to 0.25° ISSs appeared to be complete (the magnitude of change in saccade amplitude was comparable to the size of the ISS), further analysis revealed that a comparable proportion of the ISS was compensated for across conditions. Finally, we found that ISS size alone was sufficient to explain the magnitude of adaptation we observed; additional factors did not significantly improve explanatory power. Overall, our findings suggest that current assumptions regarding the computation of saccadic error may need to be revisited. |
Peter C. Gordon; Patrick Plummer; Wonil Choi See before you jump: Full recognition of parafoveal words precedes skips during reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 633–641, 2013. @article{Gordon2013, Serial attention models of eye-movement control during reading were evaluated in an eye-tracking experiment that examined how lexical activation combines with visual information in the parafovea to affect word skipping (where a word is not fixated during first-pass reading). Lexical activation was manipulated by repetition priming created through prime-target pairs embedded within a sentence. The boundary technique (Rayner, 1975) was used to determine whether the target word was fully available during parafoveal preview or whether it was available with transposed letters (e.g., Herman changed to Hreman). With full parafoveal preview, the target word was skipped more frequently when it matched the earlier prime word (i.e., was repeated) than when it did not match the earlier prime word (i.e., was new). With transposed-letter (TL) preview, repetition had no effect on skipping rates despite the great similarity of the TL preview string to the target word and substantial evidence that TL strings activate the words from which they are derived (Perea & Lupker, 2003). These results show that lexically based skipping is based on full recognition of the letter string in parafoveal preview and does not involve using the contextual constraint to compensate for the reduced information available from the parafovea. These results are consistent with models of eye-movement control during reading in which successive words in a text are processed one at a time (serially) and in which word recognition strongly influences eye movements. |
Frauke Görges; Frank Oppermann; Jörg D. Jescheniak; Herbert Schriefers Activation of phonological competitors in visual search Journal Article In: Acta Psychologica, vol. 143, no. 2, pp. 168–175, 2013. @article{Goerges2013, Recently, Meyer, Belke, Telling and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even in situations in which participants do not have the intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure by Meyer et al. Participants were familiarized with target and competitor objects and their names at the beginning of the experiment and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed. |
Martin Gorges; Hans Peter Müller; Dorothée Lulé; Albert C. Ludolph; Elmar H. Pinkhardt; Jan Kassubek In: Brain Connectivity, vol. 3, no. 3, pp. 265–272, 2013. @article{Gorges2013a, In addition to the skeleto-motor deficits, patients with Parkinson's disease (PD) frequently present with oculomotor dysfunctions such as impaired smooth pursuit and saccadic abnormalities. There is increasing evidence for an impaired cortical function to be responsible for oculomotor deficits that are associated with lack of inhibitory control; however, these pathomechanisms still remain poorly understood. By means of "task-free" resting-state functional magnetic resonance imaging (rs-fMRI), functional connectivity changes in PD within the default mode network (DMN) have been reported. The aim of this study was to investigate whether altered functional connectivity within the DMN was correlated with oculomotor parameter changes in PD. Twelve PD patients and 13 matched healthy controls underwent rs-fMRI at 1.5 T and video-oculography (VOG) using the EyeLink system. Rs-fMRI seed-based region-to-region connectivity analysis was performed, including medial prefrontal cortex (mPFC), medial temporal lobe (MTL), posterior cingulate cortex (PCC), and hippocampal formation (HF); while VOG examination comprised ocular reactive saccades, smooth pursuit, and executive tests. Rs-fMRI analysis demonstrated a decreased region-to-region functional connectivity between mPFC and PCC as well as increased connectivity between bilateral HF in PD compared with controls. In VOG, patients and controls differed in terms of executive tests outcome, smooth pursuit eye movement, and visually guided reactive saccades but not in peak eye velocity. A significant relationship was observed between saccadic accuracy and functional connectivity strengths between MTL and PCC. These results suggest that PD-associated changes in DMN connectivity are correlated with saccadic hypometria, in particular in the vertical direction. |
Davood G. Gozli; Amy Chow; Alison L. Chasteen; Jay Pratt Valence and vertical space: Saccade trajectory deviations reveal metaphorical spatial activation Journal Article In: Visual Cognition, vol. 21, no. 5, pp. 628–646, 2013. @article{Gozli2013, Concepts of positive and negative valence are metaphorically structured in space (e.g., happy is up, sad is down). In fact, coupling a conceptual task (e.g., evaluating words as positive or negative) with a visuospatial task (e.g., identifying stimuli above or below fixation) often gives rise to metaphorical congruency effects. For instance, after reading a positive concept, visual target processing is facilitated above fixation. However, it is possible that tasks requiring upwards and downwards attentional orienting artificially strengthen the link between vertical space and semantic valence. For this reason, in the present study the vertical axis was uncoupled from the response axis. Participants made eye movements along the horizontal axis after reading positive or negative affect words, while their saccade movement trajectories were recorded. Based on previous research on saccade trajectory deviation, we predicted that fast saccade trajectories curve towards the salient segment of space, whereas slow saccade trajectories would curve away from the salient segment. Examining saccadic trajectories revealed a pattern of deviations along the vertical axis consistent with the metaphorical congruency account, although this pattern was mainly driven by positive concepts. These results suggest that semantic processing of valence can automatically recruit spatial features along the vertical axis. |
Joshua A. Granek; Laure Pisella; John Stemberger; Alain Vighetto; Yves Rossetti; Lauren E. Sergio Decoupled visually-guided reaching in optic ataxia: Differences in motor control between canonical and non-canonical orientations in space Journal Article In: PLoS ONE, vol. 8, no. 12, pp. e86138, 2013. @article{Granek2013, Guiding a limb often involves situations in which the spatial location of the target for gaze and limb movement are not congruent (i.e. have been decoupled). Such decoupled situations involve both the implementation of a cognitive rule (i.e. strategic control) and the online monitoring of the limb position relative to gaze and target (i.e. sensorimotor recalibration). To further understand the neural mechanisms underlying these different types of visuomotor control, we tested patient IG who has bilateral caudal superior parietal lobule (SPL) damage resulting in optic ataxia (OA), and compared her performance with six age-matched controls on a series of center-out reaching tasks. The tasks comprised 1) directing a cursor that had been rotated (180° or 90°) within the same spatial plane as the visual display, or 2) moving the hand along a different spatial plane than the visual display (horizontal or para-sagittal). Importantly, all conditions were performed towards visual targets located along either the horizontal axis (left and right; which can be guided from strategic control) or the diagonal axes (top-left and top-right; which require on-line trajectory elaboration and updating by sensorimotor recalibration). The bilateral OA patient performed much better in decoupled visuomotor control towards the horizontal targets, a canonical situation in which well-categorized allocentric cues could be utilized (i.e. guiding cursor direction perpendicular to computer monitor border). Relative to neurologically intact adults, IG's performance suffered towards diagonal targets, a non-canonical situation in which only less-categorized allocentric cues were available (i.e. guiding cursor direction at an off-axis angle to computer monitor border), and she was therefore required to rely on sensorimotor recalibration of her decoupled limb. We propose that an intact caudal SPL is crucial for any decoupled visuomotor control, particularly when relying on the realignment between vision and proprioception without reliable allocentric cues towards non-canonical orientations in space. |
Tsafrir Greenberg; Joshua M. Carlson; Jiook Cha; Greg Hajcak; Lilianne R. Mujica-Parodi Neural reactivity tracks fear generalization gradients Journal Article In: Biological Psychology, vol. 92, no. 1, pp. 2–8, 2013. @article{Greenberg2013, Recent studies on fear generalization have demonstrated that fear-potentiated startle and skin conductance responses to a conditioned stimulus (CS) generalize to similar stimuli, with the strength of the fear response linked to perceptual similarity to the CS. The aim of the present study was to extend this work by examining neural correlates of fear generalization. An initial experiment (N= 8) revealed that insula reactivity tracks the conditioned fear gradient. We then replicated this effect in a larger independent sample (N= 25). Activation in the insula, anterior cingulate, right supplementary motor cortex and caudate increased reactivity as generalization stimuli (GS) were more similar to the CS, consistent with participants' overall ratings of perceived shock likelihood and pupillary response to each stimulus. |
Tsafrir Greenberg; Joshua M. Carlson; Jiook Cha; Greg Hajcak; Lilianne R. Mujica-Parodi Ventromedial prefrontal cortex reactivity is altered in generalized anxiety disorder during fear generalization Journal Article In: Depression and Anxiety, vol. 30, no. 3, pp. 242–250, 2013. @article{Greenberg2013a, BACKGROUND: Fear generalization is thought to contribute to the development and maintenance of anxiety symptoms and accordingly has been the focus of recent research. Previously, we reported that in healthy individuals (N = 25) neural reactivity in the insula, anterior cingulate cortex (ACC), supplementary motor area (SMA), and caudate follow a generalization gradient with a peak response to a conditioned stimulus (CS) that declines with greater perceptual dissimilarity of generalization stimuli (GS) to the CS. In contrast, reactivity in the ventromedial prefrontal cortex (vmPFC), a region linked to fear inhibition, showed an opposite response pattern. The aim of the current study was to examine whether neural responses to fear generalization differ in generalized anxiety disorder (GAD). A second aim was to examine connectivity of primary regions engaged by the generalization task in the GAD group versus healthy group, using psychophysiological interaction analysis. METHODS: Thirty-two women diagnosed with GAD were scanned using the same generalization task as our healthy group. RESULTS: Individuals with GAD exhibited a less discriminant vmPFC response pattern suggestive of deficient recruitment of vmPFC during fear inhibition. Across participants, there was enhanced anterior insula (aINS) coupling with the posterior insula, ACC, SMA, and amygdala during presentation of the CS, consistent with a modulatory role for the aINS in the execution of fear responses. CONCLUSIONS: These findings suggest that deficits in fear regulation, rather than in the excitatory response itself, are more critical to the pathophysiology of GAD in the context of fear generalization. |
Harold H. Greene; James M. Brown; Bryce A. Paradis Luminance contrast and the visual span during visual target localization Journal Article In: Displays, vol. 34, no. 1, pp. 27–32, 2013. @article{Greene2013, A concern for designers of monocular and binocular devices is the ability of users to search for, and localize target items embedded in noisy displays. Twenty-two participants searched (12 under monocular conditions) to localize a target embedded in random gray dot displays. The target was defined by a variation in pattern that did not differ in average contrast from the rest of each display. Displays were presented at 0.54 and 0.04 Michelson contrast. Across binocular and monocular viewings, fixation counts increased with decreasing contrast, but the gradient was steeper for monocular viewing. With decreasing contrast, fixations were longer, and the amplitudes of saccades used to localize the target decreased. The findings highlight for monocular vs. binocular target localization, the importance of considering separately, how many fixations are needed to localize the target, and how close to fixation the target must be for it to be noticed. |
Gordian Griffiths; Arvid Herwig; Werner X. Schneider Stimulus localization interferes with stimulus recognition: Evidence from an attentional blink paradigm Journal Article In: Journal of Vision, vol. 13, no. 7, pp. 1–14, 2013. @article{Griffiths2013, Recognition of a second target (T2) can be impaired if presented within 500 ms after a first target (T1): This interference phenomenon is called the attentional blink (AB; e.g., Raymond, Shapiro, & Arnell, 1992) and can be viewed as emerging from limitations in the allocation of visual attention (VA) over time. AB tasks typically require participants to detect or identify targets based on their visual properties, i.e., pattern recognition. However, no study so far has investigated whether an AB for pattern recognition of T2 can be elicited if T1 implies a second major function of the visual system, i.e., spatial computations. Therefore, we tested in two experiments whether localization of a peripherally presented dot (T1) interferes with the identification of a trailing centrally presented letter T2. For Experiment 1, T2 performance increased with onset asynchrony of both targets in single-task (only report letter) and dual-task conditions. Besides this task-independent T2 deficit, task-dependent interference (difference between single- and dual-task conditions) was observed in Experiment 2, when T1 was followed by location distractors. Overall, our results indicate that limitations in the allocation of VA over time (i.e., an AB) can also be found if T1 requires localization while T2 requires the standard pattern recognition task. The results are interpreted on the basis of a common temporal attentional mechanism for pattern recognition and spatial computations. |
Martin Groen; Jan Noyes Establishing goals and maintaining coherence in multiparty computer-mediated communication Journal Article In: Discourse Processes, vol. 50, no. 2, pp. 85–106, 2013. @article{Groen2013, Communicating via text-only computer-mediated communication (CMC) channels is associated with a number of issues that would impair users in achieving dialogue coherence and goals. It has been suggested that humans have devised novel adaptive strategies to deal with those issues. However, it could be that humans rely on "classic" coherence devices too. In this study, we investigate whether relevancy markers, a subset of discourse markers, are relied on to assess dialogue coherence and goals. The results of two experiments showed that participants oriented systematically on the relevancy markers and that substitution of the original markers for other (dis)similar words affected eye movements and task performance. It appears that, despite the loosely connected dialogue contributions, the multiple concurrent dialogues, and the multiparty nature of text-only CMC dialogues, humans are still able to locate coherence- and goal-related information by relying on the presence of the relevancy markers. |
Michael A. Grubb; Nancy J. Minshew; David J. Heeger; Marisa Carrasco Exogenous spatial attention: Evidence for intact functioning in adults with autism spectrum disorder Journal Article In: Journal of Vision, vol. 13, no. 14, pp. 1–13, 2013. @article{Grubb2013, Deficits or atypicalities in attention have been reported in individuals with autism spectrum disorder (ASD), yet no consensus on the nature of these deficits has emerged. We conducted three experiments that paired a peripheral precue with a covert discrimination task, using protocols for which the effects of covert exogenous spatial attention on early vision have been well established in typically developing populations. Experiment 1 assessed changes in contrast sensitivity, using orientation discrimination of a contrast-defined grating; Experiment 2 evaluated the reduction of crowding in the visual periphery, using discrimination of a letter-like figure with flanking stimuli at variable distances; and Experiment 3 assessed improvements in visual search, using discrimination of the same letter-like figure with a variable number of distractor elements. In all three experiments, we found that exogenous attention modulated visual discriminability in a group of high-functioning adults with ASD and that it did so in the same way and to the same extent as in a matched control group. We found no evidence to support the hypothesis that deficits in exogenous spatial attention underlie the emergence of core ASD symptomatology. |
Katherine Guérard; Jean Saint-Aubin; Marilyne Maltais The role of verbal memory in regressions during reading Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 122–136, 2013. @article{Guerard2013, During reading, a number of eye movements are made backward, on words that have already been read. Recent evidence suggests that such eye movements, called regressions, are guided by memory. Several studies point to the role of spatial memory, but evidence for the role of verbal memory is more limited. In the present study, we examined the factors that modulate the role of verbal memory in regressions. Participants were required to make regressions on target words located in sentences displayed on one or two lines. In Experiment 1, verbal interference affected regressions, but only when participants executed a regression on a word located in the first part of the sentence, irrespective of the number of lines on which the sentence was displayed. Experiments 2 and 3 showed that the effect of verbal interference on words located in the first part of the sentence disappeared when participants initiated the regression from the middle of the sentence. Our results suggest that verbal memory is recruited to guide regressions, but only for words read a longer time ago. |
Jason P. Gallivan; Craig S. Chapman; D. Adam Mclean; J. Randall Flanagan; Jody C. Culham Activity patterns in the category-selective occipitotemporal cortex predict upcoming motor actions Journal Article In: European Journal of Neuroscience, vol. 38, no. 3, pp. 2408–2424, 2013. @article{Gallivan2013a, Converging lines of evidence point to the occipitotemporal cortex (OTC) as a critical structure in visual perception. For instance, human functional magnetic resonance imaging (fMRI) has revealed a modular organisation of object-selective, face-selective, body-selective and scene-selective visual areas in the OTC, and disruptions to the processing within these regions, either in neuropsychological patients or through transcranial magnetic stimulation, can produce category-specific deficits in visual recognition. Here we show, using fMRI and pattern classification methods, that the activity in the OTC also represents how stimuli will be interacted with by the body, a level of processing more traditionally associated with the preparatory activity in sensorimotor circuits of the brain. Combining functional mapping of different OTC areas with a real object-directed delayed movement task, we found that the pre-movement spatial activity patterns across the OTC could be used to predict both the action of an upcoming hand movement (grasping vs. reaching) and the effector (left hand vs. right hand) to be used. Interestingly, we were able to extract this wide range of predictive movement information even though nearly all OTC areas showed either baseline-level or below baseline-level activity prior to action onset. Our characterisation of different OTC areas according to the features of upcoming movements that they could predict also revealed a general gradient of effector-to-action-dependent movement representations along the posterior-anterior OTC axis. These findings suggest that the ventral visual pathway, which is well known to be involved in object recognition and perceptual processing, plays a larger than previously expected role in preparing object-directed hand actions. |
Jason P. Gallivan; D. Adam McLean; J. Randall Flanagan; Jody C. Culham In: Journal of Neuroscience, vol. 33, no. 5, pp. 1991–2008, 2013. @article{Gallivan2013, Planning object-directed hand actions requires successful integration of the movement goal with the acting limb. Exactly where and how this sensorimotor integration occurs in the brain has been studied extensively with neurophysiological recordings in nonhuman primates, yet to date, because of limitations of non-invasive methodologies, the ability to examine the same types of planning-related signals in humans has been challenging. Here we show, using a multivoxel pattern analysis of functional MRI (fMRI) data, that the preparatory activity patterns in several frontoparietal brain regions can be used to predict both the limb used and hand action performed in an upcoming movement. Participants performed an event-related delayed movement task whereby they planned and executed grasp or reach actions with either their left or right hand toward a single target object. We found that, although the majority of frontoparietal areas represented hand actions (grasping vs reaching) for the contralateral limb, several areas additionally coded hand actions for the ipsilateral limb. Notable among these were subregions within the posterior parietal cortex (PPC), dorsal premotor cortex (PMd), ventral premotor cortex, dorsolateral prefrontal cortex, presupplementary motor area, and motor cortex, a region more traditionally implicated in contralateral movement generation. Additional analyses suggest that hand actions are represented independently of the intended limb in PPC and PMd. In addition to providing a unique mapping of limb-specific and action-dependent intention-related signals across the human cortical motor system, these findings uncover a much stronger representation of the ipsilateral limb than expected from previous fMRI findings. |
Lesya Y. Ganushchak; Andrea Krott; Steven Frisson; Antje S. Meyer Processing words and Short Message Service shortcuts in sentential contexts: An eye movement study Journal Article In: Applied Psycholinguistics, vol. 34, no. 1, pp. 163–179, 2013. @article{Ganushchak2013, The present study investigated whether Short Message Service shortcuts are more difficult to process in sentence context than the spelled-out word equivalent and, if so, how any additional processing difficulty arises. Twenty-four student participants read 37 Short Message Service shortcuts and word equivalents embedded in semantically plausible and implausible contexts (e.g., He left/drank u/you a note) while their eye movements were recorded. There were effects of plausibility and spelling on early measures of processing difficulty (first fixation durations, gaze durations, skipping, and first-pass regression rates for the targets), but there were no interactions of plausibility and spelling. Late measures of processing difficulty (second run gaze duration and total fixation duration) were only affected by plausibility but not by spelling. These results suggest that shortcuts are harder to recognize, but that, once recognized, they are integrated into the sentence context as easily as ordinary words. |
Hanna S. Gauvin; Robert J. Hartsuiker; Falk Huettig Speech monitoring and phonologically-mediated eye gaze in language perception and production: A comparison using printed word eye-tracking Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 818, 2013. @article{Gauvin2013, The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. |
Franziska Geringswald; Anne Herbik; Michael B. Hoffmann; Stefan Pollmann Contextual cueing impairment in patients with age-related macular degeneration Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–18, 2013. @article{Geringswald2013, Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues. |
Sarah J. Gervais; Arianne M. Holland; Michael D. Dodd My eyes are up here: The nature of the objectifying gaze toward women Journal Article In: Sex Roles, vol. 69, no. 11-12, pp. 557–570, 2013. @article{Gervais2013, Although objectification theory suggests that women frequently experience the objectifying gaze with many adverse consequences, there is scant research examining the nature and causes of the objectifying gaze for perceivers. The main purpose of this work was to examine the objectifying gaze toward women via eye tracking technology. A secondary purpose was to examine the impact of body shape on this objectifying gaze. To elicit the gaze, we asked participants (29 women, 36 men from a large Midwestern University in the U.S.), to focus on the appearance (vs. personality) of women and presented women with body shapes that fit cultural ideals of feminine attractiveness to varying degrees, including high ideal (i.e., hourglass-shaped women with large breasts and small waist-to-hip ratios), average ideal (with average breasts and average waist-to-hip ratios), and low ideal (i.e., with small breasts and large waist-to-hip ratios). Consistent with our main hypothesis, we found that participants focused on women's chests and waists more and faces less when they were appearance-focused (vs. personality-focused). Moreover, we found that this effect was particularly pronounced for women with high (vs. average and low) ideal body shapes in line with hypotheses. Finally, compared to female participants, male participants showed an increased tendency to initially exhibit the objectifying gaze and they regarded women with high (vs. average and low) ideal body shapes more positively, regardless of whether they were appearance-focused or personality-focused. Implications for objectification and person perception theories are discussed. |
Saeideh Ghahghaei; Karina J. Linnell; Martin H. Fischer; Amit Dubey; Robert Davis Effects of load on the time course of attentional engagement, disengagement, and orienting in reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 453–470, 2013. @article{Ghahghaei2013, We examined how the frequency of the fixated word influences the spatiotemporal distribution of covert attention during reading. Participants discriminated gaze-contingent probes that occurred with different spatial and temporal offsets from randomly chosen fixation points during reading. We found that attention was initially focused at fixation and that subsequent defocusing was slower when the fixated word was lower in frequency. Later in a fixation, attention oriented more towards the next saccadic target for high- than for low-frequency words. These results constitute the first report of the time course of the effect of load on attentional engagement and orienting in reading. They are discussed in the context of serial and parallel models of reading. |
Frouke Hermens; Tandra Ghose; Johan Wagemans Advance information modulates the global effect even without instruction on where to look Journal Article In: Experimental Brain Research, vol. 226, no. 4, pp. 639–648, 2013. @article{Hermens2013, When observers are asked to make an eye movement to a visual target in the presence of a near distractor, their eyes tend to land on a position in between the target and the distractor, an effect known as the global effect. While it was initially believed that the global effect is a mandatory eye movement strategy, recent studies have shown that explicit instructions to make an eye movement to a certain part of the scene can overrule the effect. We here investigate whether such top-down influences are also found when people are not actively involved in an explicit eye movement task, but instead, make eye movements in the service of another task. Participants were presented with arrays of yellow and green discs, each containing a letter, and were asked to identify a target letter. Because the discs were presented away from fixation, participants made an eye movement to the array of discs on most of the trials. An analysis of the landing sites of these eye movements revealed that, even without an explicit instruction, observers take the advance information about the colour of the disc containing the target into account before moving their eyes. Moreover, when asking participants to maintain fixation for intervals of different durations, it was found that the implicit top-down influences operated on a very similar time-scale as previously observed for explicit eye movement instructions. |
Halim Hicheur; Steeve Zozor; Aurélie Campagne; Alan Chauvin Microsaccades are modulated by both attentional demands of a visual discrimination task and background noise Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 1–20, 2013. @article{Hicheur2013, Microsaccades are miniature saccades occurring once or twice per second during visual fixation. While microsaccades and saccades share similarities at the oculomotor level, the functional roles of microsaccades are still debated. In this study, we examined the hypothesis that the microsaccadic activity is affected by the type of noisy background during the execution of a particular discrimination task. Human subjects had to judge the orientation of a tilted stimulus embedded in static or dynamic backgrounds in a forced-choice task paradigm, as adapted from Rucci, Iovin, Poletti, and Santini (2007). Static backgrounds induced more microsaccades than dynamic ones only during the execution of the discrimination task. A directional bias of microsaccades, dictated by the stimulus orientation, was temporally coupled with this period of increased activity. Both microsaccade rates and orientations were comparable across background types after the response time although subjects maintained fixation until the end of the trial. This represents a background-specific modulation of the microsaccadic activity driven by attentional demands. The visual influence of microsaccades on discrimination performances was modeled at the retinal level for both types of backgrounds. A higher simulated microsaccadic activity was necessary for static backgrounds in order to achieve discrimination performance scores comparable to those for dynamic backgrounds. Taken together, our experimental and theoretical findings further support the idea that microsaccades are under attentional control and represent an efficient sampling strategy allowing spatial information acquisition. |
Clayton Hickey; Wieske Zoest Reward-associated stimuli capture the eyes in spite of strategic attentional set Journal Article In: Vision Research, vol. 92, pp. 67–74, 2013. @article{Hickey2013, Theories of reinforcement learning have proposed that the association of reward to visual stimuli may cause these objects to become fundamentally salient and thus attention-drawing. A number of recent studies have investigated the oculomotor correlates of this reward-priming effect, but there is some ambiguity in this literature regarding the involvement of top-down attentional set. Existing paradigms tend to create a situation where participants are actively looking for a reward-associated stimulus before subsequently showing that this selective bias sustains when it no longer has strategic purpose. This perseveration of attentional set is potentially different in nature than the direct impact of reward proposed by theory. Here we investigate the effect of reward on saccadic selection in a paradigm where strategic attentional set is decoupled from the effect of reward. We find that during search for a uniquely oriented target, the receipt of reward following selection of a target characterized by an irrelevant unique color causes subsequent stimuli characterized by this color to be preferentially selected. Importantly, this occurs regardless of whether the color characterizes the target or distractor. Other analyses demonstrate that only features associated with correct selection of the target prime the target representation, and that the magnitude of this effect can be predicted by variability in saccadic indices of feedback processing. These results add to a growing literature demonstrating that reward guides visual selection, often in spite of our strategic efforts otherwise. |
Stephen L. Hicks; Rakesh Sharma; Amad N. Khan; Claire M. Berna; Andrea Waldecker; Kevin Talbot; Christopher Kennard; Martin R. Turner An eye-tracking version of the Trail-Making Test Journal Article In: PLoS ONE, vol. 8, no. 12, pp. e84061, 2013. @article{Hicks2013, The neurodegenerative disorder amyotrophic lateral sclerosis may render patients unable to speak or write, so that objective assessment of cognitive impairment, which is commonly of a dysexecutive nature, is challenging. There is therefore a need to develop other methods of assessment that utilize other relatively unaffected motor systems. In this proof-of-principle study a novel eye-tracking version of the trail-making test was compared with performance on the standard written version in a group of healthy volunteers. There was good correlation for speed between both versions of Part B (R^2 = 0.73), suggesting that this is a viable method to objectively assess cognitive impairment in disorders where patients are unable to speak or write. |
Matthew D. Hilchey; Jason Satel; Jason Ivanoff; Raymond M. Klein On the nature of the delayed "inhibitory" cueing effects generated by uninformative arrows at fixation Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 3, pp. 593–600, 2013. @article{Hilchey2013, When the interval between a spatially uninformative arrow and a visual target is short (<500 ms), response times (RTs) are fastest when the arrow points to the target. When this interval exceeds 500 ms, there is a near-universal absence of an effect of the arrow on RTs. Contrary to this expected pattern of results, Taylor and Klein (J Exp Psychol Hum Percept Perform 26:1639-1656, 2000) observed that RTs were slowest when a to-be-localized visual target occurred in the direction of a fixated arrow presented 1 s earlier (i.e., an "inhibitory" cueing effect; ICE). Here we examined which factor(s) may have allowed the arrow to generate an ICE. Our experiments indicated that the ICE was a side effect of subthreshold response activation attributable to a task-induced association between the arrow and a keypress response. Because the cause of this ICE was more closely related to subthreshold keypress activation than to oculomotor activation, we considered that the effect might be more similar to the negative compatibility effect (NCE) than to inhibition of return (IOR). This similarity raises the possibility that classical IOR, when caused by a spatially uninformative peripheral onset event and measured by a keypress response to a subsequent onset, might represent, in part, another instance of an NCE. Serendipitously, we discovered that context (i.e., whether an uninformative peripheral onset could occur at the time of an uninformative central arrow) ultimately determined whether the "inhibitory" aftermath of automatic response activation would affect output or input pathways. |
Rumi Hisakata; Masahiko Terao; Ikuya Murakami Illusory position shift induced by motion within a moving envelope during smooth-pursuit eye movements Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 1–12, 2013. @article{Hisakata2013, The static envelope of a Gabor patch with a moving carrier appears to shift in the direction of the carrier motion; this phenomenon is known as the motion-induced position shift (De Valois & De Valois, 1991; Ramachandran & Anstis, 1990). This conventional stimulus configuration contains at least three covarying factors: the retinal carrier velocity, the environmental carrier velocity, and the carrier velocity relative to the envelope velocity, which happens to be zero. We manipulated these velocities independently to identify which is critical, and we measured the perceived position of the moving Gabor patch relative to a reference stimulus moving in the same direction at the same speed. In the first experiment, the position of the moving envelope observed with fixation appeared to shift in the direction of the carrier velocity relative to the envelope velocity. Furthermore, the illusion was more pronounced when the carrier moved in a direction opposite to that of the envelope. In the second and third experiments, we measured the illusion during smooth-pursuit eye movement in which the envelope was either static or moving, thereby dissociating retinal and environmental velocities. Under all conditions, the illusion occurred according to the envelope-relative velocity of the carrier. Additionally, the illusion was more pronounced when the carrier and envelope moved in opposite directions. We conclude that the carrier's envelope-relative velocity is the primary determinant of the motion-induced position shift. |
Jillian Hobson; Gillian Bruce; Stephen H. Butler A flicker change blindness task employing eye tracking reveals an association with levels of craving not consumption Journal Article In: Journal of Psychopharmacology, vol. 27, no. 1, pp. 93–97, 2013. @article{Hobson2013, We investigated attentional biases with a flicker paradigm, examining the proportion of alcohol relative to neutral changes detected. Furthermore, we examined how measures of participants' initial orienting of attention and of their maintained attention relate to levels of alcohol consumption and subjective craving in social drinkers. The eye movements of 58 participants (24 male) were monitored whilst they completed a flicker-induced change blindness task using both simple stimuli and real world scenes, with both an alcohol and neutral change competing for detection. When examined in terms of consumption levels, we observed that heavier social drinkers detected a higher proportion of alcohol related changes in real world scenes only. However, we also observed that levels of craving were not indicative of levels of consumption in social drinkers. Furthermore, also in real world scenes only, higher cravers detected a greater proportion of alcohol related changes compared to lower cravers, and were also quicker to initially fixate on alcohol related stimuli. Thus we conclude that processing biases in the orienting of attention to alcohol related stimuli were demonstrated in higher craving compared to lower craving social users in real world scenes. However, this was not related to the level of consumption as would be expected. These results highlight various methodological and conceptual issues to be considered in future research. |
Ulrike Hochpöchler; Wolfgang Schnotz; Thorsten Rasch; Mark Ullrich; Holger Horz; Nele McElvany; Jürgen Baumert Dynamics of mental model construction from text and graphics Journal Article In: European Journal of Psychology of Education, vol. 28, no. 4, pp. 1105–1126, 2013. @article{Hochpoechler2013, When students read for learning, they frequently are required to integrate text and graphics information into coherent knowledge structures. The following study aimed at analyzing how students deal with texts and how they deal with graphics when they try to integrate the two sources of information. Furthermore, the study investigated differences between students from different school types and grades. Forty students from grades 5 and 8 from higher track and lower track of the German school system were asked to process and integrate texts and graphics in order to answer items from different levels of a text-picture integration taxonomy. Students' eye movements were recorded and analyzed. Results suggest fundamentally different functions of text and graphics, which are associated with different processing strategies. Texts are more likely to be used according to a coherence-formation strategy, whereas graphics are more likely to be used on demand as visual cognitive tools according to an information-selection strategy. Students from different tracks of schooling revealed different adaptivity with regard to the requirements of combining text and graphic information. |
Timothy L. Hodgson; Petroc Sumner; Dimitra Molyva; Ray Sheridan; Christopher Kennard Learning and switching between stimulus-saccade associations in Parkinson's disease Journal Article In: Neuropsychologia, vol. 51, no. 7, pp. 1350–1360, 2013. @article{Hodgson2013, Making flexible associations between what we see and what we do is important for many everyday tasks. Previous work in patients with focal lesions has shown that the control of saccadic eye movements in such contexts relies on a network of areas in the frontal cerebral cortex. These regions are reciprocally connected with structures in the basal ganglia although the contribution of these sub-cortical structures to oculomotor control in complex tasks is not well understood. We report the performance of patients with idiopathic Parkinson's disease (PDs) in a test which required learning and switching between arbitrary cue-saccade rules. In Experiment 1 feedback was given following each response which reliably indicated which of the two possible rules was correct. PDs were slower to learn the first cue-saccade association presented, but did not show increased error or reaction time switch costs when switching between two rules within blocks. In a follow-up experiment the feedback given by the computer was adjusted to be probabilistic such that executing a response based upon the "correct" rule only resulted in positive feedback on 80% of trials. Under these conditions patients were impaired in terms of response latencies and number of errors. In all conditions PDs showed multi-stepping/hypometria of saccades consistent with a motoric deficit in executing actions based on cognitive cues. The findings are consistent with a role for the nigrostriatal dopamine system in the reinforcement of saccade-response-outcome associations. Intact performance of PDs when associations are not stochastically reinforced suggests that striatal learning systems are complemented by cognitive representations of task rules which are unaffected in the early stages of PD. |
Maria J. S. Guerreiro; Dana R. Murphy; Pascal W. M. Van Gerven Making sense of age-related distractibility: The critical role of sensory modality Journal Article In: Acta Psychologica, vol. 142, no. 2, pp. 184–194, 2013. @article{Guerreiro2013, Older adults are known to have reduced inhibitory control and therefore to be more distractible than young adults. Recently, we have proposed that sensory modality plays a crucial role in age-related distractibility. In this study, we examined age differences in vulnerability to unimodal and cross-modal visual and auditory distraction. A group of 24 younger (mean age = 21.7 years) and 22 older adults (mean age = 65.4 years) performed visual and auditory n-back tasks while ignoring visual and auditory distraction. Whereas reaction time data indicated that both young and older adults are particularly affected by unimodal distraction, accuracy data revealed that older adults, but not younger adults, are vulnerable to cross-modal visual distraction. These results support the notion that age-related distractibility is modality dependent. |
Jon Guez; Adam P. Morris; Bart Krekelberg Intrasaccadic suppression is dominated by reduced detector gain Journal Article In: Journal of Vision, vol. 13, no. 8, pp. 1–11, 2013. @article{Guez2013, Human vision requires fast eye movements (saccades). Each saccade causes a self-induced motion signal, but we are not aware of this potentially jarring visual input. Among the theorized causes of this phenomenon is a decrease in visual sensitivity before (presaccadic suppression) and during (intrasaccadic suppression) saccades. We investigated intrasaccadic suppression using a perceptual template model (PTM) relating visual detection to different signal-processing stages. One stage changes the gain on the detector's input; another increases uncertainty about the stimulus, allowing more noise into the detector; and other stages inject noise into the detector in a stimulus-dependent or -independent manner. By quantifying intrasaccadic suppression of flashed horizontal gratings at varying external noise levels, we obtained threshold-versus-noise (TVN) data, allowing us to fit the PTM. We tested if any of the PTM parameters changed significantly between the fixation and saccade models and could therefore account for intrasaccadic suppression. We found that the dominant contribution to intrasaccadic suppression was a reduction in the gain of the visual detector. We discuss how our study differs from previous ones that have pointed to uncertainty as an underlying cause of intrasaccadic suppression and how the equivalent noise approach provides a framework for comparing the disparate neural correlates of saccadic suppression. |
M. Guitart-Masip; G. R. Barnes; A. Horner; Markus Bauer; Raymond J. Dolan; E. Duzel Synchronization of medial temporal lobe and prefrontal rhythms in human decision making Journal Article In: Journal of Neuroscience, vol. 33, no. 2, pp. 442–451, 2013. @article{GuitartMasip2013, Optimal decision making requires that we integrate mnemonic information regarding previous decisions with value signals that entail likely rewards and punishments. The fact that memory and value signals appear to be coded by segregated brain regions, the hippocampus in the case of memory and sectors of prefrontal cortex in the case of value, raises the question as to how they are integrated during human decision making. Using magnetoencephalography to study healthy human participants, we show increased theta oscillations over frontal and temporal sensors during nonspatial decisions based on memories from previous trials. Using source reconstruction we found that the medial temporal lobe (MTL), in a location compatible with the anterior hippocampus, and the anterior cingulate cortex in the medial wall of the frontal lobe are the source of this increased theta power. Moreover, we observed a correlation between theta power in the MTL source and behavioral performance in decision making, supporting a role for MTL theta oscillations in decision-making performance. These MTL theta oscillations were synchronized with several prefrontal sources, including lateral superior frontal gyrus, dorsal anterior cingulate gyrus, and medial frontopolar cortex. There was no relationship between the strength of synchronization and the expected value of choices. Our results indicate that mnemonic guidance of human decision making, beyond anticipation of expected reward, is supported by hippocampal-prefrontal theta synchronization. |
Ziad M. Hafed Alteration of visual perception prior to microsaccades Journal Article In: Neuron, vol. 77, no. 6, pp. 775–786, 2013. @article{Hafed2013, Gaze fixation is an active process, with the incessant occurrence of tiny eye movements, including microsaccades. While the retinal consequences of microsaccades may be presumed minimal because of their minute size, a significant perceptual consequence of these movements can also stem from active extraretinal mechanisms associated with corollaries of their motor generation. Here I show that prior to microsaccade onset, spatial perception is altered in a very specific manner: foveal stimuli are erroneously perceived as more eccentric, whereas peripheral stimuli are rendered more foveal. The mechanism for this perceptual "compression of space" is consistent with a spatially specific gain modulation of visual representations caused by the upcoming eye movements, as is hypothesized to happen for much larger saccades. I then demonstrate that this perimicrosaccadic perceptual alteration has at least one important functional consequence: it mediates visual-performance alterations similar to ones classically attributed to the cognitive process of covert visual attention. |
Peng Han; Daniel R. Saunders; Russell L. Woods; Gang Luo Trajectory prediction of saccadic eye movements using a compressed exponential model Journal Article In: Journal of vision, vol. 13, no. 8, pp. 1–13, 2013. @article{Han2013, Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3deg to 30deg, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R^2 > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8deg. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2deg dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays. |
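The abstract above does not give the model's functional form or fitting procedure; as a minimal sketch of the prediction idea only, the following assumes a hypothetical compressed exponential of the form x(t) = A·(1 − exp(−(t/τ)^β)), with amplitude A, time constant τ, and exponent β presumed already fitted from early saccade samples:

```python
import math

def compressed_exponential(t, amplitude, tau, beta):
    """Saccade displacement (deg) at time t (ms after saccade onset).

    Assumed form (not taken from the paper): A * (1 - exp(-(t / tau) ** beta)).
    Displacement rises from 0 and saturates at `amplitude`.
    """
    return amplitude * (1.0 - math.exp(-((t / tau) ** beta)))

def predict_gaze(samples, delay, amplitude, tau, beta):
    """Extrapolate displacement `delay` ms beyond the last received sample.

    `samples` is a list of (t_ms, displacement_deg) pairs assumed to have
    been used to fit (amplitude, tau, beta); only the last timestamp is
    needed to compensate the tracker-to-monitor transmission delay.
    """
    t_last = samples[-1][0]
    return compressed_exponential(t_last + delay, amplitude, tau, beta)
```

With a 10 ms delay, a gaze-contingent display would draw at `predict_gaze(...)` rather than at the last raw sample, trading a model assumption for reduced mid-saccade misalignment.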
David J. Hancock; Diane M. Ste-Marie Gaze behaviors and decision making accuracy of higher- and lower-level ice hockey referees Journal Article In: Psychology of Sport & Exercise, vol. 14, no. 1, pp. 66–71, 2013. @article{Hancock2013, Background: Gaze behaviors are often studied in athletes, but infrequently for sport officials. There is a need to better understand gaze behavior in refereeing in order to improve training and education related to visual search patterns, which have been argued to be related to decision making (Abernethy & Russell, 1987a). Objective: To examine gaze behaviors, decision accuracy, and decision sensitivity (using signal detection analysis) of ice hockey referees of varying skill levels in a laboratory setting. Design: Using an experimental design, we conducted multiple t-tests. Method: Higher-level (N = 15) and lower-level ice hockey referees (N = 15) wore a head-mounted eye movement recorder and made penalty/no penalty decisions related to ice hockey video clips on a computer screen. We recorded gaze behaviors, decision accuracy, and decision sensitivity for each participant. Results: Results of the t-tests indicated no group differences in gaze behaviors; however, higher-level referees made significantly more accurate decisions (both accuracy and sensitivity) than lower-level referees. Conclusion: Higher-level ice hockey referees are superior to lower-level referees on decision making, but referees do not differ on gaze behaviors. Possibly, higher-level referees process relevant decision making information more effectively. |
Anthony M. Harris; Roger W. Remington; Stefanie I. Becker Feature specificity in attentional capture by size and color Journal Article In: Journal of vision, vol. 13, no. 3, pp. 1–15, 2013. @article{Harris2013, Top-down guidance of visual attention has classically been thought to operate in a feature-specific manner. However, recent studies have shown that top-down visual attention can also be guided by information about target-nontarget feature relations (e.g., larger, redder, brighter). Here we recommend a minimal set of cues for differentiating between relational and feature-specific attentional guidance and examine contrasting predictions for the guidance of attention by size and color stimuli in a spatial cueing paradigm. In Experiment 1 we demonstrate that in search for size, when both feature-specific and relational strategies are available, participants adopt a relational search strategy. Experiment 2 shows that when feature-specific information is the only reliable information to guide attention to the target, participants are able to adopt a feature-specific set for size information. Finally, in Experiment 3 we extend our paradigm to differentiate between feature-specific and relational strategies in search for color. Together, these experiments help to clarify the conditions under which different attentional guidance strategies will be employed, and demonstrate a useful minimum cue requirement for differentiating between these two forms of top-down guidance. Implications for current theories of attention are discussed. |
Jesse A. Harris; Charles Clifton; Lyn Frazier Processing and domain selection: Quantificational variability effects Journal Article In: Language and Cognitive Processes, vol. 28, no. 10, pp. 1519–1544, 2013. @article{Harris2013a, Three studies investigated how readers interpret sentences with variable quantificational domains, for example, The army was mostly in the capital, where mostly may quantify over individuals or parts (Most of the army was in the capital) or over times (The army was in the capital most of the time). It is proposed that a general conceptual economy principle, No Extra Times, discourages the postulation of potentially unnecessary times, and thus favours the interpretation quantifying over parts. Disambiguating an ambiguously quantified sentence to a quantification over times interpretation was rated as less natural than disambiguating it to a quantification over parts interpretation (Experiment 1). In an interpretation questionnaire, sentences with similar quantificational variability were constructed so that both interpretations of the sentence would require postulating multiple times; this resulted in the elimination of the preference for a quantification over parts interpretation, suggesting the parts preference observed in Experiment 1 is not reducible to a lexical bias of the adverb mostly (Experiment 2). An eye movement recording study showed that, in the absence of prior evidence for multiple times, readers exhibit greater difficulty when reading material that forces a quantification over times interpretation than when reading material that allows a quantification over parts interpretation (Experiment 3). These experiments contribute to understanding readers' default assumptions about the temporal properties of sentences, which is essential for understanding the selection of a domain for adverbial quantifiers and, more generally, for understanding how situational constraints influence sentence processing. |
William J. Harrison; Jason B. Mattingley; Roger W. Remington Eye movement targets are released from visual crowding Journal Article In: Journal of Neuroscience, vol. 33, no. 7, pp. 2927–2933, 2013. @article{Harrison2013, Our ability to recognize objects in peripheral vision is impaired when other objects are nearby (Bouma, 1970). This phenomenon, known as crowding, is often linked to interactions in early visual processing that depend primarily on the retinal position of visual stimuli (Pelli, 2008; Pelli and Tillman, 2008). Here we tested a new account that suggests crowding is influenced by spatial information derived from an extraretinal signal involved in eye movement preparation. We had human observers execute eye movements to crowded targets and measured their ability to identify those targets just before the eyes began to move. Beginning ∼50 ms before a saccade toward a crowded object, we found that not only was there a dramatic reduction in the magnitude of crowding, but the spatial area within which crowding occurred was almost halved. These changes in crowding occurred despite no change in the retinal position of target or flanking stimuli. Contrary to the notion that crowding depends on retinal signals alone, our findings reveal an important role for eye movement signals. Eye movement preparation effectively enhances object discrimination in peripheral vision at the goal of the intended saccade. These presaccadic changes may enable enhanced recognition of visual objects in the periphery during active search of visually cluttered environments. |
Josephine Hartwig; Katharina M. Schnitzspahn; Matthias Kliegel; Boris M. Velichkovsky; Jens R. Helmert I see you remembering: What eye movements can reveal about process characteristics of prospective memory Journal Article In: International Journal of Psychophysiology, vol. 88, no. 2, pp. 193–199, 2013. @article{Hartwig2013, Prospective memory performance describes the delayed execution of an intended action. As this requires a mixture of memory and attentional control functions, current research aims at delineating the specific processes associated with solving a prospective memory task. Therefore, the current study measured, analysed and compared eye movements of participants who performed a prospective memory, a free viewing, and a visual search task. By keeping constant the prospective memory cue as well as the context of tasks, we aimed at putting the processes of solving prospective memory tasks into context. The results show that, when a prospective memory task is missed, the continuous gaze behaviour is rather similar to the gaze behaviour during free viewing. When the prospective memory task is successfully solved, on the other hand, average gaze behaviour is between free viewing and visual search. Furthermore, individual differences in eye movements were found between low and high performers. Our data suggest that a prospective memory task can be solved in different ways, and therefore different processes can be observed. |
Alistair J. Harvey; Wendy Kneller; Alison C. Campbell The elusive effects of alcohol intoxication on visual attention and eyewitness memory Journal Article In: Applied Cognitive Psychology, vol. 27, pp. 617–624, 2013. @article{Harvey2013, Alcohol is a contributing factor in many crimes, yet little is known of its effects on eyewitness memory and face identification. Some authors suggest that intoxication impairs attention and memory, particularly for peripheral scene information, but the data supporting this claim are limited. The present study therefore sought to determine whether (i) intoxicated participants spend less time fixating on peripheral regions of crime images than sober counterparts, (ii) less information is recognised from image regions receiving fewer gaze fixations and (iii) intoxicated participants are less able to identify the perpetrator of a crime than sober participants. Contrary to expectations, participants' ability to explore and subsequently recognise the contents of the stimulus scenes was unaffected by alcohol, suggesting that the relationship between intoxication, attention and eyewitness memory requires closer scrutiny. |
Alistair J. Harvey; Wendy Kneller; Alison C. Campbell The effects of alcohol intoxication on attention and memory for visual scenes. Journal Article In: Memory, vol. 21, no. 8, pp. 969–980, 2013. @article{Harvey2013a, This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes. |
Uwe Hassler; Uwe Friese; Ulla Martens; Nelson Trujillo-Barreto; Thomas Gruber Repetition priming effects dissociate between miniature eye movements and induced gamma-band responses in the human electroencephalogram Journal Article In: European Journal of Neuroscience, vol. 38, no. 3, pp. 2425–2433, 2013. @article{Hassler2013, The role of induced gamma-band responses (iGBRs) in the human electroencephalogram (EEG) is a controversial topic. On the one hand, iGBRs have been associated with neuronal activity reflecting the (re-)activation of cortical object representations. On the other hand, it was shown that miniature saccades (MSs) lead to high-frequency artifacts in the EEG that can mimic cortical iGBRs. We recorded EEG and eye movements simultaneously while participants were engaged in a combined repetition priming and object recognition experiment. MS rates were mainly modulated by object familiarity in a time window from 100 to 300 ms after stimulus onset. In contrast, artifact-corrected iGBRs were sensitive to object repetition and object familiarity in a prolonged time window. EEG source analyses revealed that stimulus repetitions modulated iGBRs in temporal and occipital cortex regions while familiarity was associated with activity in parieto-occipital regions. These results are in line with neuroimaging studies employing functional magnetic resonance imaging or magnetoencephalography. We conclude that MSs reflect early mechanisms of visual perception while iGBRs mirror the activation of cortical networks representing a perceived object. |
Stefan Hawelka; Sarah Schuster; Benjamin Gagl; Florian Hutzler Beyond single syllables: The effect of first syllable frequency and orthographic similarity on eye movements during silent reading Journal Article In: Language and Cognitive Processes, vol. 28, no. 8, pp. 1134–1153, 2013. @article{Hawelka2013, The study assessed the eye movements of 60 adult German readers during silent reading of target words, consisting of two and three syllables, embedded in sentences. The first objective was to assess whether the inhibitory effect of first syllable frequency, which was up to now primarily shown for isolated words, generalises to natural reading. The second objective was to assess the effect of orthographic similarity. First syllable frequency was defined phonologically and was based on the SUBTLEX norms for spoken language [Brysbaert et al. (2011). The word frequency effect: A review of recent developments and implications for the choice of frequency estimates in German. Experimental Psychology, 58, 412–424]. Orthographic similarity was indexed by orthographic Levenshtein distance neighbourhood frequency (NF) [Yarkoni, T., Balota, D. & Yap, M. (2008). Moving beyond Coltheart's N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15, 971–979]. We found inhibitory effects for first syllable frequency and for orthographic NF. First syllable frequency affected first fixation duration, which was considered as reflecting early effects in visual word recognition. Orthographic NF affected 'late' measures. These findings show that, first, the effect of first syllable frequency does generalise to silent reading. Second, the effect of orthographic NF, up to now investigated only for short words in the context of English, does generalise to multisyllabic words in the German orthography. Relating the effects to the individual reading rate of the participants revealed that the effects were consistent in fast readers but highly variable in slow readers. |
Benjamin Y. Hayden; Jack L. Gallant Working memory and decision processes in visual area V4 Journal Article In: Frontiers in Neuroscience, vol. 7, pp. 18, 2013. @article{Hayden2013, Recognizing and responding to a remembered stimulus requires the coordination of perception, working memory and decision-making. To investigate the role of visual cortex in these processes, we recorded responses of single V4 neurons during performance of a delayed match-to-sample task that incorporates rapid serial visual presentation of natural images. We found that neuronal activity during the delay period after the cue but before the images depends on the identity of the remembered image and that this change persists while distractors appear. This persistent response modulation has been identified as a diagnostic criterion for putative working memory signals; our data thus suggest that working memory may involve reactivation of sensory neurons. When the remembered image reappears in the neuron's receptive field, visually evoked responses are enhanced; this match enhancement is a diagnostic criterion for decision. One model that predicts these data is the matched filter hypothesis, which holds that during search V4 neurons change their tuning so as to match the remembered cue, and thus become detectors for that image. More generally, these results suggest that V4 neurons participate in the perceptual, working memory and decision processes that are needed to perform memory-guided decision-making. |
Lisa Kloft; Benedikt Reuter; Anja Riesel; Norbert Kathmann Impaired volitional saccade control: First evidence for a new candidate endophenotype in obsessive-compulsive disorder Journal Article In: European Archives of Psychiatry and Clinical Neuroscience, vol. 263, pp. 215–222, 2013. @article{Kloft2013, Recent research suggests that patients with obsessive-compulsive disorder (OCD) have deficits in the volitional control of saccades. Specific evidence comes from increased latencies of saccadic eye movements when they were volitionally executed but not when they were visually guided. The present study sought to test whether this deviance represents a cognitive endophenotype. To this end, first-degree relatives of OCD patients as genetic risk carriers were compared with OCD patients and healthy controls without a family history of OCD. Furthermore, as volitional response generation comprises selection and initiation of the required response, the study also sought to specify the cognitive mechanisms underlying impaired volitional response generation. Twenty-two unaffected first-degree relatives of OCD patients, 22 unmedicated OCD patients, and 22 healthy comparison subjects performed two types of volitional saccade tasks measuring response selection or only response initiation, respectively. Visually guided saccades were used as a control condition. Our results showed that unaffected first-degree relatives and OCD patients were significantly slowed compared to healthy comparison subjects in volitional response selection. Patients and relatives did not differ from each other. There was no group difference in the visually guided control condition. Taken together, the study provides first evidence that dysfunctional volitional response selection is a candidate endophenotype for OCD. |
Jonas Knöll; M. Concetta Morrone; Frank Bremmer Spatio-temporal topography of saccadic overestimation of time Journal Article In: Vision Research, vol. 83, pp. 56–65, 2013. @article{Knoell2013, Rapid eye movements (saccades) induce visual misperceptions. A number of studies in recent years have investigated the spatio-temporal profiles of effects like saccadic suppression or perisaccadic mislocalization and revealed substantial functional similarities. Saccade induced chronostasis describes the subjective overestimation of stimulus duration when the stimulus onset falls within a saccade. In this study we aimed to functionally characterize saccade induced chronostasis in greater detail. Specifically we tested if chronostasis is influenced by or functionally related to saccadic suppression. In a first set of experiments, we measured the perceived duration of visual stimuli presented at different spatial positions as a function of presentation time relative to the saccade. We further compared perceived duration during saccades for isoluminant and luminant stimuli. Finally, we investigated whether or not saccade induced chronostasis is dependent on the execution of a saccade itself. We show that chronostasis occurs across the visual field with a clear spatio-temporal tuning. Furthermore, we report chronostasis during simulated saccades, indicating that spurious retinal motion induced by the saccade is a prime origin of the phenomenon. |
Sungryong Koh; Nakyeong Yoon; Si On Yoon; Alexander Pollatsek Word frequency and root-morpheme frequency effects on processing of Korean particle-suffixed words Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 1, pp. 64–72, 2013. @article{Koh2013, Two experiments investigated the roles of the frequency of the root morpheme and the frequency of the whole word for a particular type of suffixed word in Korean in which the suffixed word can be thought of as a phrase (e.g., grandson-with). In both experiments, sentence frames were constructed so that they could have one of two target words that varied on frequency characteristics in the same location in the sentence. In Experiment 1, the frequency of the root morpheme was varied with the frequency of the word controlled, and in Experiment 2, the frequency of the word was varied with the frequency of the root morpheme controlled. Word frequency had a significant effect on fixation times, whereas root morpheme frequency did not. The results were surprising as native Korean speakers view the root morpheme as the "word" (analogous to how English readers would view a noun followed by a prepositional phrase). |
Oleg V. Komogortsev; Corey D. Holland; Sampath Jayarathna; Alex Karpov 2D linear oculomotor plant mathematical model: Verification and biometric applications Journal Article In: ACM Transactions on Applied Perception, vol. 10, no. 4, pp. 1–18, 2013. @article{Komogortsev2013a, This article assesses the ability of a two-dimensional (2D) linear homeomorphic oculomotor plant mathematical model to simulate normal human saccades on a 2D plane. The proposed model is driven by a simplified pulse-step neuronal control signal and makes use of linear simplifications to account for the unique characteristics of the eye globe and the extraocular muscles responsible for horizontal and vertical eye movement. The linear nature of the model sacrifices some anatomical accuracy for computational speed and analytic tractability, and may be implemented as two one-dimensional models for parallel signal simulation. Practical applications of the model might include improved noise reduction and signal recovery facilities for eye tracking systems, additional metrics from which to determine user effort during usability testing, and enhanced security in biometric identification systems. The results indicate that the model is capable of producing oblique saccades with properties resembling those of normal human saccades and is capable of deriving muscle constants that are viable as biometric indicators. Therefore, we conclude that the sacrifice in the anatomical accuracy of the model produces negligible effects on the accuracy of saccadic simulation on a 2D plane and may provide a usable model for applications in computer science, human-computer interaction, and related fields. |
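To illustrate the two ideas named in the abstract above, pulse-step innervation and decomposition into two parallel 1D channels, here is a deliberately simplified first-order sketch. It is not the paper's homeomorphic plant (which models the eye globe and extraocular muscle dynamics), and all parameter values are hypothetical:

```python
def pulse_step(t, target, pulse_gain=5.0, pulse_ms=20.0):
    """Simplified pulse-step innervation signal: a brief strong pulse
    accelerates the eye, then a step holds it at the target position.
    Gain and pulse duration are illustrative, not fitted values."""
    return pulse_gain * target if t < pulse_ms else target

def simulate_1d(target, tau=150.0, dt=1.0, duration=300.0):
    """Euler-integrate a first-order plant d(theta)/dt = (u - theta) / tau,
    a stand-in for the higher-order linear plant in the paper.
    Returns the position trace in degrees, one sample per `dt` ms."""
    theta, trace, t = 0.0, [], 0.0
    while t < duration:
        u = pulse_step(t, target)
        theta += dt * (u - theta) / tau
        trace.append(theta)
        t += dt
    return trace

def simulate_2d(target_h, target_v):
    """Oblique saccade as two independent 1D simulations run in parallel,
    mirroring the model's horizontal/vertical decomposition."""
    return simulate_1d(target_h), simulate_1d(target_v)
```

Because the two channels share no state, an oblique saccade is just the component-wise combination of a horizontal and a vertical trajectory, which is the computational appeal of the linear decomposition.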
Oleg V. Komogortsev; Alex Karpov Automated classification and scoring of smooth pursuit eye movements in the presence of fixations and saccades Journal Article In: Behavior Research, vol. 45, pp. 203–215, 2013. @article{Komogortsev2013, Ternary eye movement classification, which separates fixations, saccades, and smooth pursuit from the raw eye positional data, is extremely challenging. This article develops new and modifies existing eye-tracking algorithms for the purpose of conducting meaningful ternary classification. To this end, a set of qualitative and quantitative behavior scores is introduced to facilitate the assessment of classification performance and to provide means for automated threshold selection. Experimental evaluation of the proposed methods is conducted using eye movement records obtained from 11 subjects at 1000 Hz in response to a step-ramp stimulus eliciting fixations, saccades, and smooth pursuits. Results indicate that a simple hybrid method that incorporates velocity and dispersion thresholding allows producing robust classification performance. It is concluded that behavior scores are able to aid automated threshold selection for the algorithms capable of successful classification. |
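The hybrid velocity-and-dispersion idea in the abstract above can be sketched in a few lines: fast samples are saccades; slow samples are split into fixation versus pursuit by whether gaze stays spatially compact over a short window. This is a minimal illustration in the spirit of that method, with illustrative thresholds and window size, not the paper's algorithm or values:

```python
import math

def velocities(samples, hz=1000.0):
    """Point-to-point angular speed (deg/s) from (x, y) gaze samples in deg."""
    return [math.hypot(x1 - x0, y1 - y0) * hz
            for (x0, y0), (x1, y1) in zip(samples, samples[1:])]

def dispersion(window):
    """I-DT-style dispersion: (max - min in x) + (max - min in y)."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def classify(samples, hz=1000.0, vel_thresh=70.0, disp_thresh=0.5, win=20):
    """Ternary classification of each inter-sample interval:
    'saccade' if speed exceeds vel_thresh, otherwise 'fixation' when the
    surrounding window is compact, else 'pursuit' (slow but drifting)."""
    labels = []
    for i, v in enumerate(velocities(samples, hz)):
        if v > vel_thresh:
            labels.append('saccade')
        else:
            lo, hi = max(0, i - win // 2), min(len(samples), i + win // 2 + 1)
            labels.append('fixation' if dispersion(samples[lo:hi]) < disp_thresh
                          else 'pursuit')
    return labels
```

A real implementation would add the paper's behavior scores to tune `vel_thresh` and `disp_thresh` automatically; fixed thresholds like these are exactly what such scores are meant to replace.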
Arnout W. Koornneef; Ted J. M. Sanders Establishing coherence relations in discourse: The influence of implicit causality and connectives on pronoun resolution Journal Article In: Language and Cognitive Processes, vol. 28, no. 8, pp. 1169–1206, 2013. @article{Koornneef2013, Many studies have shown that readers and listeners recruit verb-based implicit causality information rapidly in the service of pronoun resolution. However, since most of these studies focused on constructions in which because connected the two critical clauses, it is unclear to what extent implicit causality information affects the processing of pronouns embedded in other types of coherence relations. In an eye-tracking and completion study we addressed this void by varying whether because, but, and and joined a primary clause containing the implicit causality verb, with a secondary clause containing a critical gender-marked pronoun. The results showed that the claims made for implicit causality hold if the connective because is present (i.e., a reading time delay following a pronoun that is inconsistent with the implicit causality bias of the verb), but do not generalise to other connectives like but and and. This shows that the strength and persistence of implicit causality as a pronoun resolution cue depends on the coherence relation in which the verb, the antecedent and the pronoun appear. |
Roger P. Levy; Frank Keller Expectation and locality effects in German verb-final structures Journal Article In: Journal of Memory and Language, vol. 68, no. 2, pp. 199–222, 2013. @article{Levy2013a, Probabilistic expectations and memory limitations are central factors governing the real-time comprehension of natural language, but how the two factors interact remains poorly understood. One respect in which the two factors have come into theoretical conflict is the documentation of both locality effects, in which having more dependents preceding a governing verb increases processing difficulty at the verb, and anti-locality effects, in which having more preceding dependents facilitates processing at the verb. However, no controlled study has previously demonstrated both locality and anti-locality effects in the same type of dependency relation within the same language. Additionally, many previous demonstrations of anti-locality effects have been potentially confounded with lexical identity, plausibility, and sentence position. Here, we provide new evidence of both locality and anti-locality effects in the same type of dependency relation in a single language-verb-final constructions in German-while controlling for lexical identity, plausibility, and sentence position. In main clauses, we find clear anti-locality effects, with the presence of a preceding dative argument facilitating processing at the final verb; in subject-extracted relative clauses with identical linear ordering of verbal dependents, we find both anti-locality and locality effects, with processing facilitated when the verb is preceded by a dative argument alone, but hindered when the verb is preceded by both the dative argument and an adjunct. These results indicate that both expectations and memory limitations need to be accounted for in any complete theory of online syntactic comprehension. |
Richard L. Lewis; Michael Shvartsman; Satinder Singh The adaptive nature of eye movements in linguistic tasks: How payoff and architecture shape speed-accuracy trade-offs Journal Article In: Topics in Cognitive Science, vol. 5, no. 3, pp. 581–610, 2013. @article{Lewis2013, We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to—and found to accord with—eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. 
These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. |
Louise Ann Leyland; Julie A. Kirkby; Barbara J. Juhasz; Alexander Pollatsek; Simon P. Liversedge The influence of word shading and word length on eye movements during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 471–486, 2013. @article{Leyland2013, An interesting issue in reading is how parafoveal information affects saccadic targeting and fixation durations. We investigated the influence of shading selected regions of text on eye movements during reading of long and short words within sentences. A target word, either four- or eight-letters long, was presented in one of four shading conditions: the whole target word shaded; the first half shaded; second half shaded; no shading. There was no evidence of a visually mediated parafoveal-on-foveal effect. Saccadic targeting was modulated by the shading on the first half of the word, such that fixations landed closer to the beginning of the word than in the other three shading conditions. Furthermore, partial word shading, resulting in visual non-uniformity of the target word, produced longer gaze durations than the other conditions. Finally, readers spent more time re-reading target words when they were partially shaded than in the other two conditions. We suggest that our effects are due to targeting of the optimal viewing location and revisits to check words that appear visually unusual. Together, the results indicate robust effects of low-level visual characteristics of the word on oculomotor decisions of where and when to move the eyes during reading. |
Xingshan Li; Junjuan Gu; Pingping Liu; Keith Rayner The advantage of word-based processing in Chinese reading: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 3, pp. 879–889, 2013. @article{Li2013, In 2 experiments, we tested the prediction that reading is more efficient when characters belonging to a word are presented simultaneously than when they are not in Chinese reading, using a novel variation of the moving window paradigm (McConkie & Rayner, 1975). In Experiment 1, we found that reading was slowed down when Chinese readers could not see characters belonging to a word simultaneously compared to when they could do so. In Experiment 2, when Chinese readers could choose whether the 2 characters in the moving window contained a word or 2 characters that did not constitute a word, they had a clear tendency to look at 2 characters belonging to a word simultaneously. The results of the current study provide strong evidence that character processing is affected by word knowledge and the processing of other characters belonging to the same word in Chinese reading, and add to a growing body of evidence demonstrating that words do have psychological reality for Chinese readers. The results also suggest that the eye movement control strategy of Chinese readers is rather flexible in that it can be adjusted online to modify the characteristics of the window. |
Shane Lindsay; Christoph Scheepers; Yuki Kamide To dash or to dawdle: Verb-associated speed of motion influences eye movements during spoken sentence comprehension Journal Article In: PLoS ONE, vol. 8, no. 6, pp. e67187, 2013. @article{Lindsay2013, In describing motion events verbs of manner provide information about the speed of agents or objects in those events. We used eye tracking to investigate how inferences about this verb-associated speed of motion would influence the time course of attention to a visual scene that matched an event described in language. Eye movements were recorded as participants heard spoken sentences with verbs that implied a fast ("dash") or slow ("dawdle") movement of an agent towards a goal. These sentences were heard whilst participants concurrently looked at scenes depicting the agent and a path which led to the goal object. Our results indicate a mapping of events onto the visual scene consistent with participants mentally simulating the movement of the agent along the path towards the goal: when the verb implies a slow manner of motion, participants look more often and longer along the path to the goal; when the verb implies a fast manner of motion, participants tend to look earlier at the goal and less on the path. These results reveal that event comprehension in the presence of a visual world involves establishing and dynamically updating the locations of entities in response to linguistic descriptions of events. |
Pingping Liu; Xingshan Li Optimal viewing position effects in the processing of isolated Chinese words Journal Article In: Vision Research, vol. 81, pp. 45–57, 2013. @article{Liu2013a, Previous studies have found that words are identified most quickly when the eyes fixate near the word center (the Optimal Viewing Position, OVP) in alphabetic languages. Two experiments were performed to determine the presence of OVP effects during the processing of isolated Chinese words. Participants' eye movements were recorded while they performed a lexical decision task. The results suggest that Chinese readers exhibit OVP effects and that the OVP tends to be the first character for 2-character words. For 3- and 4-character words, the OVP effects appear as a U-shaped curve with a minimum towards the second character. As fixations deviate from the OVP, word processing times increase at a rate of 30–70 ms per character, and fixation duration is strongly influenced by the initial viewing position. Moreover, the present study did not observe an I-OVP effect for first fixation durations or a fixation-duration trade-off in two-fixation cases in isolated Chinese word processing. |
Taosheng Liu; Youyang Hou A hierarchy of attentional priority signals in human frontoparietal cortex Journal Article In: Journal of Neuroscience, vol. 33, no. 42, pp. 16606–16616, 2013. @article{Liu2013, Humans can voluntarily attend to a variety of visual attributes to serve behavioral goals. Voluntary attention is believed to be controlled by a network of dorsal frontoparietal areas. However, it is unknown how neural signals representing behavioral relevance (attentional priority) for different attributes are organized in this network. Computational studies have suggested that a hierarchical organization reflecting the similarity structure of the task demands provides an efficient and flexible neural representation. Here we examined the structure of attentional priority using functional magnetic resonance imaging. Participants were cued to attend to location, color, or motion direction within the same stimulus. We found a hierarchical structure emerging in frontoparietal areas, such that multivoxel patterns for attending to spatial locations were most distinct from those for attending to features, and the latter were further clustered into different dimensions (color vs motion). These results provide novel evidence for the organization of the attentional control signals at the level of distributed neural activity. The hierarchical organization provides a computationally efficient scheme to support flexible top-down control. |
Cai S. Longman; Aureliu Lavric; Stephen Monsell More attention to attention? An eye-tracking investigation of selection of perceptual attributes during a task switch Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 4, pp. 1142–1151, 2013. @article{Longman2013, Switching tasks prolongs response times, an effect reduced but not eliminated by active preparation. To explore the role of attentional selection of the relevant stimulus attribute in these task-switch costs, we measured eye fixations in participants cued to identify either a face or a letter displayed on its forehead. With only 200 ms between cue and stimulus onsets, the eyes fixated the currently relevant region of the stimulus less and the irrelevant region more on switch than on repeat trials, at stimulus onset and for 500 ms thereafter, in a pattern suggestive of delayed orientation of attention to the relevant region on switch trials. With 800 ms to prepare, both switch costs and inappropriate fixations were reduced, but on switch trials participants still tended (relative to repeat trials) to fixate the now-irrelevant region more at stimulus onset and to maintain fixation on, or refixate, the irrelevant region more during the next 500 ms. The size of this attentional persistence was associated with differences in performance costs between and within participants. We suggest that reorientation of attention is an important, albeit somewhat neglected and controversial, component of advance task-set reconfiguration and that the task-set inertia (or reactivation) to which many attribute the residual task-switch cost seen after preparation includes inertia in (or reactivation of) attentional parameters. |
Maciej Kosilo; Sophie M. Wuerger; Matt Craddock; Ben J. Jennings; Amelia R. Hunt; Jasna Martinovic Low-level and high-level modulations of fixational saccades and high frequency oscillatory brain activity in a visual object classification task Journal Article In: Frontiers in Psychology, vol. 4, pp. 948, 2013. @article{Kosilo2013, Until recently induced gamma-band activity (GBA) was considered a neural marker of cortical object representation. However, induced GBA in the electroencephalogram (EEG) is susceptible to artifacts caused by miniature fixational saccades. Recent studies have demonstrated that fixational saccades also reflect high-level representational processes. Do high-level as opposed to low-level factors influence fixational saccades? What is the effect of these factors on artifact-free GBA? To investigate this, we conducted separate eye tracking and EEG experiments using identical designs. Participants classified line drawings as objects or non-objects. To introduce low-level differences, contours were defined along different directions in cardinal color space: S-cone-isolating, intermediate isoluminant, or a full-color stimulus, the latter containing an additional achromatic component. Prior to the classification task, object discrimination thresholds were measured and stimuli were scaled to matching suprathreshold levels for each participant. In both experiments, behavioral performance was best for full-color stimuli and worst for S-cone isolating stimuli. Saccade rates 200-700 ms after stimulus onset were modulated independently by low and high-level factors, being higher for full-color stimuli than for S-cone isolating stimuli and higher for objects. Low-amplitude evoked GBA and total GBA were observed in very few conditions, showing that paradigms with isoluminant stimuli may not be ideal for eliciting such responses. 
We conclude that cortical loops involved in the processing of objects are preferentially excited by stimuli that contain achromatic information. Their activation can lead to relatively early exploratory eye movements even for foveally-presented stimuli. |
Anastasia Kourkoulou; Gustav Kuhn; John M. Findlay; Susan R. Leekam Eye movement difficulties in autism spectrum disorder: Implications for implicit contextual learning Journal Article In: Autism Research, vol. 6, no. 3, pp. 177–189, 2013. @article{Kourkoulou2013, It is widely accepted that we use contextual information to guide our gaze when searching for an object. People with autism spectrum disorder (ASD) also utilise contextual information in this way; yet, their visual search in tasks of this kind is much slower compared with people without ASD. The aim of the current study was to explore the reason for this by measuring eye movements. Eye movement analyses revealed that the slowing of visual search was not caused by making a greater number of fixations. Instead, participants in the ASD group were slower to launch their first saccade, and the duration of their fixations was longer. These results indicate that slowed search in ASD in contextual learning tasks is not due to differences in the spatial allocation of attention but due to temporal delays in the initial, reflexive orienting of attention and subsequent, focused attention. These results have broader implications for understanding the unusual attention profile of individuals with ASD and how their attention may be shaped by learning. |
Hamutal Kreiner; Simon Garrod; Patrick Sturt Number agreement in sentence comprehension: The relationship between grammatical and conceptual factors Journal Article In: Language and Cognitive Processes, vol. 28, no. 6, pp. 829–874, 2013. @article{Kreiner2013, Studies in theoretical linguistics argue that subject-verb agreement is more sensitive to grammatical number, while pronoun-antecedent agreement is more sensitive to conceptual number. This claim is robustly supported by speech production research, but few studies have examined this issue in comprehension. We investigated this dissociation between conceptual and grammatical number agreement in three eye-tracking reading experiments, using collective nouns like "group", which can be notionally interpreted as either singular or plural. Experiment 1 indicated that pronoun-antecedent agreement is conceptually driven; Experiment 2 indicated that subject-verb agreement is morpho-syntactically driven. Experiment 3 indicated that the morpho-grammatical processes that control the initial processing of subject-verb agreement do not bias later semantic processing of pronoun-antecedent number agreement, even when the anaphor and the verb occur in the same sentence, and the same collective noun is both the subject of the verb and antecedent of the pronoun. In view of these findings we propose that the processes that control number agreement in comprehension show a dissociation between semantic and morpho-syntactic processing that is similar to the dissociation demonstrated in speech production. We discuss various theoretical frameworks that can account for this similarity. |
Mariska E. Kret; Karin Roelofs; Jeroen J. Stekelenburg; Beatrice Gelder Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 810, 2013. @article{Kret2013, We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to face-body-scene combinations. Participants freely viewed emotionally congruent and incongruent face-body and body-scene pairs whilst eye fixations, pupil-size, and electromyography (EMG) responses were recorded. Participants attended more to angry and fearful vs. happy or neutral cues, independent of the source and relatively independent from whether the face-body and body-scene combinations were emotionally congruent or not. Moreover, angry faces combined with angry bodies and angry bodies viewed in aggressive social scenes elicited greatest pupil dilation. Participants' face expressions matched the valence of the stimuli but when face-body compounds were shown, the observed facial expression influenced EMG responses more than the posture. Together, our results show that the perception of emotional signals from faces, bodies and scenes depends on the natural context, but when threatening cues are presented, these threats attract attention, induce arousal, and evoke congruent facial reactions. |
Mariska E. Kret; Jeroen J. Stekelenburg; Karin Roelofs; Beatrice Gelder Perception of face and body expressions using electromyography, pupillometry and gaze measures Journal Article In: Frontiers in Psychology, vol. 4, pp. 28, 2013. @article{Kret2013a, Traditional emotion theories stress the importance of the face in the expression of emotions but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment and their facial reactions measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and importantly, vice versa as well. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus) to happy facial expressions. What we predicted and found was that angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that the congruency between the emotional signals from the face and body facilitates the recognition of the emotion. |
Franziska Kretzschmar; Dominique Pleimling; Jana Hosemann; Stephan Füssel; Ina Bornkessel-Schlesewsky; Matthias Schlesewsky Subjective impressions do not mirror online reading effort: Concurrent EEG-eyetracking evidence from the reading of books and digital media Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e56178, 2013. @article{Kretzschmar2013, In the rapidly changing circumstances of our increasingly digital world, reading is also becoming an increasingly digital experience: electronic books (e-books) are now outselling print books in the United States and the United Kingdom. Nevertheless, many readers still view e-books as less readable than print books. The present study thus used combined EEG and eyetracking measures in order to test whether reading from digital media requires higher cognitive effort than reading conventional books. Young and elderly adults read short texts on three different reading devices: a paper page, an e-reader, and a tablet computer, and answered comprehension questions about them while their eye movements and EEG were recorded. The results of a debriefing questionnaire replicated previous findings in that participants overwhelmingly chose the paper page over the two electronic devices as their preferred reading medium. Online measures, by contrast, showed shorter mean fixation durations and lower EEG theta band voltage density, known to covary with memory encoding and retrieval, for the older adults when reading from a tablet computer in comparison to the other two devices. Young adults showed comparable fixation durations and theta activity for all three devices. Comprehension accuracy did not differ across the three media for either group. We argue that these results can be explained in terms of the better text discriminability (higher contrast) produced by the backlit display of the tablet computer. 
Contrast sensitivity decreases with age and degraded contrast conditions lead to longer reading times, thus supporting the conclusion that older readers may benefit particularly from the enhanced contrast of the tablet. Our findings thus indicate that people's subjective evaluation of digital reading media must be dissociated from the cognitive and neural effort expended in online information processing while reading from such devices. |
Hannah M. Krüger; Amelia R. Hunt Inhibition of return across eye and object movements: The role of prediction Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 3, pp. 735–744, 2013. @article{Krueger2013, Responses are slower to targets appearing in recently inspected locations, an effect known as Inhibition of Return (IOR). IOR is typically viewed as the consequence of an involuntary mechanism that prevents reinspection of previously visited locations and thereby biases attention toward novel locations during visual search. For an inhibitory tagging mechanism to serve this function effectively, it should be robust against eye movements and the movements of objects in the environment. We investigated whether the predictability of motion supports the coding of inhibitory tags in spatiotopic coordinates across eye movements and object-based coordinates across object motion. IOR was observed in both retinotopic and spatiotopic coordinates across eye movements, regardless of the predictability of the eye movement direction. In a matching experiment, but with predictable or unpredictable object motion instead of eye movements, IOR was reduced in magnitude by object motion and was not observed in object-based coordinates, even when the motion was predictable. Together the results suggest inhibitory tags can track objects as they move across the retina, but only when this motion is self-generated. We conclude that efference copy, not prediction, plays a key role in maintaining inhibition on previously attended objects across saccades. |
Ada Le; Matthias Niemeier Left visual field preference for a bimanual grasping task with ecologically valid object sizes Journal Article In: Experimental Brain Research, vol. 230, pp. 187–196, 2013. @article{Le2013a, Grasping using two forelimbs in opposition to one another is evolutionary older than the hand with an opposable thumb (Whishaw and Coles in Behav Brain Res 77:135–148, 1996); yet, the mechanisms for bimanual grasps remain unclear. Similar to unimanual grasping, the localization of matching stable grasp points on an object is computationally expensive and so it makes sense for the signals to converge in a single cortical hemisphere. Indeed, bimanual grasps are faster and more accurate in the left visual field, and are disrupted if there is transcranial stimulation of the right hemisphere (Le and Niemeier in Exp Brain Res 224:263–273, 2013; Le et al. in Cereb Cortex. doi:10.1093/cercor/bht115, 2013). However, research so far has tested the right hemisphere dominance based on small objects only, which are usually grasped with one hand, whereas bimanual grasping is more commonly used for objects that are too big for a single hand. Because grasping large objects might involve different neural circuits than grasping small objects (Grol et al. in J Neurosci 27:11877–11887, 2007), here we tested whether a left visual field/right hemisphere dominance for bimanual grasping exists with large and thus more ecologically valid objects or whether the right hemisphere dominance is a function of object size. We asked participants to fixate to the left or right of an object and to grasp the object with the index and middle fingers of both hands. Consistent with previous observations, we found that for objects in the left visual field, the maximum grip apertures were scaled closer to the object width and were smaller and less variable, than for objects in the right visual field. 
Our results demonstrate that bimanual grasping is predominantly controlled by the right hemisphere, even in the context of grasping larger objects. |
Ada Le; Matthias Niemeier A right hemisphere dominance for bimanual grasps Journal Article In: Experimental Brain Research, vol. 224, no. 2, pp. 263–273, 2013. @article{Le2013, To find points on the surface of an object that ensure a stable grasp, it would be most effective to employ one area in one cortical hemisphere. But grasping the object with both hands requires control through both hemispheres. To better understand the control mechanisms underlying this "bimanual grasping", here we examined how the two hemispheres coordinate their control processes for bimanual grasping depending on visual field. We asked if bimanual grasping involves both visual fields equally or one more than the other. To test this, participants fixated either to the left or right of an object and then grasped or pushed it off a pedestal. We found that when participants grasped the object in the right visual field, maximum grip aperture (MGA) was larger and more variable, and participants were slower to react and to show MGA compared to when they grasped the object in the left visual field. In contrast, when participants pushed the object we observed no comparable visual field effects. These results suggest that grasping with both hands, specifically the computation of grasp points on the object, predominantly involves the right hemisphere. Our study provides new insights into the interactions of the two hemispheres for grasping. |
Matthew L. Leavitt; Florian Pieper; Adam J. Sachs; Ridha Joober; Julio C. Martinez-Trujillo In: PLoS ONE, vol. 8, no. 4, pp. e61503, 2013. @article{Leavitt2013, Neurons within the primate dorsolateral prefrontal cortex (dlPFC) are clustered in microcolumns according to their visuospatial tuning. One issue that remains poorly investigated is how this anatomical arrangement influences functional interactions between neurons during behavior. To investigate this question we implanted 4 mm×4 mm multielectrode arrays in two macaques' dlPFC area 8a and measured spike count correlations (rsc) between responses of simultaneously recorded neurons when animals maintained stationary gaze. Positive and negative rsc were significantly higher than predicted by chance across a wide range of inter-neuron distances (from 0.4 to 4 mm). Positive rsc were stronger between neurons with receptive fields (RFs) separated by ≤90° of angular distance and progressively decreased as a function of inter-neuron physical distance. Negative rsc were stronger between neurons with RFs separated by >90° and increased as a function of inter-neuron distance. Our results show that short- and long-range functional interactions between dlPFC neurons depend on the physical distance between them and the relationship between their visuospatial tuning preferences. Neurons with similar visuospatial tuning show positive rsc that decay with inter-neuron distance, suggestive of excitatory interactions within and between adjacent microcolumns. Neurons with dissimilar tuning from spatially segregated microcolumns show negative rsc that increase with inter-neuron distance, suggestive of inhibitory interactions. This pattern of results shows that functional interactions between prefrontal neurons closely follow the pattern of connectivity reported in anatomical studies. 
Such interactions may be important for the role of the prefrontal cortex in the allocation of attention to targets in the presence of competing distracters. |
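For readers unfamiliar with the measure named in the Leavitt et al. entry above: spike count correlation (rsc) is simply the Pearson correlation of trial-by-trial spike counts between two simultaneously recorded neurons. A minimal sketch of the computation, using hypothetical spike counts (the data and function names here are illustrative, not taken from the paper):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical trial-by-trial spike counts for two simultaneously
# recorded neurons (one count per trial, same trials for both).
neuron_a = [12, 15, 9, 14, 11, 16, 10, 13]
neuron_b = [30, 34, 25, 33, 28, 36, 26, 31]

# Positive r_sc indicates the two neurons' trial-to-trial rate
# fluctuations covary, as for similarly tuned dlPFC neurons above.
r_sc = pearson(neuron_a, neuron_b)
```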
Adrian K. C. Lee; Siddharth Rajaram; Jing Xia; Hari Bharadwaj; Eric D. Larson; Matti S. Hämäläinen; Barbara G. Shinn-Cunningham Auditory selective attention reveals preparatory activity in different cortical regions for selection based on source location and source pitch Journal Article In: Frontiers in Neuroscience, vol. 6, pp. 190, 2013. @article{Lee2013b, In order to extract information in a rich environment, we focus on different features that allow us to direct attention to whatever source is of interest. The cortical network deployed during spatial attention, especially in vision, is well characterized. For example, visuospatial attention engages a frontoparietal network including the frontal eye fields (FEFs), which modulate activity in visual sensory areas to enhance the representation of an attended visual object. However, relatively little is known about the neural circuitry controlling attention directed to non-spatial features, or to auditory objects or features (either spatial or non-spatial). Here, using combined magnetoencephalography (MEG) and anatomical information obtained from MRI, we contrasted cortical activity when observers attended to different auditory features given the same acoustic mixture of two simultaneous spoken digits. Leveraging the fine temporal resolution of MEG, we establish that activity in left FEF is enhanced both prior to and throughout the auditory stimulus when listeners direct auditory attention to target location compared to when they focus on target pitch. In contrast, activity in the left posterior superior temporal sulcus (STS), a region previously associated with auditory pitch categorization, is greater when listeners direct attention to target pitch rather than target location. This differential enhancement is only significant after observers are instructed which cue to attend, but before the acoustic stimuli begin. We therefore argue that left FEF participates more strongly in directing auditory spatial attention, while the left STS aids auditory object selection based on the non-spatial acoustic feature of pitch. |
Chia-lin Lee; Erica L. Middleton; Daniel Mirman; Solène Kalénine; Laurel J. Buxbaum Incidental and context-responsive activation of structure- and function-based action features during object identification Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 1, pp. 257–270, 2013. @article{Lee2013, Previous studies suggest that action representations are activated during object processing, even when task-irrelevant. In addition, there is evidence that lexical-semantic context may affect such activation during object processing. Finally, prior work from our laboratory and others indicates that function-based (“use”) and structure-based (“move”) action subtypes may differ in their activation characteristics. Most studies assessing such effects, however, have required manual object-relevant motor responses, thereby plausibly influencing the activation of action representations. The present work uses eyetracking and a Visual World Paradigm task without object-relevant actions to assess the time course of activation of action representations, as well as their responsiveness to lexical-semantic context. In two experiments, participants heard a target word and selected its referent from an array of four objects. Gaze fixations on nontarget objects signal activation of features shared between targets and nontargets. The experiments assessed activation of structure-based (Experiment 1) or function-based (Experiment 2) distractors, using neutral sentences (“S/he saw the... .”) or sentences with a relevant action verb (Experiment 1: “S/he picked up the... .”; Experiment 2: “S/he used the... .”). We observed task-irrelevant activations of action information in both experiments. In neutral contexts, structure-based activation was relatively faster-rising but more transient than function-based activation. Additionally, action verb contexts reliably modified patterns of activation in both experiments. These data provide fine-grained information about the dynamics of activation of function-based and structure-based actions in neutral and action-relevant contexts, in support of the “Two Action System” model of object and action processing (e.g., Buxbaum & Kalénine, 2010). |