All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up to 2024 (with some early 2025s) are listed below by year. You can search the publication library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2013 |
Ralph Radach; Albrecht W. Inhoff; Lisa Glover; Christian Vorstius Contextual constraint and N+2 preview effects in reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 619–633, 2013. @article{Radach2013a, Extracting linguistic information from locations beyond the currently fixated word is a core component of skilled reading. Recent debate on this topic is focused on the question of whether useful linguistic information can be extracted from more than one (parafoveally visible) word to the right of a fixated word (N). The current study examined this issue through the use of parafoveal previews with a short and high-frequency next (N + 1) word, as this should increase the opportunity for the extraction of useful information from the subsequent (N + 2) word. Pairs of N + 2 words were selected so that contextual constraint was either high or low. Using saccade-contingent display manipulations, the preview of an N + 2 target word during word N viewing consisted of either a visually dissimilar nonword or a word. The results revealed a substantial drop in fixation probability for word N + 1 when the N + 2 preview was masked with a nonword. Furthermore, the masking of word N + 2 influenced its viewing duration even when word N + 1 was fixated prior to word N + 2 viewing. These results provide compelling evidence for the view that linguistic processing can encompass more than one word at a time. |
Pavan Ramkumar; Mainak Jas; Sebastian Pannasch; Riitta Hari; Lauri Parkkonen Feature-specific information processing precedes concerted activation in human visual cortex Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7691–7699, 2013. @article{Ramkumar2013, Current knowledge about the precise timing of visual input to the cortex relies largely on spike timings in monkeys and evoked-response latencies in humans. However, quantifying the activation onset does not unambiguously describe the timing of stimulus-feature-specific information processing. Here, we investigated the information content of the early human visual cortical activity by decoding low-level visual features from single-trial magnetoencephalographic (MEG) responses. MEG was measured from nine healthy subjects as they viewed annular sinusoidal gratings (spanning the visual field from 2 to 10° for a duration of 1 s), characterized by spatial frequency (0.33 cycles/degree or 1.33 cycles/degree) and orientation (45° or 135°); gratings were either static or rotated clockwise or anticlockwise from 0 to 180°. Time-resolved classifiers using a 20 ms moving window exceeded chance level at 51 ms (the later edge of the window) for spatial frequency, 65 ms for orientation, and 98 ms for rotation direction. Decoding accuracies of spatial frequency and orientation peaked at 70 and 90 ms, respectively, coinciding with the peaks of the onset evoked responses. Within-subject time-insensitive pattern classifiers decoded spatial frequency and orientation simultaneously (mean accuracy 64%, chance 25%) and rotation direction (mean 82%, chance 50%). Classifiers trained on data from other subjects decoded the spatial frequency (73%), but not the orientation, nor the rotation direction. 
Our results indicate that unaveraged brain responses contain decodable information about low-level visual features already at the time of the earliest cortical evoked responses, and that representations of spatial frequency are highly robust across individuals. |
Sarah J. Rappaport; Glyn W. Humphreys; M. Jane Riddoch The attraction of yellow corn: Reduced attentional constraints on coding learned conjunctive relations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 1016–1031, 2013. @article{Rappaport2013, Physiological evidence indicates that different visual features are computed quasi-independently. The subsequent step of binding features, to generate coherent perception, is typically considered a major rate-limiting process, confined to one location at a time and taking 25 ms per item or longer (A. Treisman & S. Gormican, 1988, Feature analysis in early vision: Evidence from search asymmetries, Psychological Review, Vol. 95, pp. 15-48). We examined whether these processing limitations remain once bindings are learned for familiar objects. Participants searched for objects that could appear either in familiar or unfamiliar colors. Objects in familiar colors were detected efficiently at rates consistent with simultaneous binding across multiple stimuli. Processing limitations were evident for objects in unfamiliar colors. The advantage for the learned color for known targets was eliminated when participants searched for geometric shapes carrying the object colors and when the colors fell in local background areas around the shapes. The effect occurred irrespective of whether the nontargets had familiar colors, but was largest when nontargets had incorrect colors. The efficient search for targets in familiar colors held, even when the search was biased to favor objects in unfamiliar colors. The data indicate that learned bindings can be computed with minimal attentional limitations, consistent with the direct activation of learned conjunctive representations in vision. |
Keith Rayner; Bernhard Angele; Elizabeth R. Schotter; Klinton Bicknell On the processing of canonical word order during eye fixations in reading: Do readers process transposed word previews? Journal Article In: Visual Cognition, vol. 21, no. 3, pp. 353–381, 2013. @article{Rayner2013, Whether readers always identify words in the order they are printed is subject to considerable debate. In the present study, we used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the preview for a two-word target region (e.g. white walls in My neighbor painted the white walls black). Readers received an identical (white walls), transposed (walls white), or unrelated preview (vodka clubs). We found that there was a clear cost of having a transposed preview compared to an identical preview, indicating that readers cannot or do not identify words out of order. However, on some measures, the transposed preview condition did lead to faster processing than the unrelated preview condition, suggesting that readers may be able to obtain some useful information from a transposed preview. Implications of the results for models of eye movement control in reading are discussed. |
Clare M. Press; James M. Kilner The time course of eye movements during action observation reflects sequence learning Journal Article In: NeuroReport, vol. 24, no. 14, pp. 822–826, 2013. @article{Press2013, When we observe object-directed actions such as grasping, we make predictive eye movements. However, eye movements are reactive when observing similar actions without objects. This reactivity may reflect a lack of attribution of intention to observed actors when they perform actions without 'goals'. Alternatively, it may simply signal that there is no cue present that has been predictive of the subsequent trajectory in the observer's experience. To test this hypothesis, the present study investigated how the time course of eye movements changes as a function of visual experience of predictable, but arbitrary, actions without objects. Participants observed a point-light display of a model performing sequential finger actions in a serial reaction time task. Eye movements became less reactive across blocks. In addition, participants who exhibited more predictive eye movements subsequently demonstrated greater learning when required either to execute, or to recognize, the sequence. No measures were influenced by whether participants had been instructed that the observed movements were human or lever generated. The present data indicate that eye movements when observing actions without objects reflect the extent to which the trajectory can be predicted through experience. The findings are discussed with reference to the implications for the mechanisms supporting perception of actions both with and without objects as well as those mediating inanimate object processing. |
Tim J. Preston; Fei Guo; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Neural representations of contextual guidance in visual search of real-world scenes Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7846–7855, 2013. @article{Preston2013, Exploiting scene context and object–object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. |
Silvia Primativo; Lisa S. Arduino; Maria De Luca; Roberta Daini; Marialuisa Martelli Neglect dyslexia: A matter of "good looking" Journal Article In: Neuropsychologia, vol. 51, no. 11, pp. 2109–2119, 2013. @article{Primativo2013, Brain-damaged patients with right-sided unilateral spatial neglect (USN) often make left-sided errors in reading single words or pseudowords (neglect dyslexia, ND). We propose that both left neglect and low fixation accuracy account for reading errors in neglect dyslexia. Eye movements were recorded in USN patients with (ND+) and without (ND-) neglect dyslexia and in a matched control group of right brain-damaged patients without neglect (USN-). Unlike ND- and controls, ND+ patients showed left lateralized omission errors and a distorted eye movement pattern in both a reading aloud task and a non-verbal saccadic task. During reading, the total number of fixations was larger in these patients independent of visual hemispace, and most fixations were inaccurate. Similarly, in the saccadic task only ND+ patients were unable to reach the moving dot. A third experiment addressed the nature of the left lateralization in reading error distribution by simulating neglect dyslexia in ND- patients. ND- and USN- patients had to perform a speeded reading-at-threshold task that did not allow for eye movements. When stimulus exploration was prevented, ND- patients, but not controls, produced a pattern of errors similar to that of ND+ with unlimited exposure time (e.g., left-sided errors). We conclude that neglect dyslexia reading errors may arise in USN patients as a consequence of an additional and independent deficit unrelated to the orthographic material. In particular, the presence of an altered oculo-motor pattern, preventing the automatic execution of the fine saccadic eye movements involved in reading, uncovers, in USN patients, the attentional bias also in reading single centrally presented words. |
Steven L. Prime; Jonathan J. Marotta Gaze strategies during visually-guided versus memory-guided grasping Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 291–305, 2013. @article{Prime2013, Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action-e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream. |
Gabriella Óturai; Thorsten Kolling; Monika Knopf Relations between 18-month-olds' gaze pattern and target action performance: A deferred imitation study with eye tracking Journal Article In: Infant Behavior and Development, vol. 36, no. 4, pp. 736–748, 2013. @article{Oturai2013, Deferred imitation studies are used to assess infants' declarative memory performance. These studies have found that deferred imitation performance improves with age, which is usually attributed to advancing memory capabilities. Imitation studies, however, are also used to assess infants' action understanding. In this second research program it has been observed that infants around the age of one year imitate selectively, i.e., they imitate certain kinds of target actions and omit others. In contrast to this, two-year-olds usually imitate the model's exact actions. 18-month-olds imitate more exactly than one-year-olds, but more selectively than two-year-olds, a fact which makes this age group especially interesting, since the processes underlying selective vs. exact imitation are largely debated. The question, for example, of whether selective attention to certain kinds of target actions accounts for preferential imitation of these actions in young infants is still open. Additionally, relations between memory capabilities and selective imitation processes, as well as their role in shaping 18-month-olds' neither completely selective, nor completely exact imitation have not been thoroughly investigated yet. The present study, therefore, assessed 18-month-olds' gaze toward two types of actions (functional vs. arbitrary target actions) and the model's face during target action demonstration, as well as infants' deferred imitation performance. Although infants' fixation times to functional target actions were not longer than to arbitrary target actions, they imitated the functional target actions more frequently than the arbitrary ones. 
This suggests that selective imitation does not rely on selective gaze toward functional target actions during the demonstration phase. In addition, a post hoc analysis of interindividual differences suggested that infants' attention to the model's social-communicative cues might play an important role in exact imitation, meaning the imitation of both functional and arbitrary target actions. |
Weston Pack; Thom Carney; Stanley A. Klein Involuntary attention enhances identification accuracy for unmasked low contrast letters using non-predictive peripheral cues Journal Article In: Vision Research, vol. 89, pp. 79–89, 2013. @article{Pack2013, There is controversy regarding whether or not involuntary attention improves response accuracy at a cued location when the cue is non-predictive and if these cueing effects are dependent on backward masking. Various perceptual and decisional mechanisms of performance enhancement have been proposed, such as signal enhancement, noise reduction, spatial uncertainty reduction, and decisional processes. Herein we review a recent report of mask-dependent accuracy improvements with low contrast stimuli and demonstrate that the experiments contained stimulus artifacts whereby the cue impaired perception of low contrast stimuli, leading to an absence of improved response accuracy with unmasked stimuli. Our experiments corrected these artifacts by implementing an isoluminant cue and increasing its distance relative to the targets. The results demonstrate that cueing effects are robust for unmasked stimuli presented in the periphery, resolving some of the controversy concerning cueing enhancement effects from involuntary attention and mask dependency. Unmasked low contrast and/or short duration stimuli as implemented in these experiments may have a short enough iconic decay that the visual system functions similarly as if a mask were present leading to improved accuracy with a valid cue. |
Daniel S. Pages; Jennifer M. Groh Looking at the ventriloquist: Visual outcome of eye movements calibrates sound localization Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e72562, 2013. @article{Pages2013, A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. 
Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity. |
Maciej Pajak; Antje Nuthmann Object-based saccadic selection during scene perception: Evidence from viewing position effects Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–21, 2013. @article{Pajak2013, The goal of the present study was to further test the hypothesis that objects are important units of saccade targeting and, by inference, attentional selection in real-world scene perception. To this end, we investigated where people fixate within objects embedded in natural scenes. Previously, we reported a preferred viewing location (PVL) close to the center of objects (Nuthmann & Henderson, 2010). Here, we qualify this basic finding by showing that the PVL is affected by object size and the distance between the object and the previous fixation (i.e., launch site distance). Moreover, we examined how within-object fixation position affected subsequent eye-movement behavior on the object. Unexpectedly, there was no refixation optimal viewing position (OVP) effect for objects in scenes. Where viewers initially placed their eyes on an object did not affect the likelihood of refixating that object, suggesting that some refixations on objects in scenes are made for reasons other than insufficient visual information. A fixation-duration inverted-optimal viewing (IOVP) effect was found for large objects: Fixations located at object center were longer than those falling near the edges of an object. Collectively, these findings lend further support to the notion of object-based saccade targeting in scenes. |
Simon Palmer; Uwe Mattler Masked stimuli modulate endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 486–503, 2013. @article{Palmer2013, Unconscious stimuli can influence participants' motor behavior but also more complex mental processes. Recent research has gradually extended the limits of effects of unconscious stimuli. One field of research where such limits have been proposed is spatial cueing, where exogenous automatic shifts of attention have been distinguished from endogenous controlled processes which govern voluntary shifts of attention. Previous evidence suggests unconscious effects on mechanisms of exogenous shifts of attention. Here, we applied a cue-priming paradigm to a spatial cueing task with arbitrary cues by centrally presenting a masked symmetrical prime before every cue stimulus. We found priming effects on response times in target discrimination tasks with the typical dynamic of cue-priming effects (Experiments 1 and 2) indicating that central symmetrical stimuli which have been associated with endogenous orienting can modulate shifts of spatial attention even when they are masked. Prime-Cue Congruency effects of perceptually dissimilar prime and cue stimuli (Experiment 3) suggest that these effects cannot be entirely reduced to perceptual repetition priming of cue processing. In addition, priming effects did not differ between participants with good and poor prime recognition performance consistent with the view that unconscious stimulus features have access to processes of endogenous shifts of attention. |
Simon Palmer; Uwe Mattler On the source and scope of priming effects of masked stimuli on endogenous shifts of spatial attention Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 528–544, 2013. @article{Palmer2013a, Unconscious stimuli can influence participants' motor behavior as well as more complex mental processes. Previous cue-priming experiments demonstrated that masked cues can modulate endogenous shifts of spatial attention as measured by choice reaction time tasks. Here, we applied a signal detection task with masked luminance targets to determine the source and the scope of effects of masked stimuli. Target-detection performance was modulated by prime-cue congruency, indicating that prime-cue congruency modulates signal enhancement at early levels of target processing. These effects, however, were only found when the prime was perceptually similar to the cue, indicating that primes influence early target processing in an indirect way by facilitating cue processing. Together with previous research we conclude that masked stimuli can modulate perceptual and post-central levels of processing. Findings mark a new limit of the effects of unconscious stimuli which seem to have a smaller scope than conscious stimuli. |
Jinger Pan; Ming Yan; Jochen Laubrock; Hua Shu; Reinhold Kliegl Eye-voice span during rapid automatized naming of digits and dice in Chinese normal and dyslexic children Journal Article In: Developmental Science, vol. 16, no. 6, pp. 967–979, 2013. @article{Pan2013, We measured Chinese dyslexic and control children's eye movements during rapid automatized naming (RAN) with alphanumeric (digits) and symbolic (dice surfaces) stimuli. Both types of stimuli required identical oral responses, controlling for effects associated with speech production. Results showed that naming dice was much slower than naming digits for both groups, but group differences in eye-movement measures and in the eye-voice span (i.e. the distance between the currently fixated item and the voiced item) were generally larger in digit-RAN than in dice-RAN. In addition, dyslexics were less efficient in parafoveal processing in these RAN tasks. Since the two RAN tasks required the same phonological output and on the assumption that naming dice is less practiced than naming digits in general, the results suggest that the translation of alphanumeric visual symbols into phonological codes is less efficient in dyslexic children. The dissociation of the print-to-sound conversion and phonological representation suggests that the degree of automaticity in translation from visual symbols to phonological codes in addition to phonological processing per se is also critical to understanding dyslexia. |
Muriel T. N. Panouillères; N. Alahyane; C. Urquizar; Roméo Salemme; Norbert Nighoghossian; B. Gaymard; C. Tilikete; D. Pélisson Effects of structural and functional cerebellar lesions on sensorimotor adaptation of saccades Journal Article In: Experimental Brain Research, vol. 231, no. 1, pp. 1–11, 2013. @article{Panouilleres2013, The cerebellum is critically involved in the adaptation mechanisms that maintain the accuracy of goal-directed acts such as saccadic eye movements. Two categories of saccades, each relying on different adaptation mechanisms, are defined: reactive (externally triggered) saccades and voluntary (internally triggered) saccades. The contribution of the medio-posterior part of the cerebellum to reactive saccades adaptation has been clearly demonstrated, but the evidence that other parts of the cerebellum are also involved is limited. Moreover, the cerebellar substrates of voluntary saccades adaptation have only been marginally investigated. Here, we addressed these two questions by investigating the adaptive capabilities of patients with cerebellar or pre-cerebellar stroke. We recruited three groups of patients presenting focal lesions located, respectively, in the supero-anterior cerebellum, the infero-posterior cerebellum and the lateral medulla (leading to a Wallenberg syndrome including motor dysfunctions similar to those resulting from lesion of the medio-posterior cerebellum). Adaptations of reactive saccades and of voluntary saccades were tested during separate sessions in all patients and in a group of healthy participants. The functional lesion of the medio-posterior cerebellum in Wallenberg syndrome strongly impaired the adaptation of both reactive and voluntary saccades. In contrast, patients with lesion in the supero-anterior part of the cerebellum presented a specific adaptation deficit of voluntary saccades. Finally, patients with an infero-posterior cerebellar lesion showed mild adaptation deficits. 
We conclude that the medio-posterior cerebellum is critical for the adaptation of both saccade categories, whereas the supero-anterior cerebellum is specifically involved in the adaptation of voluntary saccades. |
Muriel T. N. Panouillères; Solène Frismand; Olivier Sillan; Christian Urquizar; Alain Vighetto; Denis Pélisson; Caroline Tilikete Saccades and eye-head coordination in ataxia with oculomotor apraxia type 2 Journal Article In: Cerebellum, vol. 12, no. 4, pp. 557–567, 2013. @article{Panouilleres2013a, Ataxia with oculomotor apraxia type 2 (AOA2) is one of the most frequent autosomal recessive cerebellar ataxias. Oculomotor apraxia refers to horizontal gaze failure due to deficits in voluntary/reactive eye movements. These deficits can manifest as increased latency and/or hypometria of saccades with a staircase pattern and are frequently associated with compensatory head thrust movements. Oculomotor disturbances associated with AOA2 have been poorly studied mainly because the diagnosis of oculomotor apraxia was based on the presence of compensatory head thrusts. The aim of this study was to characterise the nature of horizontal gaze failure in patients with AOA2 and to demonstrate oculomotor apraxia even in the absence of head thrusts. Five patients with AOA2, without head thrusts, were tested in saccadic tasks with the head restrained or free to move and their performance was compared to a group of six healthy participants. The most salient deficit of the patients was saccadic hypometria with a typical staircase pattern. Saccade latency in the patients was longer than controls only for memory-guided saccades. In the head-free condition, head movements were delayed relative to the eye and their amplitude and velocity were strongly reduced compared to controls. Our study emphasises that in AOA2, hypometric saccades with a staircase pattern are a more reliable sign of oculomotor apraxia than head thrust movements. In addition, the variety of eye and head movements' deficits suggests that, although the main neural degeneration in AOA2 affects the cerebellum, this disease affects other structures. |
Muriel T. N. Panouillères; Valérie Gaveau; Camille Socasau; Christian Urquizar; Denis Pélisson Brain processing of visual information during fast eye movements maintains motor performance Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e54641, 2013. @article{Panouilleres2013b, Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution–by shifting a visual target at saccade onset and blanking it at saccade offset–induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation. |
Angelina Paolozza; Rebecca Titman; Donald Brien; Douglas P. Munoz; James N. Reynolds Altered accuracy of saccadic eye movements in children with fetal alcohol spectrum disorder Journal Article In: Alcoholism: Clinical and Experimental Research, vol. 37, no. 9, pp. 1491–1498, 2013. @article{Paolozza2013, Background: Prenatal exposure to alcohol is a major, preventable cause of neurobehavioral dysfunction in children worldwide. The measurement and quantification of saccadic eye movements is a powerful tool for assessing sensory, motor, and cognitive function. The quality of the motor process of an eye movement is known as saccade metrics. Saccade accuracy is 1 component of metrics, which to function optimally requires several cortical brain structures as well as an intact cerebellum and brain-stem. The cerebellum has frequently been reported to be damaged by prenatal alcohol exposure. This study, therefore, tested the hypothesis that children with fetal alcohol spectrum disorder (FASD) will exhibit deficits in the accuracy of saccades.; Methods: A group of children with FASD (n = 27) between the ages of 8 and 16 and typically developing control children (n = 27) matched for age and sex, completed 3 saccadic eye movement tasks of increasing difficulty. Eye movement performance during the tasks was captured using an infrared eye tracker. Saccade metrics (e.g., velocity, amplitude, accuracy) were quantified and compared between the 2 groups for the 3 different tasks.; Results: Children with FASD were more variable in saccade endpoint accuracy, which was reflected by statistically significant increases in the error of the initial saccade endpoint and the frequency of additional, corrective saccades required to achieve final fixation. This increased variability in accuracy was amplified when the cognitive demand of the tasks increased. 
Children with FASD also displayed a statistically significant increase in response inhibition errors.; Conclusions: These data suggest that children with FASD may have deficits in eye movement control and sensory-motor integration including cerebellar circuits, thereby impairing saccade accuracy. |
Alexander Pastukhov; Victoria Vonau; Solveiga Stonkute; Jochen Braun Spatial and temporal attention revealed by microsaccades Journal Article In: Vision Research, vol. 85, pp. 45–57, 2013. @article{Pastukhov2013, We compared the spatial and temporal allocation of attention as revealed by microsaccades. Observers viewed several concurrent "rapid serial visual presentation" (RSVP) streams in the periphery while maintaining fixation. They continually attended to, and discriminated targets in one particular, cued stream. Over and above this continuous allocation, spatial attention transients ("attention shifts") were prompted by changes in the cued stream location and temporal attention transients ("attentional blinks") by successful target discriminations. Note that the RSVP paradigm avoided the preparatory suppression of microsaccades in anticipation of stimulus or task events, which had been prominent in earlier studies. Both stream changes and target discriminations evoked residual modulations of microsaccade rate and direction, which were consistent with the presumed attentional dynamics in each case (i.e., attention shift and attentional blink, respectively). Interestingly, even microsaccades associated with neither stream change nor target discrimination reflected the continuous allocation of attention, inasmuch as their direction was aligned with the meridian of the target stream. We conclude that attentional allocation shapes microsaccadic activity continuously, not merely during dynamic episodes such as attentional shifts or blinks. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Filtered text reveals adult age differences in reading: Evidence from eye movements Journal Article In: Psychology and Aging, vol. 28, no. 2, pp. 352–364, 2013. @article{Paterson2013, Sensitivity to certain spatial frequencies declines with age and this may have profound effects on reading performance. However, the spatial frequency content of text actually used by older adults (aged 65+), and how this differs from that used by young adults (aged 18-30), remains to be determined. To investigate this issue, the eye movement behavior of young and older adult readers was assessed using a gaze-contingent moving-window paradigm in which text was shown normally within a region centered at the point of gaze, whereas text outside this region was filtered to contain only low, medium, or high spatial frequencies. For young adults, reading times were affected by spatial frequency content when windows of normal text extended up to nine characters wide. Within this processing region, the reading performance of young adults was affected little when text outside the window contained either only high or medium spatial frequencies, but was disrupted substantially when text contained only low spatial frequencies. By contrast, the reading performance of older adults was affected by spatial frequency content when windows extended up to 18 characters wide. Moreover, within this extended processing region, reading performance was disrupted when text contained any one band of spatial frequencies, but was disrupted most of all when text contained only high spatial frequencies. These findings indicate that older adults are sensitive to the spatial frequency content of text from a much wider region than young adults, and rely much more than young adults on coarse-scale components of text when reading. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Effects of adult aging on reading filtered text: Evidence from eye movements Journal Article In: PeerJ, vol. 1, pp. 1–16, 2013. @article{Paterson2013a, Objectives. Sensitivity to spatial frequencies changes with age and this may have profound effects on reading. But how the actual contributions to reading performance made by the spatial frequency content of text differ between young (18-30 years) and older (65+ years) adults remains to be fully determined. Accordingly, we manipulated the spatial frequency content of text and used eye movement measures to assess the effects on reading performance in both age groups. Method. Sentences were displayed as normal or filtered to contain only very low, low, medium, high, or very high spatial frequencies. Reading time and eye movements were recorded as participants read each sentence. Results. Both age groups showed good overall reading ability and high levels of comprehension. However, for young adults, normal performance was impaired only by low and very low spatial frequencies, whereas normal performance for older adults was impaired by all spatial frequencies but least of all by medium. Conclusion. While both young and older adults read and comprehended well, reading ability was supported by different spatial frequencies in each age group. Thus, although spatial frequency sensitivity can change with age, adaptive responses to this change can help maintain reading performance in later life. |
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan Aging and the control of binocular fixations during reading Journal Article In: Psychology and Aging, vol. 28, no. 3, pp. 789–795, 2013. @article{Paterson2013b, Older adults (65+ years) often have greater difficulty in reading than young adults (18–30 years). However, the extent to which this difficulty is attributable to impaired eye-movement control is uncertain. To address this issue, the alignment and location of the two eyes' fixations during reading were monitored for young and older adults. Older adults showed typical patterns of reading difficulty but the results revealed no age differences in the alignment or location of the two eyes' fixations. Thus, the difficulty older adults experience in reading is not related to oculomotor control, which appears to be preserved into older age. |
Pierre-Vincent Paubel; Philippe Averty; Éric Raufaste Effects of an automated conflict solver on the visual activity of air traffic controllers Journal Article In: International Journal of Aviation Psychology, vol. 23, no. 2, pp. 181–196, 2013. @article{Paubel2013, ERASMUS is a "subliminal" automated aid system designed to reduce air traffic controllers' workload. Prior experiments showed that ERASMUS reduced subjective ratings of mental workload. In this article, the effect of ERASMUS on objective measures of controllers' visual activity was tested in a fully realistic simulation environment. The eye movements of 7 controllers were recorded during experimental traffic sequences, with and without ERASMUS. Consistent with a reduced workload hypothesis, results showed medium to large effects of ERASMUS on the amplitude of saccades, on the time spent gazing at aircraft, and on the distribution of attention over the visual scene. |
Christopher J. Peck; Brian Lau; C. Daniel Salzman The primate amygdala combines information about space and value Journal Article In: Nature Neuroscience, vol. 16, no. 3, pp. 340–348, 2013. @article{Peck2013, A stimulus predicting reinforcement can trigger emotional responses, such as arousal, and cognitive ones, such as increased attention toward the stimulus. Neuroscientists have long appreciated that the amygdala mediates spatially nonspecific emotional responses, but it remains unclear whether the amygdala links motivational and spatial representations. To test whether amygdala neurons encode spatial and motivational information, we presented reward-predictive cues in different spatial configurations to monkeys and assessed how these cues influenced spatial attention. Cue configuration and predicted reward magnitude modulated amygdala neural activity in a coordinated fashion. Moreover, fluctuations in activity were correlated with trial-to-trial variability in spatial attention. Thus, the amygdala integrates spatial and motivational information, which may influence the spatial allocation of cognitive resources. These results suggest that amygdala dysfunction may contribute to deficits in cognitive processes normally coordinated with emotional responses, such as the directing of attention toward the location of emotionally relevant stimuli. |
Florian Perdreau; Patrick Cavanagh The artist's advantage: Better integration of object information across eye movements Journal Article In: i-Perception, vol. 4, no. 6, pp. 380–395, 2013. @article{Perdreau2013, Over their careers, figurative artists spend thousands of hours analyzing objects and scene layout. We examined what impact this extensive training has on the ability to encode complex scenes, comparing participants with a wide range of training and drawing skills on a possible versus impossible objects task. We used a gaze-contingent display to control the amount of information the participants could sample on each fixation either from central or peripheral visual field. Test objects were displayed and participants reported, as quickly as possible, whether the object was structurally possible or not. Our results show that when viewing the image through a small central window, performance improved with the years of training, and to a lesser extent with the level of skill. This suggests that the extensive training itself confers an advantage for integrating object structure into more robust object descriptions. |
Manuel Perea Why does the APA recommend the use of serif fonts? Journal Article In: Psicothema, vol. 25, no. 1, pp. 13–17, 2013. @article{Perea2013, Background: The publication norms of the American Psychological Association recommend the use of a serif font in the manuscripts (Times New Roman). However, there seems to be no well-substantiated reason why serif fonts would produce any advantage during letter/word processing. Method: This study presents an experiment in which sentences were presented either with a serif or sans serif font from the same family while participants' eye movements were monitored. Results: Results did not reveal any differences of type of font in eye movement measures –except for a minimal effect in the number of progressive saccades. Conclusions: There is no reason why the APA publication norms recommend the use of serif fonts other than uniformity in the elaboration/presentation of the manuscripts. |
Laura Pérez Zapata; J. A. Aznar-Casanova; H. Supèr Two stages of programming eye gaze shifts in 3-D space Journal Article In: Vision Research, vol. 86, pp. 15–26, 2013. @article{PerezZapata2013, Accurate saccadic and vergence eye movements towards selected visual targets are fundamental to perceive the 3-D environment. Despite this importance, shifts in eye gaze are not always perfect given that they are frequently followed by small corrective eye movements. The oculomotor system receives distinct information from various visual cues that may cause incongruity in the planning of a gaze shift. To test this idea, we analyzed eye movements in humans performing a saccade task in a 3-D setting. We show that saccades and vergence movements towards peripheral targets are guided by monocular (perceptual) cues. Approximately 200 ms after the start of fixation at the perceived target, a fixational saccade corrected the eye positions to the physical target location. Our findings suggest that shifts in eye gaze occur in two phases: a large eye movement toward the perceived target location followed by a corrective saccade that directs the eyes to the physical target location. |
Adam M. Perkins; Ulrich Ettinger; K. Weaver; Anne Schmechtig; A. Schrantee; P. D. Morrison; A. Sapara; V. Kumari; Steve C. R. Williams; P. J. Corr In: Translational Psychiatry, vol. 3, pp. e246, 2013. @article{Perkins2013, Clinically effective drugs against human anxiety and fear systematically alter the innate defensive behavior of rodents, suggesting that in humans these emotions reflect defensive adaptations. Compelling experimental human evidence for this theory is yet to be obtained. We report the clearest test to date by investigating the effects of 1 and 2 mg of the anti-anxiety drug lorazepam on the intensity of threat-avoidance behavior in 40 healthy adult volunteers (20 females). We found lorazepam modulated the intensity of participants' threat-avoidance behavior in a dose-dependent manner. However, the pattern of effects depended upon two factors: type of threat-avoidance behavior and theoretically relevant measures of personality. In the case of flight behavior (one-way active avoidance), lorazepam increased intensity in low scorers on the Fear Survey Schedule tissue-damage fear but reduced it in high scorers. Conversely, in the case of risk-assessment behavior (two-way active avoidance), lorazepam reduced intensity in low scorers on the Spielberger trait anxiety but increased it in high scorers. Anti-anxiety drugs do not systematically affect rodent flight behavior; therefore, we interpret this new finding as suggesting that lorazepam has a broader effect on defense in humans than in rodents, perhaps by modulating general perceptions of threat intensity. The different patterning of lorazepam effects on the two behaviors implies that human perceptions of threat intensity are nevertheless distributed across two different neural streams, which influence effects observed on one-way or two-way active avoidance demanded by the situation. |
Melanie Perron; Annie Roy-Charland Analysis of eye movements in the judgment of enjoyment and non-enjoyment smiles Journal Article In: Frontiers in Psychology, vol. 4, pp. 659, 2013. @article{Perron2013, Enjoyment smiles are more often associated with the simultaneous presence of the Cheek raiser and Lip corner puller action units, and these units' activation is more often symmetric. Research on the judgment of smiles indicated that individuals are sensitive to these types of indices, but it also suggested that their ability to perceive these specific indices might be limited. The goal of the current study was to examine perceptual-attentional processing of smiles by using eye movement recording in a smile judgment task. Participants were presented with three types of smiles: a symmetric Duchenne, a non-Duchenne, and an asymmetric smile. Results revealed that the Duchenne smiles were judged happier than those with characteristics of non-enjoyment. Asymmetric smiles were also judged happier than the non-Duchenne smiles. Participants were as effective in judging the latter smiles as not really happy as they were in judging the symmetric Duchenne smiles as happy. Furthermore, they did not spend more time looking at the eyes or mouth regardless of types of smiles. While participants made more saccades between each side of the face for the asymmetric smiles than the symmetric ones, they judged the asymmetric smiles more often as really happy than not really happy. Thus, processing of these indices does not seem limited to perceptual-attentional difficulties as reflected in viewing behavior. |
Yoni Pertzov; Paul M. Bays; Sabine Joseph; Masud Husain Rapid forgetting prevented by retrospective attention cues Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1224–1231, 2013. @article{Pertzov2013, Recent studies have demonstrated that memory performance can be enhanced by a cue which indicates the item most likely to be subsequently probed, even when that cue is delivered seconds after a stimulus array is extinguished. Although such retro-cuing has attracted considerable interest, the mechanisms underlying it remain unclear. Here, we tested the hypothesis that retro-cues might protect an item from degradation over time. We employed two techniques that previously have not been deployed in retro-cuing tasks. First, we used a sensitive, continuous scale for reporting the orientation of a memorized item, rather than binary measures (change or no change) typically used in previous studies. Second, to investigate the stability of memory across time, we also systematically varied the duration between the retro-cue and report. Although accuracy of reporting uncued objects rapidly declined over short intervals, retro-cued items were significantly more stable, showing negligible decline in accuracy across time and protection from forgetting. Retro-cuing an object's color was just as advantageous as spatial retro-cues. These findings demonstrate that during maintenance, even when items are no longer visible, attention resources can be selectively redeployed to protect the accuracy with which a cued item can be recalled over time, but with a corresponding cost in recall for uncued items. |
Claudia Peschke; Claus C. Hilgetag; Bettina Olk Influence of stimulus type on effects of flanker, flanker position, and trial sequence in a saccadic eye movement task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2253–2267, 2013. @article{Peschke2013, Using the flanker paradigm in a task requiring eye movement responses, we examined how stimulus type (arrows vs. letters) modulated effects of flanker and flanker position. Further, we examined trial sequence effects and the impact of stimulus type on these effects. Participants responded to a central target with a left- or rightward saccade. We reasoned that arrows, being overlearned symbols of direction, are processed with less effort and are therefore linked more easily to a direction and a required response than are letters. The main findings demonstrate that (a) flanker effects were stronger for arrows than for letters, (b) flanker position more strongly modulated the flanker effect for letters than for arrows, and (c) trial sequence effects partly differed between the two stimulus types. We discuss these findings in the context of a more automatic and effortless processing of arrow relative to letter stimuli. |
Anders Petersen; Søren Kyllingsbæk Eye movements and practice effects in the attentional dwell time paradigm Journal Article In: Experimental Psychology, vol. 60, no. 1, pp. 22–33, 2013. @article{Petersen2013a, In the attentional dwell time paradigm by Duncan, Ward, and Shapiro (1994), two backward masked targets are presented at different spatial locations and separated by a varying time interval. Results show that report of the second target is severely impaired when the time interval is less than 500 ms which has been taken as a direct measure of attentional dwell time in human vision. However, we show that eye movements may have confounded the estimate of the dwell time and that the measure may not be robust as previously suggested. The latter is supported by evidence suggesting that intensive training strongly attenuates the dwell time because of habituation to the masks. Thus, this article points to eye movements and masking as two potential methodological pitfalls that should be considered when using the attentional dwell time paradigm to investigate the temporal dynamics of attention. |
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen Attentional dwell times for targets and masks Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–12, 2013. @article{Petersen2013, Studies on the temporal dynamics of attention have shown that the report of a masked target (T2) is severely impaired when the target is presented with a delay (stimulus onset asynchrony) of less than 500 ms after a spatially separate masked target (T1). This is known as the attentional dwell time. Recently, we have proposed a computational model of this effect building on the idea that a stimulus retained in visual short-term memory (VSTM) takes up visual processing resources that otherwise could have been used to encode subsequent stimuli into VSTM. The resources are locked until the stimulus in VSTM has been recoded, which explains the long dwell time. Challenges for this model and others are findings by Moore, Egeth, Berglan, and Luck (1996) suggesting that the dwell time is substantially reduced when the mask of T1 is removed. Here we suggest that the mask of T1 modulates performance not by noticeably affecting the dwell time but instead by acting as a distractor drawing processing resources away from T2. This is consistent with our proposed model in which targets and masks compete for attentional resources and attention dwells on both. We tested the model by replicating the study by Moore et al., including a new condition in which T1 is omitted but the mask of T1 is retained. Results from this and the original study by Moore et al. are modeled with great precision. |
Matthew F. Peterson; Miguel P. Eckstein Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation Journal Article In: Psychological Science, vol. 24, no. 7, pp. 1216–1225, 2013. @article{Peterson2013, In general, humans tend to first look just below the eyes when identifying another person. Does everybody look at the same place on a face during identification, and, if not, does this variability in fixation behavior lead to functional consequences? In two conditions, observers had their free eye movements recorded while they performed a face-identification task. In another condition, the same observers identified faces while their gaze was restricted to specific locations on each face. We found substantial differences, which persisted over time, in where individuals chose to first move their eyes. Observers' systematic departure from a canonical, theoretically optimal fixation point did not correlate with performance degradation. Instead, each individual's looking preference corresponded to an idiosyncratic performance-maximizing point of fixation: Those who looked lower on the face performed better when forced to fixate the lower part of the face. The results suggest an observer-specific synergy between the face-recognition and eye movement systems that optimizes face-identification performance. |
Judith Peth; Johann S. C. Kim; Matthias Gamer Fixations and eye-blinks allow for detecting concealed crime related memories Journal Article In: International Journal of Psychophysiology, vol. 88, no. 1, pp. 96–103, 2013. @article{Peth2013, The Concealed Information Test (CIT) is a method of forensic psychophysiology that allows for revealing concealed crime related knowledge. Such detection is usually based on autonomic responses but there is a huge interest in other measures that can be acquired unobtrusively. Eye movements and blinks might be such measures but their validity is unclear. Using a mock crime procedure with a manipulation of the arousal during the crime as well as the delay between crime and CIT, we tested whether eye tracking measures allow for detecting concealed knowledge. Guilty participants showed fewer but longer fixations on central crime details and this effect was even present after stimulus offset and accompanied by a reduced blink rate. These ocular measures were partly sensitive for induction of emotional arousal and time of testing. Validity estimates were moderate but indicate that a significant differentiation between guilty and innocent subjects is possible. Future research should further investigate validity differences between gaze measures during a CIT and explore the underlying mechanisms. |
Kati Pettersson; Sharman Jagadeesan; Kristian Lukander; Andreas Henelius; Edward Hæggström; Kiti Müller Algorithm for automatic analysis of electro-oculographic data Journal Article In: BioMedical Engineering Online, vol. 12, no. 1, pp. 1–17, 2013. @article{Pettersson2013, BACKGROUND: Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. METHODS: The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. RESULTS: The algorithm achieved 93% detection sensitivity for blinks with 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. CONCLUSION: The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement. |
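The auto-calibrating threshold idea described in the Pettersson et al. abstract can be illustrated with a minimal sketch. Note that this is not the authors' published algorithm: the `detect_events` function name, the median + MAD threshold rule, and the parameter values are all assumptions chosen for illustration only.

```python
import numpy as np

def detect_events(signal, fs, k=5.0):
    """Flag candidate saccade/blink samples in a 1-D EOG trace.

    Auto-calibrates a velocity threshold from the signal itself using a
    robust median + k*MAD rule -- an illustrative stand-in for the
    paper's feature-based threshold estimation.
    """
    velocity = np.gradient(signal) * fs              # approx. deg/s if signal is in deg
    mad = np.median(np.abs(velocity - np.median(velocity)))
    threshold = np.median(velocity) + k * mad        # data-driven threshold
    return np.abs(velocity) > threshold

# Synthetic trace: fixation noise plus one step-like "saccade" at t = 0.5 s
fs = 500                                             # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
eog = np.random.default_rng(0).normal(0, 0.05, t.size)
eog[250:] += 5.0                                     # 5-deg position step
events = detect_events(eog, fs)                      # samples near index 250 exceed threshold
```

In a real pipeline, runs of above-threshold samples would then be grouped into discrete events and classified as saccades or blinks by amplitude and duration, which is where the paper's feature-based calibration comes in.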
Sotiris Plainis; Dionysia Petratou; Trisevgeni Giannakopoulou; Hema Radhakrishnan; Ioannis G. Pallikaris; W. Neil Charman Interocular differences in visual latency induced by reduced-aperture monovision Journal Article In: Ophthalmic and Physiological Optics, vol. 33, no. 2, pp. 123–129, 2013. @article{Plainis2013, PURPOSE: To explore the interocular differences in the temporal responses of the eyes induced by the monocular use of small-aperture optics designed to aid presbyopes by increasing their depth-of-focus. METHODS: Monocular and binocular pattern-reversal visual evoked potentials (VEPs) were measured at a mean photopic field luminance of 30 cd/m(2) in seven normal subjects with either natural pupils or when the non-dominant eye wore a small-aperture contact lens (aperture diameter 1.5, 2.5 or 3.5 mm, or an annular opaque stop of inner and outer diameters 1.5 and 4.0 mm respectively). Responses were also measured with varying stimulus luminance (5, 13.9, 27.2 and 45 cd/m(2)) and a fixed 3.0 mm artificial pupil. RESULTS: Mean natural pupil diameters were 4.7 and 4.4 mm under monocular and binocular conditions respectively. The small-aperture contact lenses reduced the amplitude of the P100 component of the VEP and increased its latency. Inter-ocular differences in latency rose to about 20-25 ms when the pupil diameter of the non-dominant eye was reduced to 1.5 mm. The measurements with fixed pupil and varying luminance suggested that the observed effects were explicable in terms of the changes in retinal illuminance produced by the restrictions in pupil area. CONCLUSIONS: The anisocoria induced by small-aperture approaches to aid presbyopes produces marked interocular differences in visual latency. The literature of the Pulfrich effect suggests that such differences can lead to distortions in the perception of relative movement and, in some cases, to possible hazard. |
Sotiris Plainis; Dionysia Petratou; Trisevgeni Giannakopoulou; Hema Radhakrishnan; Ioannis G. Pallikaris; W. Neil Charman Small-aperture monovision and the Pulfrich experience: Absence of neural adaptation effects Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75987, 2013. @article{Plainis2013a, PURPOSE: To explore whether adaptation reduces the interocular visual latency differences and the induced Pulfrich effect caused by the anisocoria implicit in small-aperture monovision. METHODS: Anisocoric vision was simulated in two adults by wearing in the non-dominant eye for 7 successive days, while awake, an opaque soft contact lens (CL) with a small, central, circular aperture. This was repeated with aperture diameters of 1.5 and 2.5 mm. Each day, monocular and binocular pattern-reversal Visual Evoked Potentials (VEP) were recorded. Additionally, the Pulfrich effect was measured: the task of the subject was to state whether a 2-deg spot appeared in front or behind the plane of a central cross when moved left-to-right or right-to-left on a display screen. The retinal illuminance of the dominant eye was varied using neutral density (ND) filters to establish the ND value which eliminated the Pulfrich effect for each lens. All experiments were performed at luminance levels of 5 and 30 cd/m(2). RESULTS: Interocular differences in monocular VEP latency (at 30 cd/m(2)) rose to about 12-15 ms and 20-25 ms when the CL aperture was 2.5 and 1.5 mm, respectively. The effect was more pronounced at 5 cd/m(2) (i.e. with larger natural pupils). A strong Pulfrich effect was observed under all conditions, with the effect being less striking for the 2.5 mm aperture. No neural adaptation appeared to occur: neither the interocular differences in VEP latency nor the ND value required to null the Pulfrich effect reduced over each 7-day period of anisocoric vision.
CONCLUSIONS: Small-aperture monovision produced marked interocular differences in visual latency and a Pulfrich experience. These were not reduced by adaptation, perhaps because the natural pupil diameter of the dominant eye was continually changing throughout the day due to varying illumination and other factors, making adaptation difficult. |
Marc Pomplun; Tyler W. Garaas; Marisa Carrasco The effects of task difficulty on visual search strategy in virtual 3D displays Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–22, 2013. @article{Pomplun2013, Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. |
Feng Du; Yue Qi; Xingshan Li; Kan Zhang Dual processes of oculomotor capture by abrupt onset: Rapid involuntary capture and sluggish voluntary prioritization Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e80678, 2013. @article{Du2013, The present study showed that there are two distinctive processes underlying oculomotor capture by abrupt onset. When a visual mask between the cue and the target eliminates the unique luminance transient of an onset, the onset still attracts attention in a top-down fashion. This memory-based prioritization of onset is voluntarily controlled by the knowledge of target location. But when there is no visual mask between the cue and the target, the onset captures attention mainly in a bottom-up manner. This transient-driven capture of onset is involuntary because it occurs even when the onset is completely irrelevant to the target location. In addition, the present study demonstrated distinctive temporal characteristics for these two processes. The involuntary capture driven by luminance transients is rapid and brief, whereas the memory- based voluntary prioritization of onset is more sluggish and long-lived. |
Stéphanie Ducrot; Joël Pynte; Alain Ghio; Bernard Lété Visual and linguistic determinants of the eyes' initial fixation position in reading development Journal Article In: Acta Psychologica, vol. 142, no. 3, pp. 287–298, 2013. @article{Ducrot2013, Two eye-movement experiments with one hundred and seven first- through fifth-grade children were conducted to examine the effects of visuomotor and linguistic factors on the recognition of words and pseudowords presented in central vision (using a variable-viewing-position technique) and in parafoveal vision (shifted to the left or right of a central fixation point). For all groups of children, we found a strong effect of stimulus location, in both central and parafoveal vision. This effect corresponds to the children's apparent tendency, for peripherally located targets, to reach a position located halfway between the middle and the left edge of the stimulus (preferred viewing location, PVL), whether saccading to the right or left. For centrally presented targets, refixation probability and lexical-decision time were the lowest near the word's center, suggesting an optimal viewing position (OVP). The viewing-position effects found here were modulated (1) by print exposure, both in central and parafoveal vision; and (2) by the intrinsic qualities of the stimulus (lexicality and word frequency) for targets in central vision but not for parafoveally presented targets. |
Carolin Dudschig; Jan L. Souman; Martin Lachmair; Irmgard de la Vega; Barbara Kaup Reading "sun" and looking up: The influence of language on saccadic eye movements in the vertical dimension Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e56872, 2013. @article{Dudschig2013, Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even if the words do not directly convey directional information. |
Magda L. Dumitru; Gitte H. Joergensen; Alice G. Cruickshank; Gerry T. M. Altmann Language-guided visual processing affects reasoning: The role of referential and spatial anchoring Journal Article In: Consciousness and Cognition, vol. 22, no. 2, pp. 562–571, 2013. @article{Dumitru2013, Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process. |
Jon Andoni Duñabeitia; María Dimitropoulou; Adelina Estévez; Manuel Carreiras The influence of reading expertise in mirror-letter perception: Evidence from beginning and expert readers Journal Article In: Mind, Brain, and Education, vol. 7, no. 2, pp. 124–135, 2013. @article{Dunabeitia2013, The visual word recognition system recruits neuronal systems originally developed for object perception which are characterized by orientation insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to "unlearn" this natural tolerance to mirror reversals in order to efficiently discriminate letters and words. Therefore, it is supposed that this unlearning process takes place in a gradual way and that reading expertise modulates mirror-letter discrimination. However, to date no supporting evidence for this has been obtained. We present data from an eye-movement study that investigated the degree of sensitivity to mirror-letters in a group of beginning readers and a group of expert readers. Participants had to decide which of the two strings presented on a screen corresponded to an auditorily presented word. Visual displays always included the correct target word and one distractor word. Results showed that those distractors that were the same as the target word except for the mirror lateralization of two internal letters attracted participants' attention more than distractors created by replacement of two internal letters. Interestingly, the time course of the effects was found to be different for the two groups, with beginning readers showing a greater tolerance (decreased sensitivity) to mirror-letters than expert readers. Implications of these findings are discussed within the framework of preceding evidence showing how reading expertise modulates letter identification. |
Paola E. Dussias; Jorge R. Valdés Kroff; Rosa E. Guzzardo Tamargo; Chip Gerfen When gender and looking go hand in hand Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 353–387, 2013. @article{Dussias2013, In a recent study, Lew-Williams and Fernald (2007) showed that native Spanish speakers use grammatical gender information encoded in Spanish articles to facilitate the processing of upcoming nouns. In this article, we report the results of a study investigating whether grammatical gender facilitates noun recognition during second language (L2) processing. Sixteen monolingual Spanish participants (control group) and 18 English-speaking learners of Spanish (evenly divided into high and low Spanish proficiency) saw two-picture visual scenes in which items matched or did not match in gender. Participants' eye movements were recorded while they listened to 28 sentences in which masculine and feminine target items were preceded by an article that agreed in gender with the two pictures or agreed only with one of the pictures. An additional group of 15 Italian learners of Spanish was tested to examine whether the presence of gender in the first language (L1) modulates the degree to which gender is used during L2 processing. Data were analyzed by comparing the proportion of eye fixations on the objects in each condition. Monolingual Spanish speakers looked sooner at the referent on different-gender trials than on same-gender trials, replicating results reported in past literature. Italian-Spanish bilinguals exhibited a gender anticipatory effect, but only for the feminine condition. For the masculine condition, participants waited to hear the noun before identifying the referent. Like the Spanish monolinguals, the highly proficient English-Spanish speakers showed evidence of using gender information during online processing, whereas the less proficient learners did not. The results suggest that both proficiency in the L2 and similarities between the L1 and the L2 modulate the usefulness of morphosyntactic information during speech processing. |
R. Becket Ebitz; Karli K. Watson; Michael L. Platt Oxytocin blunts social vigilance in the rhesus macaque Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 28, pp. 11630–11635, 2013. @article{Ebitz2013, Exogenous application of the neuromodulatory hormone oxytocin (OT) promotes prosocial behavior and can improve social function. It is unclear, however, whether OT promotes prosocial behavior per se, or whether it facilitates social interaction by reducing a state of vigilance toward potential social threats. To disambiguate these two possibilities, we exogenously delivered OT to male rhesus macaques, which have a characteristic pattern of species-typical social vigilance, and examined their performance in three social attention tasks. We first determined that, in the absence of competing task demands or goals, OT increased attention to faces and eyes, as in humans. By contrast, OT reduced species-typical social vigilance for unfamiliar, dominant, and emotional faces in two additional tasks. OT eliminated the emergence of a typical state of vigilance when dominant face images were available during a social image choice task. Moreover, OT improved performance on a reward-guided saccade task, despite salient social distractors: OT reduced the interference of unfamiliar faces, particularly emotional ones, when these faces were task irrelevant. Together, these results demonstrate that OT suppresses vigilance toward potential social threats in the rhesus macaque. We hypothesize that a basic role for OT in regulating social vigilance may have facilitated the evolution of prosocial behaviors in humans. |
Miguel P. Eckstein; Stephen C. Mack; Dorion B. Liston; Lisa Bogush; Randolf Menzel; Richard J. Krauzlis Rethinking human visual attention: Spatial cueing effects and optimality of decisions by honeybees, monkeys and humans Journal Article In: Vision Research, vol. 85, pp. 5–9, 2013. @article{Eckstein2013, Visual attention is commonly studied by using visuo-spatial cues indicating probable locations of a target and assessing the effect of the validity of the cue on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more real world conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller in size than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys and bees show the smallest effect. The cueing effect and overall performance of the honeybees allows rejection of the models in which the bees are ignoring the cue, following the cue and disregarding stimuli to be discriminated, or adopting a probability matching strategy. 
Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength of an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer for the case of stimulus strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings on the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans. |
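The Bayesian ideal observer used as a benchmark in the Eckstein et al. study weights sensory evidence by cue validity. The following is a minimal sketch of that idea for a two-location, two-alternative task, assuming Gaussian sensory noise; the signal strength, cue validity, and trial counts are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_observer_trial(target, cue, signal=1.0, validity=0.7, noise=1.0):
    """One 2AFC trial: choose the location with the highest posterior probability."""
    x = rng.normal(0.0, noise, 2)        # noisy sensory samples at the two locations
    x[target] += signal                  # the target location carries the signal
    # Prior over target location, set by cue validity
    prior = np.where(np.arange(2) == cue, validity, 1.0 - validity)
    # For equal-variance Gaussian noise, posterior ∝ prior * exp(signal * x / noise^2)
    posterior = prior * np.exp(signal * x / noise**2)
    return int(np.argmax(posterior) == target)

def accuracy(cue_valid, n=20000):
    hits = 0
    for _ in range(n):
        target = rng.integers(2)
        cue = target if cue_valid else 1 - target
        hits += ideal_observer_trial(target, cue)
    return hits / n

acc_valid, acc_invalid = accuracy(True), accuracy(False)
# The validity-weighted prior yields higher accuracy on valid- than invalid-cue trials
```

With these assumed parameters the validity-weighted prior produces the qualitative cueing effect discussed above; the suboptimal comparison models (ignore the cue, always follow the cue, probability matching) amount to replacing the prior with a flat, degenerate, or sampled one.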
Jose Ignacio Egaña; Christ Devia; Rocío Mayol; Javiera Parrini; Gricel Orellana; Aida Ruiz; Pedro E. Maldonado Small saccades and image complexity during free viewing of natural images in schizophrenia Journal Article In: Frontiers in Psychiatry, vol. 4, pp. 37, 2013. @article{Egana2013, In schizophrenia, patients display dysfunctions during the execution of simple visual tasks such as antisaccade or smooth pursuit. In more ecological scenarios, such as free viewing of natural images, patients appear to make fewer and longer visual fixations and display shorter scanpaths. It is not clear whether these measurements reflect alterations in their proficiency to perform basic eye movements, such as saccades and fixations, or are related to high-level mechanisms, such as exploration or attention. We utilized free exploration of natural images of different complexities as a model of an ecological context where normally operative mechanisms of visual control can be accurately measured. We quantified visual exploration as Euclidean distance, scanpaths, saccades, and visual fixation, using the standard SR-Research eye tracker algorithm (SR). We then compared this result with a computation that includes microsaccades (EM). We evaluated eight schizophrenia patients and corresponding healthy controls (HC). Next, we tested whether the decrement in the number of saccades and fixations, as well as their increment in duration reported previously in schizophrenia patients, resulted from the increasing occurrence of undetected microsaccades. We found that when utilizing the standard SR algorithm, patients displayed shorter scanpaths as well as fewer and shorter saccades and fixations. When we employed the EM algorithm, the differences in these parameters between patients and HC were no longer significant. 
On the other hand, we found that image complexity plays an important role in exploratory behaviors, demonstrating that this factor explains most of the differences between eye-movement behaviors in schizophrenia patients. These results help elucidate the mechanisms of visual motor control that are affected in schizophrenia and contribute to the finding of adequate markers for diagnosis and treatment for this condition. |
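The SR-versus-EM contrast in the Egaña et al. study turns on whether small saccades are detected at all. Microsaccade detection is commonly done with a velocity-threshold algorithm in the spirit of Engbert and Kliegl's method: compute eye velocity, estimate a robust (median-based) noise level, and flag samples exceeding a multiple of it. A minimal sketch with illustrative parameters (the sampling rate, threshold multiplier, and noise levels are assumptions, not taken from this study):

```python
import numpy as np

def detect_saccade_samples(x, y, dt, lam=6.0):
    """Flag samples whose 2D eye velocity exceeds a robust, median-based threshold."""
    vx = np.gradient(x, dt)              # horizontal velocity (deg/s)
    vy = np.gradient(y, dt)              # vertical velocity (deg/s)
    # Median-based velocity SD estimates, robust to the saccadic outliers themselves
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    # Elliptic test: a sample is saccadic if it falls outside the threshold ellipse
    return (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

# Synthetic 1 kHz trace: fixational noise plus one ~1-degree rightward saccade
rng = np.random.default_rng(1)
n, dt = 1000, 0.001
x = rng.normal(0.0, 0.01, n)
y = rng.normal(0.0, 0.01, n)
x[500:510] += np.linspace(0.0, 1.0, 10)  # ~100 deg/s velocity ramp
x[510:] += 1.0
flags = detect_saccade_samples(x, y, dt)
```

Whether a movement this small is parsed as a saccade or absorbed into a fixation depends on exactly such threshold choices, which is why the two parsing algorithms yield different fixation counts and durations.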
Caroline Ego; Jean-Jacques Orban de Xivry; Marie-Cécile Nassogne; Demet Yüksel; Philippe Lefèvre The saccadic system does not compensate for the immaturity of the smooth pursuit system during visual tracking in children Journal Article In: Journal of Neurophysiology, vol. 110, no. 2, pp. 358–367, 2013. @article{Ego2013, Motor skills improve with age from childhood into adulthood, and this improvement is reflected in the performance of smooth pursuit eye movements. In contrast, the saccadic system becomes mature earlier than the smooth pursuit system. Therefore, the present study investigates whether the early mature saccadic system compensates for the lower pursuit performance during childhood. To answer this question, horizontal eye movements were recorded in 58 children (ages 5-16 yr) and 16 adults (ages 23-36 yr) in a task that required the combination of smooth pursuit and saccadic eye movements. Smooth pursuit performance improved with age. However, children had larger average position error during target tracking compared with adults, but they did not execute more saccades to compensate for their low pursuit performance despite the early maturity of their saccadic system. This absence of error correction suggests that children have a lower sensitivity to visual errors compared with adults. This reduced sensitivity might stem from poor internal models and longer processing time in young children. |
Kirsten A. Dalrymple; Alexander K. Gray; Brielle L. Perler; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone Eyeing the eyes in social scenes: Evidence for top-down control of stimulus selection in simultanagnosia Journal Article In: Cognitive Neuropsychology, vol. 30, no. 1, pp. 25–40, 2013. @article{Dalrymple2013, Simultanagnosia is a disorder of visual attention resulting from bilateral parieto-occipital lesions. Healthy individuals look at eyes to infer people's attentional states, but simultanagnosics allocate abnormally few fixations to eyes in scenes. It is unclear why simultanagnosics fail to fixate eyes, but it might reflect that they are (a) unable to locate and fixate them, or (b) do not prioritize attentional states. We compared eye movements of simultanagnosic G.B. to those of healthy subjects viewing scenes normally or through a restricted window of vision. They described scenes and explicitly inferred attentional states of people in scenes. G.B. and subjects viewing scenes through a restricted window made few fixations on eyes when describing scenes, yet increased fixations on eyes when inferring attention. Thus G.B. understands that eyes are important for inferring attentional states and can exert top-down control to seek out and process the gaze of others when attentional states are of interest. |
Michael Dambacher; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner Evidence for direct control of eye movements during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1468–1484, 2013. @article{Dambacher2013, It is well established that fixation durations during reading vary with processing difficulty, but there are different views on how oculomotor control, visual perception, shifts of attention, and lexical (and higher cognitive) processing are coordinated. Evidence for a one-to-one translation of input delay into saccadic latency would provide a much needed constraint for current theoretical proposals. Here, we tested predictions of such a direct-control perspective using the stimulus-onset delay (SOD) paradigm. Words in sentences were initially masked and, on fixation, were individually unmasked with a delay (0-, 33-, 66-, 99-ms SODs). In Experiment 1, SODs were constant for all words in a sentence; in Experiment 2, SODs were manipulated on target words, while nontargets were unmasked without delay. In accordance with predictions of direct control, nonzero SODs entailed equivalent increases in fixation durations in both experiments. Yet, a population of short fixations pointed to rapid saccades as a consequence of low-level information at nonoptimal viewing positions rather than of lexical processing. Implications of these results for theoretical accounts of oculomotor control are discussed. |
Natasha Dare; Richard C. Shillcock Serial and parallel processing in reading: Investigating the effects of parafoveal orthographic information on nonisolated word recognition Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 487–504, 2013. @article{Dare2013, We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual. |
Ido Davidesco; Michal Harel; Michal Ramot; Uri Kramer; Svetlana Kipervasser; Fani Andelman; Miri Y. Neufeld; Gadi Goelman; Itzhak Fried; Rafael Malach Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy Journal Article In: Journal of Neuroscience, vol. 33, no. 3, pp. 1228–1240, 2013. @article{Davidesco2013, One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30–90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. 
They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties. |
C. Hemptinne; Adrian Ivanoiu; Philippe Lefèvre; Marcus Missal How does Parkinson's disease and aging affect temporal expectation and the implicit timing of eye movements? Journal Article In: Neuropsychologia, vol. 51, no. 2, pp. 340–348, 2013. @article{Hemptinne2013, Anticipatory eye movements are often evoked by the temporal expectation of an upcoming event. Temporal expectation is based on implicit timing about when a future event could occur. Implicit timing emerges from observed temporal regularities in a changing stimulus without any voluntary estimate of elapsed time, unlike explicit timing. The neural bases of explicit and implicit timing are likely different. It has been shown that the basal ganglia (BG) play a central role in explicit timing. In order to determine the influence of BG in implicit timing, we investigated the influence of early Parkinson's disease (PD) and aging on the latency of anticipatory eye movements. We hypothesized that a deficit of implicit timing should yield inadequate temporal expectations, and consequently abnormally timed anticipatory eye movements compared with age-matched controls. To test this hypothesis, we used an oculomotor paradigm where anticipation of a salient target event plays a central role. Participants pursued a visual target that moved along a circular path at a constant velocity. After a randomly short (1200 ms) or long (2400 ms) forward path, the target reversed direction, returned to its starting position and stopped. Target motion reversal caused an abrupt 'slip' of the pursued target image on the retina and was a particularly salient event evoking anticipatory eye movements. Anticipatory eye movements were less frequent in PD patients. However, the timing of anticipation of target motion reversal was statistically similar in PD patients and control subjects. Other eye movements showed statistically significant differences between PD and controls, but these differences could be attributed to other factors. 
We conclude that all anticipatory eye movements are not similarly impaired in PD and that implicit timing of salient events seems largely unaffected by this disease. The results support the hypothesis that implicit and explicit timing are differently affected by BG dysfunction. |
Maria De Luca; Maria Pontillo; Silvia Primativo; Donatella Spinelli; Pierluigi Zoccolotti The eye-voice lead during oral reading in developmental dyslexia Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 696, 2013. @article{DeLuca2013, In reading aloud, the eye typically leads over voice position. In the present study, eye movements and voice utterances were simultaneously recorded and tracked during the reading of a meaningful text to evaluate the eye-voice lead in 16 dyslexic and 16 same-age control readers. Dyslexic children were slower than control peers in reading texts. Their slowness was characterized by a great number of silent pauses and sounding-out behaviors and a small lengthening of word articulation times. Regarding eye movements, dyslexic readers made many more eye fixations (and generally smaller rightward saccades) than controls. Eye movements and voice (which were shifted in time because of the eye-voice lead) were synchronized in dyslexic readers as well as controls. As expected, the eye-voice lead was significantly smaller in dyslexic than control readers, confirming early observations by Buswell (1921) and Fairbanks (1937). The eye-voice lead was significantly correlated with several eye movements and voice parameters, particularly number of fixations and silent pauses. The difference in performance between dyslexic and control readers across several eye and voice parameters was expressed by a ratio of about 2. We propose that referring to proportional differences allows for a parsimonious interpretation of the reading deficit in terms of a single deficit in word decoding. The possible source of this deficit may call for visual or phonological mechanisms, including Goswami's temporal sampling framework. |
Miriam Ellert Resolving ambiguous pronouns in a second language: A visual-world eye-tracking study with Dutch learners of German Journal Article In: International Review of Applied Linguistics in Language Teaching, vol. 51, no. 2, pp. 171–197, 2013. @article{Ellert2013, This study examined whether resolving ambiguous pronouns in a second language is guided by the L1 preferences of the learners. Given the fact that the typologically closely related languages, German and Dutch, have both been found to use personal pronouns (German er, Dutch hij; 'he') to refer to topical antecedents, and d-pronouns (German der, Dutch die; 'he') for non-topical co-reference (Ellert 2010; Kaiser 2011; Kaiser and Trueswell 2004), it was asked whether Dutch L2 learners of German would exhibit similar preferences when resolving the two pronominal forms in their L2. This was examined with the visual-world eye-tracking paradigm and an off-line referent assignment task. The results showed differences in resolution patterns: the Dutch learners of German showed an overall topic preference across pronouns which became more target-like at higher proficiency levels. This suggests that L2 information organization cannot be merely explained by L1 influences, but needs to take more general L2 learner effects into account. |
James T. Enns; Sarah C. MacDonald The role of clarity and blur in guiding visual attention in photographs Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 2, pp. 568–578, 2013. @article{Enns2013, Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory and personality ratings for individual people in the photos (Experiments 1-3). The results showed that fixations occurred more rapidly and frequently to a local region of clarity than to a comparable blurred region in all tasks, independent of the content of the photo in the local region, and even under instructions to look equally at both regions. However, this bias was reversed when the content of the photos was no longer task-relevant. In Experiment 4, participants located target regions defined by either clarity or blur. Fixations and manual responses were faster for blurred than for sharp targets. These findings imply that the saliency of both image clarity and image blur depends on viewers' goals. Focusing on photo content prioritizes regions of clarity whereas focusing on photo quality prioritizes attention to regions of blur. |
Lei Cui; Denis Drieghe; Guoli Yan; Xuejun Bai; Hui Chi; Simon P. Liversedge Parafoveal processing across different lexical constituents in Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 2, pp. 403–416, 2013. @article{Cui2013a, We report a boundary paradigm eye movement experiment to investigate whether the linguistic category of a two-character Chinese string affects how the second character of that string is processed in the parafovea during reading. We obtained clear preview effects in all conditions but, more importantly, found parafoveal-on-foveal effects whereby a nonsense preview of the second character influenced fixations on the first character. This effect occurred for monomorphemic words, but not for compound words or phrases. Also, in a word boundary demarcation experiment, we demonstrate that Chinese readers are not always consistent in their judgements of which characters in a sentence constitute words. We conclude that information regarding the combinatorial properties of characters in Chinese is used online to moderate the extent to which parafoveal characters are processed. |
Lei Cui; Guoli Yan; Xuejun Bai; Jukka Hyönä; Suiping Wang; Simon P. Liversedge Processing of compound-word characters in reading Chinese: An eye-movement-contingent display change study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 527–547, 2013. @article{Cui2013, Readers' eye movements were monitored as they read Chinese two-constituent compound words in sentence contexts. The first compound-word constituent was either an infrequent character with a highly predictable second constituent or a frequent character with an unpredictable second constituent. The parafoveal preview of the second constituent was manipulated, with four preview conditions: identical to the correct form; a semantically related character to the second constituent; a semantically unrelated character to the second constituent; and a pseudocharacter. An invisible boundary was set between the two constituents; when the eyes moved across the boundary, the previewed character was changed to its intended form. The main findings were that preview effects occurred for the second constituent of the compound word. Providing an incorrect preview of the second constituent affected fixations on the first constituent, but only when the second constituent was predictable from the first. The frequency of the initial character of the compound constrained the identity of the second character, and this in turn modulated the extent to which the semantic characteristics of the preview influenced processing of the second constituent and the compound word as a whole. The results are considered in relation to current accounts of Chinese compound-word recognition and the constraint hypothesis of Hyönä, Bertram, and Pollatsek (2004). We conclude that word identification in Chinese is flexible, and parafoveal processing of upcoming characters is influenced both by the characteristics of the fixated character and by its relationship with the characters in the parafovea. |
Yuwei Cui; Liu D. Liu; Farhan A. Khawaja; Christopher C. Pack; Daniel A. Butts Diverse suppressive influences in area MT and selectivity to complex motion features Journal Article In: Journal of Neuroscience, vol. 33, no. 42, pp. 16715–16728, 2013. @article{Cui2013c, Neuronal selectivity results from both excitatory and suppressive inputs to a given neuron. Suppressive influences can often significantly modulate neuronal responses and impart novel selectivity in the context of behaviorally relevant stimuli. In this work, we use a naturalistic optic flow stimulus to explore the responses of neurons in the middle temporal area (MT) of the alert macaque monkey; these responses are interpreted using a hierarchical model that incorporates relevant nonlinear properties of upstream processing in the primary visual cortex (V1). In this stimulus context, MT neuron responses can be predicted from distinct excitatory and suppressive components. Excitation is spatially localized and matches the measured preferred direction of each neuron. Suppression is typically composed of two distinct components: (1) a directionally untuned component, which appears to play the role of surround suppression and normalization; and (2) a direction-selective component, with comparable tuning width as excitation and a distinct spatial footprint that is usually partially overlapping with excitation. The direction preference of this direction-tuned suppression varies widely across MT neurons: approximately one-third have overlapping suppression in the opposite direction as excitation, and many other neurons have suppression with similar direction preferences to excitation. There is also a population of MT neurons with orthogonally oriented suppression. We demonstrate that direction-selective suppression can impart selectivity of MT neurons to more complex velocity fields and that it can be used for improved estimation of the three-dimensional velocity of moving objects. Thus, considering MT neurons in a complex stimulus context reveals a diverse set of computations likely relevant for visual processing in natural visual contexts. |
Ian Cunnings; Claudia Felser The role of working memory in the processing of reflexives Journal Article In: Language and Cognitive Processes, vol. 28, no. 9, pp. 188–219, 2013. @article{Cunnings2013, We report results from two eye-movement experiments that examined how differences in working memory (WM) capacity affect readers' application of structural constraints on reflexive anaphor resolution during sentence comprehension. We examined whether binding Principle A, a syntactic constraint on the interpretation of reflexives, is reducible to a memory friendly "recency" strategy, and whether WM capacity influences the degree to which readers create anaphoric dependencies ruled out by binding theory. Our results indicate that low and high WM span readers applied Principle A early during processing. However, contrary to previous findings, low span readers also showed immediate intrusion effects of a linearly closer but structurally inaccessible competitor antecedent. We interpret these findings as indicating that although the relative prominence of potential antecedents in WM can affect online anaphor resolution, Principle A is not reducible to a processing or linear distance based "least effort" constraint. |
Roberta Daini; Andrea Albonico; Manuela Malaspina; Marialuisa Martelli; Silvia Primativo; Lisa S. Arduino Dissociation in optokinetic stimulation sensitivity between omission and substitution reading errors in neglect dyslexia Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 581, 2013. @article{Daini2013, Although omission and substitution errors in neglect dyslexia (ND) patients have always been considered as different manifestations of the same acquired reading disorder, recently, we proposed a new dual mechanism model. While omissions are related to the exploratory disorder which characterizes unilateral spatial neglect (USN), substitutions are due to a perceptual integration mechanism. A consequence of this hypothesis is that specific training for omission-type ND patients would aim at restoring the oculo-motor scanning and should not improve reading in substitution-type ND. With this aim we administered an optokinetic stimulation (OKS) to two brain-damaged patients with both USN and ND, MA and EP, who showed ND mainly characterized by omissions and substitutions, respectively. MA also showed an impairment in oculo-motor behavior with a non-reading task, while EP did not. The two patients presented a dissociation with respect to their sensitivity to OKS, so that, as expected, MA was positively affected, while EP was not. Our results confirm a dissociation between the two mechanisms underlying omission and substitution reading errors in ND patients. Moreover, they suggest that such a dissociation could possibly be extended to the effectiveness of rehabilitative procedures, and that patients who mainly omit contralesional-sided letters would benefit from OKS. |
Jelmer P. De Vries; Ignace T. C. Hooge; Alexander H. Wertheim; Frans A. J. Verstraten Background, an important factor in visual search Journal Article In: Vision Research, vol. 86, pp. 128–138, 2013. @article{DeVries2013, The ability to detect an object depends on the contrast between the object and its background. Despite this, many models of visual search rely solely on the properties of target and distractors, and do not take the background into account. Yet, both target and distractors have their individual contrasts with the background. These contrasts generally differ, because the target and distractors are different in at least one feature. Therefore, background is likely to play an important role in visual search. In three experiments we manipulated the properties of the background (luminance, orientation and spatial frequency, respectively) while keeping the target and distractors constant. In the first experiment, in which target and distractors had a different luminance, changing the background luminance had an extensive effect on search times. When background luminance was in between that of the target and distractors, search times were always short. Interestingly, when the background was darker than both the target and the distractors, search times were much longer than when the background was lighter. Manipulating orientation and spatial frequency of the background, on the other hand, resulted in search times that were longest for small target-background differences. Thus, background plays an important role in search. This role depends on the individual contrast of both target and distractors with the background and the type of feature contrast (luminance, orientation or spatial frequency). |
Thomas Deffieux; Youliana Younan; Nicolas Wattiez; Mickael Tanter; Pierre Pouget; Jean-François Aubry Low-intensity focused ultrasound modulates monkey visuomotor behavior Journal Article In: Current Biology, vol. 23, pp. 2430–2433, 2013. @article{Deffieux2013, In vivo feasibility of using low-intensity focused ultrasound (FUS) to transiently modulate the function of regional brain tissue has been recently tested in anesthetized lagomorphs [1] and rodents [2-4]. Hypothetically, ultrasonic stimulation of the brain possesses several advantages [5]: it does not necessitate surgery or genetic alteration but could ultimately confer spatial resolutions superior to other noninvasive methods. Here, we gauged the ability of noninvasive FUS to causally modulate high-level cognitive behavior. Therefore, we examined how FUS might interfere with prefrontal activity in two awake macaque rhesus monkeys that had been trained to perform an antisaccade (AS) task. We show that ultrasound significantly modulated AS latencies. Such effects proved to be dependent on FUS hemifield of stimulation (relative latency increases most for ipsilateral AS). These results are interpreted in terms of a modulation of saccade inhibition to the contralateral visual field due to the disruption of processing across the frontal eye fields. Our study demonstrates for the first time the feasibility of using FUS stimulation to causally modulate behavior in the awake nonhuman primate brain. This result supports the use of this approach to study brain function. Neurostimulation with ultrasound could be used for exploratory and therapeutic purposes noninvasively, with potentially unprecedented spatial resolution. |
Louis F. Dell'Osso; Jonathan B. Jacobs Normal pursuit-system limitations — first discovered in infantile nystagmus syndrome Journal Article In: Journal of Eye Movement Research, vol. 6, no. 1, pp. 1–24, 2013. @article{DellOsso2013, Infantile nystagmus syndrome (INS) patients occasionally have impaired pursuit. Model and patient data identified relative timing between target motion initiation and INS-waveform saccades as the cause. We used a new stimulus, the “step-pause-ramp” (SPR), to induce saccades proximal to target-velocity onset and test their effect on normal pursuit. Our OMS model predicted that proximal saccades impaired normal ramp responses, as in INS. Eye movements of subjects were calibrated monocularly and recorded binocularly; data were analyzed using OMtools software. Proximal saccades caused lengthened target acquisition times and steady-state position errors, confirming the model's predictions. Spontaneous pursuit oscillation supported the hypothesis that INS is caused by loss of smooth-pursuit damping. Smooth pursuit may be impaired by saccades overlapping target-motion onset. |
Alixia Demichelis; Gérard Olivier; Alain Berthoz Motor transfer from map ocular exploration to locomotion during spatial navigation from memory Journal Article In: Experimental Brain Research, vol. 224, no. 4, pp. 605–611, 2013. @article{Demichelis2013, Spatial navigation from memory can rely on two different strategies: a mental simulation of a kinesthetic spatial navigation (egocentric route strategy) or visual-spatial memory using a mental map (allocentric survey strategy). We hypothesized that a previously performed "oculomotor navigation" on a map could be used by the brain to perform a locomotor memory task. Participants were instructed to (1) learn a path on a map through a sequence of vertical and horizontal eye movements and (2) walk on the slabs of a "magic carpet" to recall this path. The main results showed that the anisotropy of ocular movements (horizontal ones being more efficient than vertical ones) influenced participants' performance when they changed direction on the central slab of the magic carpet. These data suggest that, to find their way through locomotor space, subjects mentally repeated their past ocular exploration of the map, and this visuo-motor memory was used as a template for the locomotor performance. |
Virginie Desestret; Nathalie Streichenberger; Muriel T. N. Panouillères; Denis Pélisson; B. Plus; Charles Duyckaerts; Dennis K. Burns; Christian Scheiber; Alain Vighetto; Caroline Tilikete An elderly woman with difficulty reading and abnormal eye movements Journal Article In: Journal of Neuro-Ophthalmology, vol. 33, no. 3, pp. 296–301, 2013. @article{Desestret2013, A 73-year-old woman was evaluated in our neuro-ophthalmology clinic with a 1-year history of progressive difficulty reading. The patient's visual acuity, pupillary reactions to light and near stimulation, visual fields, and fundi were normal. Examination of her eye movements revealed a supranuclear vertical gaze abnormality, characterized by lack of upward saccades but intact downward saccades. The patient also had difficulty initiating voluntary, especially leftward, horizontal saccades on command, but reactive horizontal saccades were relatively well preserved. She was able to follow a pencil light moved by the examiner using small saccades (saccadic smooth pursuit) and her vestibulo-ocular reflex (VOR) was intact. She had apraxia of lid closure. The patient had no cognitive deficit, behavioral or social disturbance, aphasia, alexia, limb apraxia, postural ataxia, pyramidal signs or parkinsonism. |
Joost C. Dessing; Michael Vesia; J. Douglas Crawford The role of areas MT+/V5 and SPOC in spatial and temporal control of manual interception: An rTMS study Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 15, 2013. @article{Dessing2013, Manual interception, such as catching or hitting an approaching ball, requires the hand to contact a moving object at the right location and at the right time. Many studies have examined the neural mechanisms underlying the spatial aspects of goal-directed reaching, but the neural basis of the spatial and temporal aspects of manual interception are largely unknown. Here, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the role of the human middle temporal visual motion area (MT+/V5) and superior parieto-occipital cortex (SPOC) in the spatial and temporal control of manual interception. Participants were required to reach-to-intercept a downward moving visual target that followed an unpredictably curved trajectory, presented on a screen in the vertical plane. We found that rTMS to MT+/V5 influenced interceptive timing and positioning, whereas rTMS to SPOC only tended to increase the spatial variance in reach end points for selected target trajectories. These findings are consistent with theories arguing that distinct neural mechanisms contribute to spatial, temporal, and spatiotemporal control of manual interception. |
Saurabh Dhawan; Heiner Deubel; Donatas Jonikaitis Inhibition of saccades elicits attentional suppression Journal Article In: Journal of Vision, vol. 13, no. 6, pp. 1–12, 2013. @article{Dhawan2013, Visuospatial attention has been shown to have a central role in planning and generation of saccades but what role, if any, it plays in inhibition of saccades remains unclear. In this study, we used an oculomotor delayed match- or nonmatch-to-sample task in which a cued location has to be encoded and memorized for one of two very different goals: to plan a saccade to it or to avoid making a saccade to it. We measured the spatial allocation of attention during the delay and found that while marking a location as a future saccade target resulted in an attentional benefit at that location, marking it as forbidden to saccades led to an attentional cost. Additionally, saccade trajectories were found to deviate away more from the "don't look" location than from a saccade-irrelevant distractor, confirming greater inhibition of an actively forbidden location in oculomotor programming. Our finding that attention is suppressed at locations forbidden to saccades confirms and complements the claim of a selective and obligatory coupling between saccades and attention: saccades at the memorized location could neither be planned nor suppressed independent of a corresponding effect on attentional performance. |
L. L. Di Stasi; M. Marchitto; A. Antolí; J. J. Cañas Saccadic peak velocity as an alternative index of operator attention: A short review Journal Article In: European Review of Applied Psychology, vol. 63, no. 6, pp. 335–343, 2013. @article{DiStasi2013, Introduction. Automation research has identified the need to monitor operator attentional states in real time as a basis for determining the most appropriate type and level of automated assistance for operators doing complex tasks. Objective. The development of a methodology that is able to detect on-line operator attentional state variations could represent a good starting point to solve this critical issue. Results. We present a short review of the literature on different indices of attentional state and discuss a series of experiments that demonstrates the validity and sensitivity of a specific eye movement index: saccadic peak velocity (PV). PV was able to detect variations in mental state while doing complex and ecological tasks, ranging from air traffic control simulated tasks to driving simulator sessions. Conclusion. This research could provide several guidelines for designing adaptive systems (able to allocate tasks between operators and machine in a dynamic way) and early fatigue-and-distraction warning systems to reduce accident risk. |
Leandro Luigi Di Stasi; Adoración Antolí; José J. Cañas Evaluating mental workload while interacting with computer-generated artificial environments Journal Article In: Entertainment Computing, vol. 4, no. 1, pp. 63–69, 2013. @article{DiStasi2013a, The need to evaluate user behaviour and cognitive efforts when interacting with complex simulations plays a crucial role in many information and communications technologies. The aim of this paper is to propose the use of eye-related measures as indices of mental workload in complex tasks. An experiment was conducted using the FireChief® microworld in which user mental workload was manipulated by changing the interaction strategy required to perform a common task. There were significant effects of the attentional state of users on visual scanning behavior. Longer fixations were found for the more demanding strategy, slower saccades were found as the time-on-task increased, and pupil diameter decreased when an environmental change was introduced. Questionnaire and performance data converged with the psychophysiological ones. These results provide additional empirical support for the ability of some eye-related indices to discriminate variations in the attentional state of the user in visual-dynamic complex tasks and show their potential diagnostic capacity in the field of applied ergonomics. |
Leandro Luigi Di Stasi; Andrés Catena; José J. Cañas; Stephen L. Macknik; Susana Martinez-Conde Saccadic velocity as an arousal index in naturalistic tasks Journal Article In: Neuroscience and Biobehavioral Reviews, vol. 37, no. 5, pp. 968–975, 2013. @article{DiStasi2013b, Experimental evidence indicates that saccadic metrics vary with task difficulty and time-on-task in naturalistic scenarios. We explore historical and recent findings on the correlation of saccadic velocity with task parameters in clinical, military, and everyday situations, and its potential role in ergonomics. We moreover discuss the hypothesis that changes in saccadic velocity indicate variations in sympathetic nervous system activation; that is, variations in arousal. |
Leandro Luigi Di Stasi; Michael B. McCamy; Andrés Catena; Stephen L. Macknik; José J. Cañas; Susana Martinez-Conde Microsaccade and drift dynamics reflect mental fatigue Journal Article In: European Journal of Neuroscience, vol. 38, no. 3, pp. 2389–2398, 2013. @article{DiStasi2013c, Our eyes are always in motion. Even during periods of relative fixation we produce so-called 'fixational eye movements', which include microsaccades, drift and tremor. Mental fatigue can modulate saccade dynamics, but its effects on microsaccades and drift are unknown. Here we asked human subjects to perform a prolonged and demanding visual search task (a simplified air traffic control task), with two difficulty levels, under both free-viewing and fixation conditions. Saccadic and microsaccadic velocity decreased with time-on-task whereas drift velocity increased, suggesting that ocular instability increases with mental fatigue. Task difficulty did not influence eye movements despite affecting reaction times, performance errors and subjective complexity ratings. We propose that variations in eye movement dynamics with time-on-task are consistent with the activation of the brain's sleep centers in correlation with mental fatigue. Covariation of saccadic and microsaccadic parameters moreover supports the hypothesis of a common generator for microsaccades and saccades. We conclude that changes in fixational and saccadic dynamics can indicate mental fatigue due to time-on-task, irrespective of task complexity. These findings suggest that fixational eye movement dynamics have the potential to signal the nervous system's activation state. |
Francisco M. Costela; Michael B. McCamy; Stephen L. Macknik; Jorge Otero-Millan; Susana Martinez-Conde Microsaccades restore the visibility of minute foveal targets Journal Article In: PeerJ, vol. 1, pp. 1–14, 2013. @article{Costela2013, Stationary targets can fade perceptually during steady visual fixation, a phenomenon known as Troxler fading. Recent research found that microsaccades (small, involuntary saccades produced during attempted fixation) can restore the visibility of faded targets, both in the visual periphery and in the fovea. Because the targets tested previously extended beyond the foveal area, however, the ability of microsaccades to restore the visibility of foveally-contained targets remains unclear. Here, subjects reported the visibility of low-to-moderate contrast targets contained entirely within the fovea during attempted fixation. The targets did not change physically, but their visibility varied intermittently during fixation, in an illusory fashion (i.e., foveal Troxler fading). Microsaccade rates increased significantly before the targets became visible, and decreased significantly before the targets faded, for a variety of target contrasts. These results support previous research linking microsaccade onsets to the visual restoration of peripheral and foveal targets, and extend the former conclusions to minute targets contained entirely within the fovea. Our findings suggest that the involuntary eye movements produced during attempted fixation do not always prevent fading, in either the fovea or the periphery, and that microsaccades can restore perception when fading does occur. Therefore, microsaccades are relevant to human perception of foveal stimuli. |
M. Gabriela Costello; Dantong Zhu; Emilio Salinas; Terrence R. Stanford Perceptual modulation of motor, but not visual, responses in the frontal eye field during an urgent-decision task Journal Article In: Journal of Neuroscience, vol. 33, no. 41, pp. 16394–16408, 2013. @article{Costello2013, Neuronal activity in the frontal eye field (FEF) ranges from purely motor (related to saccade production) to purely visual (related to stimulus presence). According to numerous studies, visual responses correlate strongly with early perceptual analysis of the visual scene, including the deployment of spatial attention, whereas motor responses do not. Thus, functionally, the consensus is that visually responsive FEF neurons select a target among visible objects, whereas motor-related neurons plan specific eye movements based on such earlier target selection. However, these conclusions are based on behavioral tasks that themselves promote a serial arrangement of perceptual analysis followed by motor planning. So, is the presumed functional hierarchy in FEF an intrinsic property of its circuitry or does it reflect just one possible mode of operation? We investigate this in monkeys performing a rapid-choice task in which, crucially, motor planning always starts ahead of task-critical perceptual analysis, and the two relevant spatial locations are equally informative and equally likely to be target or distracter. We find that the choice is instantiated in FEF as a competition between oculomotor plans, in agreement with model predictions. Notably, although perception strongly influences the motor neurons, it has little if any measurable impact on the visual cells; more generally, the more dominant the visual response, the weaker the perceptual modulation. The results indicate that, contrary to expectations, during rapid saccadic choices perceptual information may directly modulate ongoing saccadic plans, and this process is not contingent on prior selection of the saccadic goal by visually driven FEF responses. |
Christopher D. Cowper-Smith; Gail A. Eskes; David A. Westwood Motor inhibition of return can affect prepared reaching movements Journal Article In: Neuroscience Letters, vol. 541, pp. 83–86, 2013. @article{CowperSmith2013a, Inhibition of return (IOR) is a widely studied phenomenon that is thought to affect attention, eye movements, or reaching movements, in order to promote orienting responses toward novel stimuli. Previous research in our laboratory demonstrated that the motor form of saccadic IOR can arise from late-stage response execution processes. In the present study, we were interested in whether the same is true of reaching responses. If IOR can emerge from processes operating at or around the time of response execution, then IOR should be observed even when participants have fully prepared their responses in advance of the movement initiation signal. Similar to the saccadic system, our results reveal that IOR can be implemented as a late-stage execution bias in the reaching control system. |
Christopher D. Cowper-Smith; Jonathan W. Harris; Gail A. Eskes; David A. Westwood Spatial interactions between successive eye and arm movements: Signal type matters Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e58850, 2013. @article{CowperSmith2013b, Spatial interactions between consecutive movements are often attributed to inhibition of return (IOR), a phenomenon in which responses to previously signalled locations are slower than responses to unsignalled locations. In two experiments using peripheral target signals offset by 0°, 90°, or 180°, we show that consecutive saccadic (Experiment 1) and reaching (Experiment 3) responses exhibit a monotonic pattern of reaction times consistent with the currently established spatial distribution of IOR. In contrast, in two experiments with central target signals (i.e., arrowheads pointing at target locations), we find a non-monotonic pattern of reaction times for saccades (Experiment 2) and reaching movements (Experiment 4). The difference in the patterns of results observed demonstrates different behavioral effects that depend on signal type. The pattern of results observed for central stimuli are consistent with a model in which neural adaptation is occurring within motor networks encoding movement direction in a distributed manner. |
Christopher D. Cowper-Smith; David A. Westwood Motor IOR revealed for reaching Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 8, pp. 1914–1922, 2013. @article{CowperSmith2013, Inhibition of return (IOR) is a spatial phenomenon that is thought to promote visual search functions by biasing attention and eye movements toward novel locations. Considerable research suggests distinct sensory and motor flavors of IOR, but it is not clear whether the motor type can affect responses other than eye movements. Most studies claiming to reveal motor IOR in the reaching control system have been confounded by their use of peripheral signals, which can invoke sensory rather than motor-based inhibitory effects. Other studies have used central signals to focus on motor, rather than sensory, effects in arm movements but have failed to observe IOR and have concluded that the motor form of IOR is restricted to the oculomotor system. Here, we show the first clear evidence that motor IOR can be observed for reaching movements when participants respond to consecutive central stimuli. This observation suggests that motor IOR serves a more general function than the facilitation of visual search, perhaps reducing the likelihood of engaging in repetitive behavior. |
Michele A. Cox; Michael C. Schmid; Andrew J. Peters; Richard C. Saunders; David A. Leopold; Alexander Maier Receptive field focus of visual area V4 neurons determines responses to illusory surfaces Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 42, pp. 17095–17100, 2013. @article{Cox2013, Illusory figures demonstrate the visual system's ability to infer surfaces under conditions of fragmented sensory input. To investigate the role of midlevel visual area V4 in visual surface completion, we used multielectrode arrays to measure spiking responses to two types of visual stimuli: Kanizsa patterns that induce the perception of an illusory surface and physically similar control stimuli that do not. Neurons in V4 exhibited stronger and sometimes rhythmic spiking responses for the illusion-promoting configurations compared with controls. Moreover, this elevated response depended on the precise alignment of the neuron's peak visual field sensitivity (receptive field focus) with the illusory surface itself. Neurons whose receptive field focus was over adjacent inducing elements, less than 1.5° away, did not show response enhancement to the illusion. Neither receptive field sizes nor fixational eye movements could account for this effect, which was present in both single-unit signals and multiunit activity. These results suggest that the active perceptual completion of surfaces and shapes, which is a fundamental problem in natural visual experience, draws upon the selective enhancement of activity within a distinct subpopulation of neurons in cortical area V4. |
Abbie L. Coy; Samuel B. Hutton Lateral asymmetry in saccadic eye movements during face processing: The role of individual differences in schizotypy Journal Article In: Cognitive Neuroscience, vol. 4, no. 2, pp. 66–72, 2013. @article{Coy2013, Healthy individuals with high as compared to low levels of schizotypal personality traits make more first saccades to the left side of faces, suggesting increased right hemisphere (RH) dominance for face processing. Patients with schizophrenia, however, show attenuated or reversed RH dominance for face processing. It is unclear whether the increased RH dominance found in high schizotypes is specific to face processing or whether it is also observable for other stimuli matched in terms of low-level visual properties. We measured gaze to faces and symmetrical fractal patterns and found that higher Magical Ideation (MI) scores were associated with an increased left-side bias for initial saccade landing points and dwell times when free-viewing faces. These laterality biases were unaffected by facial emotion. Schizotypy scores were not related to laterality biases when viewing fractals. Our results provide further evidence that high schizotypy is associated with an increase in RH dominance for face processing. |
Abbie L. Coy; Samuel B. Hutton The influence of hallucination proneness and social threat on time perception Journal Article In: Cognitive Neuropsychiatry, vol. 18, no. 6, pp. 463–476, 2013. @article{Coy2013a, Introduction. Individuals with schizophrenia frequently report disturbances in time perception, but the precise nature of such deficits and their relation to specific symptoms of the disorder is unclear. We sought to determine the relationship between hallucination proneness and time perception in healthy individuals, and whether this relationship is moderated by hypervigilance to threat-related stimuli. Methods. 206 participants completed the Revised Launay-Slade Hallucination Scale (LSHS-R) and a time reproduction task in which, on each trial, participants viewed a face (happy, angry, neutral, or fearful) for between 1 and 5 s and then reproduced the time period with a spacebar press. Results. High LSHS-R scores were associated with longer time estimates, but only during exposure to angry faces. A factor analysis of LSHS-R scores identified a factor comprising items related to reality monitoring, and this factor was most associated with the longer time estimates. Conclusions. During exposure to potential threat in the environment, duration estimates increase with hallucination proneness. The experience of feeling exposed to threat for longer may serve to maintain a state of hypervigilance which has been shown previously to be associated with positive symptoms of schizophrenia. |
Joao C. Dias; Paul Sajda; J. P. Dmochowski; Lucas C. Parra EEG precursors of detected and missed targets during free-viewing search Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 1–19, 2013. @article{Dias2013, When scanning a scene, the target of our search may be in plain sight and yet remain unperceived. Conversely, at other times the target may be perceived in the periphery prior to fixation. There is ample behavioral and neurophysiological evidence to suggest that in some constrained visual-search tasks, targets are detected prior to fixational eye movements. However, limited human data are available during unconstrained search to determine the time course of detection, the brain areas involved, and the neural correlates of failures to detect a foveated target. Here, we recorded and analyzed electroencephalographic (EEG) activity during free-viewing visual search, varying the task difficulty to compare neural signatures for detected and unreported ("missed") targets. When carefully controlled to remove eye-movement-related potentials, saccade-locked EEG shows that: (a) "Easy" targets may be detected as early as 150 ms prior to foveation, as indicated by a premotor potential associated with a button response; (b) object-discriminating occipital activity emerges during the saccade to target; and (c) success and failures to detect a target are accompanied by a modulation in alpha-band power over fronto-central areas as well as altered saccade dynamics. Taken together, these data suggest that target detection during free viewing can begin prior to and continue during a saccade, with failure or success in reporting a target possibly resulting from inhibition or activation of fronto-central processing areas associated with saccade control. |
Christopher A. Dickinson; Gregory J. Zelinsky New evidence for strategic differences between static and dynamic search tasks: An individual observer analysis of eye movements Journal Article In: Frontiers in Psychology, vol. 4, pp. 8, 2013. @article{Dickinson2013, Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers' oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d' values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. |
Brian W. Dillon; Alan Mishler; Shayne Sloggett; Colin Phillips Contrasting interference profiles for agreement and anaphora: Experimental and modeling evidence Journal Article In: Journal of Memory and Language, vol. 69, no. 2, pp. 85–103, 2013. @article{Dillon2013, We investigated the relationship between linguistic representation and memory access by comparing the processing of two linguistic dependencies that require comprehenders to check that the subject of the current clause has the correct morphological features: subject–verb agreement and reflexive anaphors in English. In two eye-tracking experiments we examined the impact of structurally illicit noun phrases on the computation of reflexive and subject–verb agreement. Experiment 1 directly compared the two dependencies within participants. Results show a clear difference in the intrusion profile associated with each dependency: agreement resolution displays clear intrusion effects in comprehension (as found by Pearlmutter et al., 1999, Wagers et al., 2009), but reflexives show no such intrusion effect from illicit antecedents (Sturt, 2003, Xiang et al., 2009). Experiment 2 replicated the lack of intrusion for reflexives, confirming the reliability of the pattern and examining a wider range of feature combinations. In addition, we present modeling evidence that suggests that the reflexive results are best captured by a memory retrieval mechanism that uses primarily syntactic information to guide retrievals for the anaphor's antecedent, in contrast to the mixed morphological and syntactic cues used to resolve subject–verb agreement dependencies. Despite the fact that agreement and reflexive dependencies are subject to a similar morphological agreement constraint, in online processing comprehenders appear to implement this constraint in distinct ways for the two dependencies. |
Steve DiPaola; Caitlin Riebe; James T. Enns Following the masters: Portrait viewing and appreciation is guided by selective detail Journal Article In: Perception, vol. 42, no. 6, pp. 608–630, 2013. @article{Dipaola2013, A painted portrait differs from a photo in that selected regions are often rendered in much sharper detail than other regions. Artists believe these choices guide viewer gaze and influence their appreciation of the portrait, but these claims are difficult to test because increased portrait detail is typically associated with greater meaning, stronger lighting, and a more central location in the composition. In three experiments we monitored viewer gaze and recorded viewer preferences for portraits rendered with a parameterised non-photorealistic technique to mimic the style of Rembrandt (DiPaola, 2009 International Journal of Art and Technology 2 82-93). Results showed that viewer gaze was attracted to and held longer by regions of relatively finer detail (experiment 1), and also by textural highlighting (experiment 2), and that artistic appreciation increased when portraits strongly biased gaze (experiment 3). These findings have implications for understanding both human vision science and visual art. |
Michael Dorr; Peter J. Bex Peri-saccadic natural vision Journal Article In: Journal of Neuroscience, vol. 33, no. 3, pp. 1211–1217, 2013. @article{Dorr2013, The fundamental role of the visual system is to guide behavior in natural environments. To optimize information transmission, many animals have evolved a non-homogeneous retina and serially sample visual scenes by saccadic eye movements. Such eye movements, however, introduce high-speed retinal motion and decouple external and internal reference frames. Until now, these processes have only been studied with unnatural stimuli, eye movement behavior, and tasks. These experiments confound retinotopic and geotopic coordinate systems and may probe a non-representative functional range. Here we develop a real-time, gaze-contingent display with precise spatiotemporal control over high-definition natural movies. In an active condition, human observers freely watched nature documentaries and indicated the location of periodic narrow-band contrast increments relative to their gaze position. In a passive condition under central fixation, the same retinal input was replayed to each observer by updating the video's screen position. Comparison of visual sensitivity between conditions revealed three mechanisms that the visual system has adapted to compensate for peri-saccadic vision changes. Under natural conditions we show that reduced visual sensitivity during eye movements can be explained simply by the high retinal speed during a saccade without recourse to an extra-retinal mechanism of active suppression; we give evidence for enhanced sensitivity immediately after an eye movement indicative of visual receptive fields remapping in anticipation of forthcoming spatial structure; and we demonstrate that perceptual decisions can be made in world rather than retinal coordinates. |
Trafton Drew; Melissa L.-H. Võ; Alex Olwal; Francine Jacobson; Steven E. Seltzer; Jeremy M. Wolfe Scanners and drillers: Characterizing expert visual search through volumetric images Journal Article In: Journal of Vision, vol. 13, no. 10, pp. 1–13, 2013. @article{Drew2013, Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a "stack" of 2-D chest CT "slices." At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: "drilling" and "scanning." Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. |
C. Cavina-Pratesi; Constanze Hesse Why do the eyes prefer the index finger? Simultaneous recording of eye and hand movements during precision grasping Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–15, 2013. @article{CavinaPratesi2013, Previous research investigating eye movements when grasping objects with precision grip has shown that we tend to fixate close to the contact position of the index finger on the object. It has been hypothesized that this behavior is related to the fact that the index finger usually describes a more variable trajectory than the thumb and therefore requires a higher amount of visual monitoring. We wished to directly test this prediction by creating a grasping task in which either the index finger or the thumb described a more variable trajectory. Experiment 1 showed that the trajectory variability of the digits can be manipulated by altering the direction from which the hand approaches the object. If the start position is located in front of the object (hand-before), the index finger produces a more variable trajectory. In contrast, when the hand approaches the object from a starting position located behind it (hand-behind), the thumb produces a more variable movement path. In Experiment 2, we tested whether the fixation pattern during grasping is altered in conditions in which the trajectory variability of the two digits is reversed. Results suggest that regardless of the trajectory variability, the gaze was always directed toward the contact position of the index finger. Notably, we observed that regardless of our starting position manipulation, the index finger was the first digit to make contact with the object. Hence, we argue that time to contact (and not movement variability) is the crucial parameter which determines where we look during grasping. |
Cindy Chamberland; Jean Saint-Aubin; Marie Andrée Légère The impact of text repetition on content and function words during reading: Further evidence from eye movements Journal Article In: Canadian Journal of Experimental Psychology, vol. 67, no. 2, pp. 94–99, 2013. @article{Chamberland2013, There is ample evidence that reading speed increases when participants read the same text more than once. However, less is known about the impact of text repetition as a function of word class. Some authors suggested that text repetition would mostly benefit content words with little or no effect on function words. In the present study, we examined the effect of multiple readings on the processing of content and function words. Participants were asked to read a short text two times in direct succession. Eye movement analyses revealed the typical multiple readings effect: Repetition decreased the time readers spent fixating words and the probability of fixating critical words. Most importantly, we found that the effect of multiple readings was of the same magnitude for content and function words, and for low- and high-frequency words. Such findings suggest that lexical variables have additive effects on eye movement measures in reading. |
Myriam Chanceaux; Jonathan Grainger Constraints on letter-in-string identification in peripheral vision: Effects of number of flankers and deployment of attention Journal Article In: Frontiers in Psychology, vol. 4, pp. 119, 2013. @article{Chanceaux2013, Effects of non-adjacent flanking elements on crowding of letter stimuli were examined in experiments manipulating the number of flanking elements and the deployment of spatial attention. To this end, identification accuracy of single letters was compared with identification of letter targets surrounded by two, four, or six flanking elements placed symmetrically left and right of the target. Target stimuli were presented left or right of a central fixation, and appeared either unilaterally or with an equivalent number of characters in the contralateral visual field (bilateral presentation). Experiment 1A tested letter targets with random letter flankers, and Experiments 1B and 2 tested letter targets with Xs as flanking stimuli. The results revealed a number of flankers effect that extended beyond standard two-flanker crowding. Flanker interference was stronger with random letter flankers compared with homogeneous Xs, and performance was systematically better under unilateral presentation conditions compared with bilateral presentation. Furthermore, the difference between the zero-flanker and two-flanker conditions was significantly greater under bilateral presentation, whereas the difference between two-flankers and four-flankers did not differ across unilateral and bilateral presentation. The complete pattern of results can be captured by the independent contributions of excessive feature integration and deployment of spatial attention to letter-in-string visibility. |
Myriam Chanceaux; Sebastiaan Mathôt; Jonathan Grainger Flank to the left, flank to the right: Testing the modified receptive field hypothesis of letter-specific crowding Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 6, pp. 774–780, 2013. @article{Chanceaux2013a, The present study tested for effects of number of flankers positioned to the left and to the right of target characters as a function of visual field and stimulus type (letters or shapes). On the basis of the modified receptive field hypothesis (Chanceaux & Grainger, 2012), we predicted that the greatest effects of flanker interference would occur for leftward flankers with letter targets in the left visual field. Target letters and simple shape stimuli were briefly presented and accompanied by either 1, 2, or 3 flankers of the same category either to the left or to the right of the target, and in all conditions with a single flanker on the opposite side. Targets were presented in the left or right visual field at a fixed eccentricity, such that targets and flankers always fell into the same visual field. Results showed greatest interference for leftward flankers associated with letter targets in the left visual field, as predicted by the modified receptive field hypothesis. |
Steve W. C. Chang; Jean-François Gariépy; Michael L. Platt Neuronal reference frames for social decisions in primate frontal cortex Journal Article In: Nature Neuroscience, vol. 16, no. 2, pp. 243–250, 2013. @article{Chang2013, Social decisions are crucial for the success of individuals and the groups that they comprise. Group members respond vicariously to benefits obtained by others, and impairments in this capacity contribute to neuropsychiatric disorders such as autism and sociopathy. We examined the manner in which neurons in three frontal cortical areas encoded the outcomes of social decisions as monkeys performed a reward-allocation task. Neurons in the orbitofrontal cortex (OFC) predominantly encoded rewards that were delivered to oneself. Neurons in the anterior cingulate gyrus (ACCg) encoded reward allocations to the other monkey, to oneself or to both. Neurons in the anterior cingulate sulcus (ACCs) signaled reward allocations to the other monkey or to no one. In this network of received (OFC) and foregone (ACCs) reward signaling, ACCg emerged as an important nexus for the computation of shared experience and social reward. Individual and species-specific variations in social decision-making might result from the relative activation and influence of these areas. |
Chih-Yang Chen; Ziad M. Hafed Postmicrosaccadic enhancement of slow eye movements Journal Article In: Journal of Neuroscience, vol. 33, no. 12, pp. 5375–5386, 2013. @article{Chen2013, Active sensation poses unique challenges to sensory systems because moving the sensor necessarily alters the input sensory stream. Sensory input quality is additionally compromised if the sensor moves rapidly, as during rapid eye movements, making the period immediately after the movement critical for recovering reliable sensation. Here, we studied this immediate postmovement interval for the case of microsaccades during fixation, which rapidly jitter the "sensor" exactly when it is being voluntarily stabilized to maintain clear vision. We characterized retinal-image slip in monkeys immediately after microsaccades by analyzing postmovement ocular drifts. We observed enhanced ocular drifts by up to ~28% relative to premicrosaccade levels, and for up to ~50 ms after movement end. Moreover, we used a technique to trigger full-field image motion contingent on real-time microsaccade detection, and we used the initial ocular following response to this motion as a proxy for changes in early visual motion processing caused by microsaccades. When the full-field image motion started during microsaccades, ocular following was strongly suppressed, consistent with detrimental retinal effects of the movements. However, when the motion started after microsaccades, there was up to ~73% increase in ocular following speed, suggesting an enhanced motion sensitivity. These results suggest that the interface between even the smallest possible saccades and "fixation" includes a period of faster than usual image slip, as well as an enhanced responsiveness to image motion, and that both of these phenomena need to be considered when interpreting the pervasive neural and perceptual modulations frequently observed around the time of microsaccades. |
Mitchell J. Callan; Heather J. Ferguson; Markus Bindemann Eye movements to audiovisual scenes reveal expectations of a just world Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 34–40, 2013. @article{Callan2013, When confronted with bad things happening to good people, observers often engage reactive strategies, such as victim derogation, to maintain a belief in a just world. Although such reasoning is usually made retrospectively, we investigated the extent to which knowledge of another person's good or bad behavior can also bias people's online expectations for subsequent good or bad outcomes. Using a fully crossed design, participants listened to auditory scenarios that varied in terms of whether the characters engaged in morally good or bad behavior while their eye movements were tracked around concurrent visual scenes depicting good and bad outcomes. We found that the good (bad) behavior of the characters influenced gaze preferences for good (bad) outcomes just prior to the actual outcomes being revealed. These findings suggest that beliefs about a person's moral worth encourage observers to foresee a preferred deserved outcome as the event unfolds. We include evidence to show that this effect cannot be explained in terms of affective priming or matching strategies. |
Manuel G. Calvo; Andrés Fernández-Martín Can the eyes reveal a person's emotions? Biasing role of the mouth expression Journal Article In: Motivation and Emotion, vol. 37, no. 1, pp. 202–211, 2013. @article{Calvo2013, In this study we investigated how perception of the eye expression in a face is influenced by the mouth expression, even when only the eyes are directly looked at. The same eyes appeared in a face with either an incongruent smiling, angry, or sad mouth, a congruent mouth, or no mouth. Attention was directed to the eyes by means of cueing and there were no fixations on the mouth. Participants evaluated whether the eyes were happy (or angry, or sad) or not. Results indicated that the smile biased the evaluation of the eyes towards happiness to a greater extent than an angry or a sad mouth did towards anger or sadness. The smiling mouth was also more visually salient than the angry and the sad mouths. We conclude that the role of the eyes as a 'window' to a person's emotional and motivational state is constrained and distorted by the configural projection of an expressive mouth, and that this effect is enhanced by the high visual saliency of the smile. |
Manuel G. Calvo; Andrés Fernández-Martín; Lauri Nummenmaa A smile biases the recognition of eye expressions: Configural projection from a salient mouth Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1159–1181, 2013. @article{Calvo2013a, A smile is visually highly salient and grabs attention automatically. We investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent non-happy (fearful, neutral, etc.) eyes, thus producing blended expressions, or it appeared in intact faces with genuine expressions. Attention to the eye region was spatially cued while foveal vision of the mouth was blocked by gaze-contingent masking. Participants judged whether the eyes were happy or not. Results indicated that the smile biased the evaluation of the eye expression: The same non-happy eyes were more likely to be judged as happy and categorized more slowly as not happy in a face with a smiling mouth than in a face with a non-smiling mouth or with no mouth. This bias occurred when the mouth and the eyes appeared simultaneously and aligned, but also to some extent when they were misaligned and when the mouth appeared after the eyes. We conclude that the highly salient smile projects to other facial regions, thus influencing the perception of the eye expression. Projection serves spatial and temporal integration of face parts and changes. |
Manuel G. Calvo; Aida Gutiérrez-García; Pedro Avero; Daniel Lundqvist Attentional mechanisms in judging genuine and fake smiles: Eye-movement patterns Journal Article In: Emotion, vol. 13, no. 4, pp. 792–802, 2013. @article{Calvo2013b, We investigated the visual attention patterns (i.e., where, when, how frequently, and how long viewers look at each face region) for faces with (a) genuine, enjoyment smiles (i.e., a smiling mouth and happy eyes with the Duchenne marker), (b) fake, nonenjoyment smiles (a smiling mouth but nonhappy eyes: neutral, surprised, fearful, sad, disgusted, or angry), or (c) no smile (and nonhappy eyes). Viewers evaluated whether the faces conveyed happiness ("felt happy") or not, while eye movements were monitored. Results indicated, first, that the smiling mouth captured the first fixation more likely and faster than the eyes, regardless of type of eyes. This reveals similar attentional orienting to genuine and fake smiles. Second, the mouth and, especially, the eyes of faces with fake smiles received more fixations and longer dwell times than those of faces with genuine smiles. This reveals attentional engagement, with a processing cost for fake smiles. Finally, when the mouth of faces with fake smiles was fixated earlier than the eyes, the face was likely to be judged as genuinely happy. This suggests that the first fixation on the smiling mouth biases the viewer to misinterpret the emotional state underlying blended expressions. |
E. Camara; Sanjay G. Manohar; Masud Husain Past rewards capture spatial attention and action choices Journal Article In: Experimental Brain Research, vol. 230, no. 3, pp. 291–300, 2013. @article{Camara2013, The desire to increase rewards and minimize punishing events is a powerful driver in behaviour. Here, we assess how the value of a location affects subsequent deployment of goal-directed attention as well as involuntary capture of attention on a trial-to-trial basis. By tracking eye position, we investigated whether the ability of an irrelevant, salient visual stimulus to capture gaze (stimulus-driven attention) is modulated by that location's previous value. We found that distractors draw attention to them significantly more if they appear at a location previously associated with a reward, even when gazing towards them now leads to punishments. Within the same experiment, it was possible to demonstrate that a location associated with a reward can also bias subsequent goal-directed attention (indexed by action choices) towards it. Moreover, individuals who were vulnerable to being distracted by previous reward history, as indexed by oculomotor capture, were also more likely to direct their actions to those locations when they had a free choice. Even when the number of initial responses made to rewarded and punished stimuli was equalized, the effects of previous reward history on both distractibility and action choices remained. Finally, a covert attention task requiring button-press responses rather than overt gaze shifts demonstrated the same pattern of findings. Thus, past rewards can act to modulate both subsequent stimulus-driven as well as goal-directed attention. These findings reveal that there can be surprising short-term costs of using reward cues to regulate behaviour. They show that current valence information, if maintained inappropriately, can have negative subsequent effects, with attention and action choices being vulnerable to capture and bias, mechanisms that are of potential importance in understanding distractibility and abnormal action choices. |