All EyeLink Publications
Listed below are all of the 12,000+ peer-reviewed EyeLink research publications up to 2023 (with early 2024 also included). You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking research grouped by area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2013 |
Jingxin Wang; Jing Tian; Rong Wang; Valerie Benson Increased attentional focus modulates eye movements in a mixed antisaccade task for younger and older adults Journal Article In: PLoS ONE, vol. 8, no. 4, pp. e61566, 2013. @article{Wang2013b, We examined performance in the antisaccade task for younger and older adults by comparing latencies and errors in what we defined as high attentional focus (mixed antisaccades and prosaccades in the same block) and low attentional focus (antisaccades and prosaccades in separate blocks) conditions. Shorter saccade latencies for correctly executed eye movements were observed for both groups in mixed, compared to blocked, antisaccade tasks, but antisaccade error rates were higher for older participants across both conditions. The results are discussed in relation to the inhibitory hypothesis, the goal neglect theory and attentional control theory. |
Suiping Wang; Deyuan Mo; Ming Xiang; Ruiping Xu; Hsuan-Chih Chen The time course of semantic and syntactic processing in reading Chinese: Evidence from ERPs Journal Article In: Language and Cognitive Processes, vol. 28, no. 4, pp. 577–596, 2013. @article{Wang2013c, The time course of semantic and syntactic processing in reading Chinese was examined by recording event-related brain potentials (ERPs) as native Chinese speakers read individually presented sentences for comprehension and performed semantic plausibility judgments. The transitivity of the verbs in Chinese ba/bei constructions was manipulated to form three types of stimuli: Congruent sentences (CON), sentences with semantic violation (SEM), and sentences with combined semantic and syntactic violation (SEM+SYN). Compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400-P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was bigger than that in SEM. Overall, the lack of a difference between SEM and SEM+SYN in the earlier time window (i.e., N400 window) suggested that syntactic processing in Chinese does not necessarily occur earlier than semantic processing. |
David E. Warren; Matthew J. Thurtell; Joy N. Carroll; Michael Wall In: Investigative Ophthalmology & Visual Science, vol. 54, no. 8, pp. 5778–5787, 2013. @article{Warren2013, PURPOSE. Using a novel automated perimetry technique, we tested the hypothesis that older adults will have increased latency and decreased accuracy of saccades, as well as higher visual thresholds, to peripheral visual stimuli when compared with younger adults. METHODS. We tested 20 healthy subjects aged 18 to 30 years ("young") and 21 healthy subjects at least 60 years old ("older") for detection of briefly flashed peripheral stimuli of differing sizes in eight locations along the horizontal meridian (±4°, ±12°, ±20°, and ±28°). With the left eye occluded, subjects were instructed to look quickly toward any seen stimuli. Right eye movements were recorded with an EyeLink 1000 infrared camera system. Limiting our analysis to the four stimulus positions in the nasal hemifield (−4°, −12°, −20°, and −28°), we evaluated for group-level differences in saccadic latency, accuracy, and visual threshold at each stimulus location. RESULTS. Saccadic latency increased as stimulus size decreased in both groups. Older subjects had significantly increased saccadic latencies (at all locations; P < 0.05), decreased accuracies (at all locations; P < 0.05), and higher visual thresholds (at the −12°, −20°, and −28° locations; P < 0.05). Additionally, there were significant relationships between visual threshold and latency, visual threshold and accuracy, and latency and accuracy (P < 0.0001). CONCLUSIONS. Older adults have increased latency and decreased accuracy of saccades, as well as higher visual thresholds, to peripheral visual stimuli when compared with younger adults. Saccadic latency and accuracy are related to visual threshold, suggesting that saccadic latency and accuracy could be useful as perimetric outcome measures. |
S. V. Wass; T. J. Smith; M. H. Johnson Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults Journal Article In: Behavior Research Methods, vol. 45, no. 1, pp. 229–250, 2013. @article{Wass2013, Researchers studying infants' spontaneous allocation of attention have traditionally relied on hand-coding infants' direction of gaze from videos; these techniques have low temporal and spatial resolution and are labor intensive. Eye-tracking technology potentially allows for much more precise measurement of how attention is allocated at the subsecond scale, but a number of technical and methodological issues have given rise to caution about the quality and reliability of high temporal resolution data obtained from infants. We present analyses suggesting that when standard dispersal-based fixation detection algorithms are used to parse eye-tracking data obtained from infants, the results appear to be heavily influenced by interindividual variations in data quality. We discuss the causes of these artifacts, including fragmentary fixations arising from flickery or unreliable contact with the eyetracker and variable degrees of imprecision in reported position of gaze. We also present new algorithms designed to cope with these problems by including a number of new post hoc verification checks to identify and eliminate fixations that may be artifactual. We assess the results of our algorithms by testing their reliability using a variety of methods and on several data sets. We contend that, with appropriate data analysis methods, fixation duration can be a reliable and stable measure in infants. We conclude by discussing ways in which studying fixation durations during unconstrained orienting may offer insights into the relationship between attention and learning in naturalistic settings. |
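The "standard dispersal-based fixation detection" that Wass et al. take as their starting point is commonly implemented as the dispersion-threshold (I-DT) algorithm of Salvucci and Goldberg (2000). As a rough illustration of what such a parser does (this is a generic textbook sketch, not the authors' algorithm, and the threshold values are purely illustrative):

```python
def dispersion(window):
    """Dispersion of a window of (x, y) gaze samples: the sum of the
    horizontal and vertical extents of the window."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=5):
    """Dispersion-threshold (I-DT) fixation detection.
    `samples` is a sequence of (x, y) gaze positions at a fixed
    sampling rate; `max_dispersion` is in the same units as the
    samples and `min_duration` is in samples (illustrative values).
    Returns (start_index, end_index, centroid) tuples."""
    fixations = []
    i, n = 0, len(samples)
    while i + min_duration <= n:
        if dispersion(samples[i:i + min_duration]) <= max_dispersion:
            # Grow the window while it stays compact enough
            j = i + min_duration
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            fixations.append((i, j - 1, (sum(xs) / len(xs), sum(ys) / len(ys))))
            i = j
        else:
            i += 1
    return fixations
```

The paper's point is that this kind of parser, run as-is on noisy infant data, fragments fixations; their post hoc verification checks are layered on top of output like this.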
Ralph Radach Monitoring local comprehension monitoring in sentence reading Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 191–206, 2013. @article{Radach2013, Comprehension monitoring is considered a key issue in current debates on ways to improve children's reading comprehension. However, processes and mechanisms underlying this skill are currently not well understood. This article describes one of the first attempts to study comprehension monitoring using eye-tracking methodology. Students in fifth grade were asked to read sentences for comprehension while also checking whether the meaning of the sentence was generally correct or incorrect. Items required the processing of conjunctive relations between two clauses that were either causally consistent or inconsistent. In addition, the polarity of the relation was varied by replacing the conjunction "because" with "although," creating an additional level of processing difficulty. Inconsistency played a minor role and was dominated by polarity effects that were also modulated by the correctness of the answer. The present task represents an effective tool to study local comprehension monitoring and highlights the importance of conjunctive relations for maintaining textual coherence during reading. |
Ralph Radach; Albrecht W. Inhoff; Lisa Glover; Christian Vorstius Contextual constraint and N+2 preview effects in reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 619–633, 2013. @article{Radach2013a, Extracting linguistic information from locations beyond the currently fixated word is a core component of skilled reading. Recent debate on this topic is focused on the question of whether useful linguistic information can be extracted from more than one (parafoveally visible) word to the right of a fixated word (N). The current study examined this issue through the use of parafoveal previews with a short and high-frequency next (N + 1) word, as this should increase the opportunity for the extraction of useful information from the subsequent (N + 2) word. Pairs of N + 2 words were selected so that contextual constraint was either high or low. Using saccade-contingent display manipulations, preview of an N + 2 target word during word N viewing consisted of either a visually dissimilar nonword or a word. The results revealed a substantial drop in fixation probability for word N + 1 when the N + 2 preview was masked with a nonword. Furthermore, the masking of word N + 2 influenced its viewing duration even when word N + 1 was fixated prior to word N + 2 viewing. These results provide compelling evidence for the view that linguistic processing can encompass more than one word at a time. |
Pavan Ramkumar; Mainak Jas; Sebastian Pannasch; Riitta Hari; Lauri Parkkonen Feature-specific information processing precedes concerted activation in human visual cortex Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7691–7699, 2013. @article{Ramkumar2013, Current knowledge about the precise timing of visual input to the cortex relies largely on spike timings in monkeys and evoked-response latencies in humans. However, quantifying the activation onset does not unambiguously describe the timing of stimulus-feature-specific information processing. Here, we investigated the information content of the early human visual cortical activity by decoding low-level visual features from single-trial magnetoencephalographic (MEG) responses. MEG was measured from nine healthy subjects as they viewed annular sinusoidal gratings (spanning the visual field from 2 to 10° for a duration of 1 s), characterized by spatial frequency (0.33 cycles/degree or 1.33 cycles/degree) and orientation (45° or 135°); gratings were either static or rotated clockwise or anticlockwise from 0 to 180°. Time-resolved classifiers using a 20 ms moving window exceeded chance level at 51 ms (the later edge of the window) for spatial frequency, 65 ms for orientation, and 98 ms for rotation direction. Decoding accuracies of spatial frequency and orientation peaked at 70 and 90 ms, respectively, coinciding with the peaks of the onset evoked responses. Within-subject time-insensitive pattern classifiers decoded spatial frequency and orientation simultaneously (mean accuracy 64%, chance 25%) and rotation direction (mean 82%, chance 50%). Classifiers trained on data from other subjects decoded the spatial frequency (73%), but not the orientation, nor the rotation direction. 
Our results indicate that unaveraged brain responses contain decodable information about low-level visual features already at the time of the earliest cortical evoked responses, and that representations of spatial frequency are highly robust across individuals. |
Sarah J. Rappaport; Glyn W. Humphreys; M. Jane Riddoch The attraction of yellow corn: Reduced attentional constraints on coding learned conjunctive relations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 1016–1031, 2013. @article{Rappaport2013, Physiological evidence indicates that different visual features are computed quasi-independently. The subsequent step of binding features, to generate coherent perception, is typically considered a major rate-limiting process, confined to one location at a time and taking 25 ms per item or longer (A. Treisman & S. Gormican, 1988, Feature analysis in early vision: Evidence from search asymmetries, Psychological Review, Vol. 95, pp. 15-48). We examined whether these processing limitations remain once bindings are learned for familiar objects. Participants searched for objects that could appear either in familiar or unfamiliar colors. Objects in familiar colors were detected efficiently at rates consistent with simultaneous binding across multiple stimuli. Processing limitations were evident for objects in unfamiliar colors. The advantage for the learned color for known targets was eliminated when participants searched for geometric shapes carrying the object colors and when the colors fell in local background areas around the shapes. The effect occurred irrespective of whether the nontargets had familiar colors, but was largest when nontargets had incorrect colors. The efficient search for targets in familiar colors held, even when the search was biased to favor objects in unfamiliar colors. The data indicate that learned bindings can be computed with minimal attentional limitations, consistent with the direct activation of learned conjunctive representations in vision. |
Keith Rayner; Bernhard Angele; Elizabeth R. Schotter; Klinton Bicknell On the processing of canonical word order during eye fixations in reading: Do readers process transposed word previews? Journal Article In: Visual Cognition, vol. 21, no. 3, pp. 353–381, 2013. @article{Rayner2013, Whether readers always identify words in the order they are printed is subject to considerable debate. In the present study, we used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the preview for a two-word target region (e.g. white walls in My neighbor painted the white walls black). Readers received an identical (white walls), transposed (walls white), or unrelated preview (vodka clubs). We found that there was a clear cost of having a transposed preview compared to an identical preview, indicating that readers cannot or do not identify words out of order. However, on some measures, the transposed preview condition did lead to faster processing than the unrelated preview condition, suggesting that readers may be able to obtain some useful information from a transposed preview. Implications of the results for models of eye movement control in reading are discussed. |
Keith Rayner; Jinmian Yang; Susanne Schuett; Timothy J. Slattery Eye movements of older and younger readers when reading unspaced text Journal Article In: Experimental Psychology, vol. 60, no. 5, pp. 354–361, 2013. @article{Rayner2013a, Older and younger readers read normal and unspaced text as their eye movements were monitored. A high or low frequency word was embedded in each sentence. Global analyses yielded large effects of spacing with unspaced text leading to much longer reading times for both groups, but the older readers had much more difficulty with unspaced text than younger readers. Local analyses of the target word revealed large main effects due to age, spacing, and frequency. In general, the older readers had more difficulty with the unspaced text than younger readers and some reasons why they did so are suggested. |
Jason Satel; Matthew D. Hilchey; Zhiguo Wang; Ross Story; Raymond M. Klein The effects of ignored versus foveated cues upon inhibition of return: An event-related potential study Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 1, pp. 29–40, 2013. @article{Satel2013, Taylor and Klein (Journal of Experimental Psychology: Human Perception and Performance 26:1639-1656, 2000) discovered two mutually exclusive "flavors" of inhibition of return (IOR): When the oculomotor system is "actively suppressed," IOR affects input processes (the perception/attention flavor), whereas when the oculomotor system is "engaged," IOR affects output processes (the motor flavor). Studies of brain activity with ignored cues have typically reported that IOR reduces an early sensory event-related potential (ERP) component (i.e., the P1 component) of the brain's response to the target. Since eye movements were discouraged in these experiments, the P1 reduction might be a reflection of the perception/attention flavor of IOR. If, instead of ignoring the cue, participants made a prosaccade to the cue (and then returned to fixation) before responding to the target, the motor flavor of IOR should then be generated. We compared these two conditions while monitoring eye position and recording ERPs to the targets. If the P1 modulation is related to the perceptual/attentional flavor of IOR, we hypothesized that it might be absent when the motoric flavor of IOR was generated by a prosaccade to the cue. Our results demonstrated that target-related P1 reductions and behavioral IOR were similar, and significant, in both conditions. However, P1 modulations were significantly correlated with behavioral IOR only when the oculomotor system was actively suppressed, suggesting that P1 modulations may only affect behaviorally exhibited IOR when the attentional/perceptual flavor of IOR is recruited. |
Steven W. Savage; Douglas D. Potter; Benjamin W. Tatler Does preoccupation impair hazard perception? A simultaneous EEG and eye tracking study Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 17, pp. 52–62, 2013. @article{Savage2013, The aim of this current study was to test the hypothesis that contemplating a recent mobile telephone conversation has a detrimental effect on measures of attentional processing in a driving situation. In this within-subjects design, hazard perception performance was compared between high and no cognitive load conditions (with or without a puzzle to solve). We tested 17 participants, all of whom were required to be in possession of a DVLA approved driving license and had completed the hazard perception portion of the British driving test. A novel dual-task paradigm, which did not require subjects to process or produce verbal information during the primary task, was employed to increase participants' cognitive load. Participants were assessed on three categories of performance measures: behavioural, eye movements and cortical activity between both high and no cognitive load conditions whilst watching 20 clips from a hazard perception test. This study was run in a laboratory of the Psychology Research Wing at the University of Dundee. Behavioural findings from the hazard perception test indicate significantly increased reaction times to hazardous stimuli and significantly increased false alarm rates to non-hazardous stimuli in the high cognitive load condition (when contemplating a previous conversation). Analyses of eye movements indicated significant increases in blink frequencies, higher saccade peak velocities and a significant reduction in the spread of fixations along the horizontal axis. Results from EEG recordings showed a significant increase in frontal and a significant decrease in occipital theta activity within the high load condition. 
Findings were interpreted within the framework of Corbetta, Patel and Shulman's (2008) networks model of attention control. Our findings suggest that preoccupation with a recent conversation negatively influences the modulatory effect of the central executive on both the stimulus-driven and the goal-driven networks of the brain. |
Alec Scharff; John Palmer; Cathleen M. Moore Divided attention limits perception of 3-D object shapes Journal Article In: Journal of Vision, vol. 13, no. 2, pp. 1–24, 2013. @article{Scharff2013, Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited-capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes. |
Christoph Scheepers; Sibylle Mohr; Martin H. Fischer; Andrew M. Roberts Listening to limericks: A pupillometry investigation of perceivers' expectancy Journal Article In: PLoS ONE, vol. 8, no. 9, pp. e74986, 2013. @article{Scheepers2013, What features of a poem make it captivating, and which cognitive mechanisms are sensitive to these features? We addressed these questions experimentally by measuring pupillary responses of 40 participants who listened to a series of Limericks. The Limericks ended with either a semantic, syntactic, rhyme or metric violation. Compared to a control condition without violations, only the rhyme violation condition induced a reliable pupillary response. An anomaly-rating study on the same stimuli showed that all violations were reliably detectable relative to the control condition, but the anomaly induced by rhyme violations was perceived as most severe. Together, our data suggest that rhyme violations in Limericks may induce an emotional response beyond mere anomaly detection. |
Anne Schmechtig; Jane Lees; Lois Grayson; Kevin J. Craig; Rukiya Dadhiwala; Gerard R. Dawson; J. F. William Deakin; Colin T. Dourish; Ivan Koychev; Katrina McMullen; Ellen M. Migo; Charlotte Perry; Lawrence Wilkinson; Robin Morris; Steve C. R. Williams; Ulrich Ettinger Effects of risperidone, amisulpride and nicotine on eye movement control and their modulation by schizotypy Journal Article In: Psychopharmacology, vol. 227, no. 2, pp. 331–345, 2013. @article{Schmechtig2013a, RATIONALE: The increasing demand to develop more efficient compounds to treat cognitive impairments in schizophrenia has led to the development of experimental model systems. One such model system combines the study of surrogate populations expressing high levels of schizotypy with oculomotor biomarkers. OBJECTIVES: We aimed (1) to replicate oculomotor deficits in a psychometric schizotypy sample and (2) to investigate whether the expected deficits can be remedied by compounds shown to ameliorate impairments in schizophrenia. METHODS: In this randomized double-blind, placebo-controlled study 233 healthy participants performed prosaccade (PS), antisaccade (AS) and smooth pursuit eye movement (SPEM) tasks after being randomly assigned to one of four drug groups (nicotine, risperidone, amisulpride, placebo). Participants were classified into medium- and high-schizotypy groups based on their scores on the Schizotypal Personality Questionnaire (SPQ, Raine (Schizophr Bull 17:555-564, 1991)). RESULTS: AS error rate showed a main effect of Drug (p < 0.01), with nicotine improving performance, and a Drug by Schizotypy interaction (p = 0.04), indicating higher error rates in medium schizotypes (p = 0.01) but not high schizotypes under risperidone compared to placebo. High schizotypes had higher error rates than medium schizotypes under placebo (p = 0.03). There was a main effect of Drug for saccadic peak velocity and SPEM velocity gain (both p ≤ 0.01) indicating impaired performance with risperidone. 
CONCLUSIONS: We replicate the observation of AS impairments in high schizotypy under placebo and show that nicotine enhances performance irrespective of group status. Caution should be exerted in applying this model as no beneficial effects of antipsychotics were seen in high schizotypes. |
Anne Schmechtig; Jane Lees; Adam M. Perkins; A. Altavilla; Kevin J. Craig; G. R. Dawson; J. F. William Deakin; Colin T. Dourish; L. H. Evans; Ivan Koychev; K. Weaver; R. Smallman; J. Walters; L. S. Wilkinson; R. Morris; Steve C. R. Williams; Ulrich Ettinger The effects of ketamine and risperidone on eye movement control in healthy volunteers Journal Article In: Translational Psychiatry, vol. 3, pp. e334, 2013. @article{Schmechtig2013, The non-competitive N-methyl-D-aspartate receptor antagonist ketamine leads to transient psychosis-like symptoms and impairments in oculomotor performance in healthy volunteers. This study examined whether the adverse effects of ketamine on oculomotor performance can be reversed by the atypical antipsychotic risperidone. In this randomized double-blind, placebo-controlled study, 72 healthy participants performed smooth pursuit eye movements (SPEM), prosaccades (PS) and antisaccades (AS) while being randomly assigned to one of four drug groups (intravenous 100 ng ml(-1) ketamine, 2 mg oral risperidone, 100 ng ml(-1) ketamine plus 2 mg oral risperidone, placebo). Drug administration did not lead to harmful adverse events. Ketamine increased saccadic frequency and decreased velocity gain of SPEM (all P < 0.01) but had no significant effects on PS or AS (all P ≥ 0.07). An effect of risperidone was observed for amplitude gain and peak velocity of PS and AS, indicating hypometric gain and slower velocities compared with placebo (both P ≤ 0.04). No ketamine by risperidone interactions were found (all P ≥ 0.26). The results confirm that the administration of ketamine produces oculomotor performance deficits similar in part to those seen in schizophrenia. The atypical antipsychotic risperidone did not reverse ketamine-induced deteriorations. 
These findings do not support the cognitive enhancing potential of risperidone on oculomotor biomarkers in this model system of schizophrenia and point towards the importance of developing alternative performance-enhancing compounds to optimise pharmacological treatment of schizophrenia. |
Paul Roux; Christine Passerieux; Franck Ramus Kinematics matters: A new eye-tracking investigation of animated triangles Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 2, pp. 229–244, 2013. @article{Roux2013, Eye movements have been recently recorded in participants watching animated triangles in short movies that normally evoke mentalizing (Frith-Happé animations). Authors have found systematic differences in oculomotor behaviour according to the degree of mental state attribution to these triangles: Participants made longer fixations and looked longer at intentional triangles than at triangles moving randomly. However, no study has yet explored kinematic characteristics of Frith-Happé animations and their influence on eye movements. In a first experiment, we ran a quantitative kinematic analysis of Frith-Happé animations and found that the time triangles spent moving and the distance between them decreased with the mentalistic complexity of their movements. In a second experiment, we recorded eye movements in 17 participants watching Frith-Happé animations and found that some differences in fixation durations and in the proportion of gaze allocated to triangles between the different kinds of animations were entirely explained by low-level kinematic confounds. We finally present a new eye-tracking measure of visual attention, triangle pursuit duration, which does differentiate the different types of animations even after taking into account kinematic confounds. However, some idiosyncratic kinematic properties of the Frith-Happé animations prevent an entirely satisfactory interpretation of these results. The different eye-tracking measures are interpreted as implicit and online measures of the processing of animate movements. |
Donghyun Ryu; Bruce Abernethy; David L. Mann; Jamie M. Poolton; Adam D. Gorman The role of central and peripheral vision in expert decision making Journal Article In: Perception, vol. 42, no. 6, pp. 591–607, 2013. @article{Ryu2013, The purpose of this study was to investigate the role of central and peripheral vision in expert decision making. A gaze-contingent display was used to selectively present information to the central and peripheral areas of the visual field while participants performed a decision-making task. Eleven skilled and eleven less-skilled male basketball players watched video clips of basketball scenarios in three different viewing conditions: full-image control, moving window (central vision only), and moving mask (peripheral vision only). At the conclusion of each clip participants were required to decide whether it was more appropriate for the ball-carrier to pass the ball or to drive to the basket. The skilled players showed significantly higher response accuracy and faster response times compared with their lesser-skilled counterparts in all three viewing conditions, demonstrating superiority in information extraction that held irrespective of whether they were using central or peripheral vision. The gaze behaviour of the skilled players was less influenced by the gaze-contingent manipulations, suggesting they were better able to use the remaining information to sustain their normal gaze behaviour. The superior capacity of experts to interpret dynamic visual information is evident regardless of whether the visual information is presented across the whole visual field or selectively to either central or peripheral vision alone. |
Nuria Sagarra; Nick C. Ellis From seeing adverbs to seeing verbal morphology Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 261–290, 2013. @article{Sagarra2013, Adult learners have persistent difficulty processing second language (L2) inflectional morphology. We investigate associative learning explanations that involve the blocking of later experienced cues by earlier learned ones in the first language (L1; i.e., transfer) and the L2 (i.e., proficiency). Sagarra (2008) and Ellis and Sagarra (2010b) found that, unlike Spanish monolinguals, intermediate English-Spanish learners rely more on salient adverbs than on less salient verb inflections, but it is not clear whether this preference is a result of a default or a L1-based strategy. To address this question, 120 English (poor morphology) and Romanian (rich morphology) learners of Spanish (rich morphology) and 98 English, Romanian, and Spanish monolinguals read sentences in L2 Spanish (or their L1 in the case of the monolinguals) containing adverb-verb and verb-adverb congruencies or incongruencies and chose one of four pictures after each sentence (i.e., two that competed for meaning and two for form). Eye-tracking data revealed significant effects for (a) sensitivity (all participants were sensitive to tense incongruencies), (b) cue location in the sentence (participants spent more time at their preferred cue, regardless of its position), (c) L1 experience (morphologically rich L1 learners and monolinguals looked longer at verbs than morphologically poor L1 learners and monolinguals), and (d) L2 experience (low-proficiency learners read more slowly and regressed longer than high-proficiency learners). We conclude that intermediate and advanced learners are sensitive to tense incongruencies and—like native speakers—tend to rely more heavily on verbs if their L1 is morphologically rich. 
These findings reinforce theories that support transfer effects such as the unified competition model and the associative learning model but do not contradict Clahsen and Felser's (2006a) shallow structure hypothesis because the target structure was morphological agreement rather than syntactic agreement. |
Eryl O. Roberts; Frank A. Proudlock; Kate Martin; Michael A. Reveley; Mohammed Al-Uzri; Irene Gottlob Reading in schizophrenic subjects and their nonsymptomatic first-degree relatives Journal Article In: Schizophrenia Bulletin, vol. 39, no. 4, pp. 896–907, 2013. @article{Roberts2013, Previous studies have demonstrated eye movement abnormalities during smooth pursuit and antisaccadic tasks in schizophrenia. However, eye movements have not been investigated during reading. The purpose of this study was to determine whether schizophrenic subjects and their nonsymptomatic first-degree relatives show eye movement abnormalities during reading. Reading rate, number of saccades per line, amplitudes of saccades, percentage regressions (reverse saccades), and fixation durations were measured using an eye tracker (EyeLink, SensoMotoric Instruments, Germany) in 38 schizophrenic volunteers, 14 nonaffected first-degree relatives, and 57 control volunteers matched for age and National Adult Reading Test scores. Parameters were examined when volunteers read full pages of text and text was limited to progressively smaller viewing areas around the point of fixation using a gaze-contingent window. Schizophrenic volunteers showed significantly slower reading rates (P = .004), increase in total number of saccades (P ≤ .001), and a decrease in saccadic amplitude (P = .025) while reading. Relatives showed a significant increase in total number of saccades (P = .013) and decrease in saccadic amplitude (P = .020). Limitation of parafoveal information by reducing the amount of visible characters did not change the reading rate of schizophrenics but controls showed a significant decrease in reading rate with reduced parafoveal information (P < .001). Eye movement abnormalities during reading of schizophrenic volunteers and their first-degree relatives suggest that visual integration of foveal and parafoveal information may be reduced in schizophrenia. 
Reading abnormalities in relatives suggest a genetic influence in reading ability in schizophrenia and rule out confounding effects of medication. |
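The gaze-contingent window manipulation described above, in which text outside a region around the point of fixation is masked, can be sketched as a simple character mask. This is our illustration, not the study's implementation; the masking character and window convention are assumptions:

```python
def apply_window(text, fixated_index, window_chars):
    """Gaze-contingent moving window: keep `window_chars` characters to
    either side of the fixated character, mask everything else with 'x'
    (spaces are left intact so word boundaries remain visible)."""
    lo = fixated_index - window_chars
    hi = fixated_index + window_chars
    return "".join(c if lo <= i <= hi or c == " " else "x"
                   for i, c in enumerate(text))

# Fixating the 'b' of "brown" with a 3-character window:
print(apply_window("the quick brown fox", 10, 3))  # → xxx xxxck browx xxx
```

In a real experiment the mask would be re-applied on every new fixation reported by the tracker; here the fixated character index is simply passed in.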
James A. Roberts; Guy Wallis; Michael Breakspear Fixational eye movements during viewing of dynamic natural scenes Journal Article In: Frontiers in Psychology, vol. 4, pp. 797, 2013. @article{Roberts2013a, Even during periods of fixation our eyes undergo small amplitude movements. These movements are thought to be essential to the visual system because neural responses rapidly fade when images are stabilized on the retina. The considerable recent interest in fixational eye movements (FEMs) has thus far concentrated on idealized experimental conditions with artificial stimuli and restrained head movements, which are not necessarily a suitable model for natural vision. Natural dynamic stimuli, such as movies, offer the potential to move beyond restrictive experimental settings to probe the visual system with greater ecological validity. Here, we study FEMs recorded in humans during the unconstrained viewing of a dynamic and realistic visual environment, revealing that drift trajectories exhibit the properties of a random walk with memory. Drifts are correlated at short time scales such that the gaze position diverges from the initial fixation more quickly than would be expected for an uncorrelated random walk. We propose a simple model based on the premise that the eye tends to avoid retracing its recent steps to prevent photoreceptor adaptation. The model reproduces key features of the observed dynamics and enables estimation of parameters from data. Our findings show that FEM correlations thought to prevent perceptual fading exist even in highly dynamic real-world conditions. |
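The proposed drift model, a random walk that tends to avoid retracing its recent steps, can be illustrated with a toy simulation (our sketch; all parameters are arbitrary, not the authors' fitted values). It reproduces the key qualitative prediction that gaze diverges from the initial fixation faster than an uncorrelated random walk:

```python
import math
import random
from collections import defaultdict

def walk(n_steps, beta=0.0, decay=0.9, rng=None):
    """2D lattice random walk. With beta > 0 the walker is biased away
    from sites it has visited recently (decaying memory), mimicking a
    drift that avoids retracing its steps."""
    rng = rng or random.Random(0)
    pos = (0, 0)
    memory = defaultdict(float)          # site -> recency-weighted visits
    path = [pos]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_steps):
        for site in list(memory):        # memory of old sites fades
            memory[site] *= decay
        memory[pos] += 1.0
        weights = [math.exp(-beta * memory[(pos[0] + dx, pos[1] + dy)])
                   for dx, dy in moves]
        dx, dy = rng.choices(moves, weights=weights)[0]
        pos = (pos[0] + dx, pos[1] + dy)
        path.append(pos)
    return path

def msd(paths, lag):
    """Mean squared displacement from the starting position after `lag` steps."""
    return sum(p[lag][0] ** 2 + p[lag][1] ** 2 for p in paths) / len(paths)

rng = random.Random(42)
plain = [walk(80, beta=0.0, rng=rng) for _ in range(200)]
avoid = [walk(80, beta=2.0, rng=rng) for _ in range(200)]
# The memory walk diverges from the origin faster than the plain walk
print(msd(plain, 80), msd(avoid, 80))
```

The plain walk's mean squared displacement grows roughly linearly with the number of steps, while the self-avoiding walk is superdiffusive, in line with the correlated-drift behaviour reported in the paper.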
Leah Roberts; Ayumi Matsuo; Nigel Duffield Processing VP-ellipsis and VP-anaphora with structurally parallel and nonparallel antecedents: An eye-tracking study Journal Article In: Language and Cognitive Processes, vol. 28, no. 1-2, pp. 29–47, 2013. @article{Roberts2013b, In this paper, we report on an eye-tracking study investigating the processing of English VP-ellipsis (John took the rubbish out. Fred did [] too) (VPE) and VP- anaphora (John took the rubbish out. Fred did it too) (VPA) constructions, with syntactically parallel versus nonparallel antecedent clauses (e.g., The rubbish was taken out by John. Fred did [] too/Fred did it too). The results show first that VPE involves greater processing costs than VPA overall. Second, although the structural nonparallelism of the antecedent clause elicited a processing cost for both anaphor types, there was a difference in the timing and the strength of this parallelism effect: it was earlier and more fleeting for VPA, as evidenced by regression path times, whereas the effect occurred later with VPE completions, showing up in second and total fixation times measures, and continuing on into the reading of the adjacent text. Taking the observed differences between the processing of the two anaphor types together with other research findings in the literature, we argue that our data support the idea that in the case of VPE, the VP from the antecedent clause necessitates more computation at the elision site before it is linked to its antecedent than is the case for VPA. |
Linsey Roijendijk; Jason Farquhar; Marcel A. J. Gerven; Ole Jensen; Stan Gielen In: PLoS ONE, vol. 8, no. 12, pp. e80489, 2013. @article{Roijendijk2013, OBJECTIVE: Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. APPROACH: We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block-wise and subjects were aware of the difficulty level of each block. MAIN RESULTS: Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. SIGNIFICANCE: Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. 
In contrast to results of previous research, we did not find any effect of eccentricity. |
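A minimal sketch of the hemispheric alpha-lateralization measure on which such covert-attention BCIs rely (our illustration; the index is the standard normalized hemispheric difference, and all values are made up):

```python
def lateralization_index(power_left_hemi, power_right_hemi):
    """Normalized hemispheric power difference (e.g., posterior alpha).
    Ranges from -1 to +1; 0 means no lateralization."""
    return (power_left_hemi - power_right_hemi) / (power_left_hemi + power_right_hemi)

def decode_attended_side(li):
    """Alpha power drops in the hemisphere contralateral to the attended
    hemifield, so a positive index (left > right power) suggests attention
    to the left visual field."""
    return "left" if li > 0 else "right"

li = lateralization_index(12.0, 8.0)
print(li, decode_attended_side(li))  # → 0.2 left
```

Single-trial classification, as in the study, would apply this kind of contrast (via a trained classifier rather than a fixed sign rule) to each trial's alpha power estimates.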
Maria C. Romero; Ilse C. Van Dromme; Peter Janssen The role of binocular disparity in stereoscopic images of objects in the macaque anterior intraparietal area Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e55340, 2013. @article{Romero2013, Neurons in the macaque Anterior Intraparietal area (AIP) encode depth structure in random-dot stimuli defined by gradients of binocular disparity, but the importance of binocular disparity in real-world objects for AIP neurons is unknown. We investigated the effect of binocular disparity on the responses of AIP neurons to images of real-world objects during passive fixation. We presented stereoscopic images of natural and man-made objects in which the disparity information was congruent or incongruent with disparity gradients present in the real-world objects, and images of the same objects where such gradients were absent. Although more than half of the AIP neurons were significantly affected by binocular disparity, the great majority of AIP neurons remained image selective even in the absence of binocular disparity. AIP neurons tended to prefer stimuli in which the depth information derived from binocular disparity was congruent with the depth information signaled by monocular depth cues, indicating that these monocular depth cues have an influence upon AIP neurons. Finally, in contrast to neurons in the inferior temporal cortex, AIP neurons do not represent images of objects in terms of categories such as animate-inanimate, but utilize representations based upon simple shape features including aspect ratio. |
Joost Rommers; Antje S. Meyer; Peter Praamstra; Falk Huettig The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to Journal Article In: Neuropsychologia, vol. 51, no. 3, pp. 437–447, 2013. @article{Rommers2013, When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words' referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners' eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., "moon" in "In 1969 Neil Armstrong was the first man to set foot on the moon"). 500 ms before the acoustic onset of the critical word, participants were shown four-object displays featuring three unrelated distractor objects and a critical object, which was either the target object (e.g., moon), an object with a similar shape (e.g., tomato), or an unrelated control object (e.g., rice). In a time window before shape information from the spoken target word could be retrieved, participants already tended to fixate both the target and the shape competitors more often than they fixated the control objects, indicating that they had anticipatorily activated the shape of the upcoming word's referent. This was confirmed in Experiment 2, which was an ERP experiment without picture displays. Participants listened to the same lead-in sentences as in Experiment 1. The sentence-final words corresponded to the predictable target, the shape competitor, or the unrelated control object (yielding, for instance, "In 1969 Neil Armstrong was the first man to set foot on the moon/tomato/rice"). N400 amplitude in response to the final words was significantly attenuated in the shape-related compared to the unrelated condition. 
Taken together, these results suggest that listeners can activate perceptual attributes of objects before they are referred to in an utterance. |
Clive R. Rosenthal; Tammy W. C. Ng; Christopher Kennard Generalisation of new sequence knowledge depends on response modality Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e53990, 2013. @article{Rosenthal2013, New visuomotor skills can guide behaviour in novel situations. Prior studies indicate that learning a visuospatial sequence via responses based on manual key presses leads to effector- and response-independent knowledge. Little is known, however, about the extent to which new sequence knowledge can generalise, and thereby guide behaviour, outside of the manual response modality. Here, we examined whether learning a visuospatial sequence either via manual (key presses, without eye movements), oculomotor (obligatory eye movements), or perceptual (covert reorienting of visuospatial attention) responses supported generalisation to direct and indirect tests administered either in the same (baseline conditions) or a novel response modality (transfer conditions) with respect to initial study. Direct tests measured the use of conscious knowledge about the studied sequence, whereas the indirect tests did not ostensibly draw on the study phase and measured response priming. Oculomotor learning supported the use of conscious knowledge on the manual direct tests, whereas manual learning supported generalisation to the oculomotor direct tests but did not support the conscious use of knowledge. Sequence knowledge acquired via perceptual responses did not generalise onto any of the manual tests. Manual, oculomotor, and perceptual sequence learning all supported generalisation in the baseline conditions. Notably, the manual baseline condition and the manual to oculomotor transfer condition differed in the magnitude of general skill acquired during the study phase; however, general skill did not predict performance on the post-study tests. 
The results demonstrated that generalisation was only affected by the responses used to initially code the visuospatial sequence when new knowledge was applied to a novel response modality. We interpret these results in terms of response-effect distinctiveness, the availability of integrated effector- and motor-plan based information, and discuss their implications for neurocognitive accounts of sequence learning. |
Nicholas M. Ross; Eileen Kowler Eye movements while viewing narrated, captioned, and silent videos Journal Article In: Journal of Vision, vol. 13, no. 4, pp. 1–19, 2013. @article{Ross2013, Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. |
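The apportioning of viewing time between captions and video can be quantified as dwell-time proportions over areas of interest. A sketch, assuming fixations are recorded as (x, y, duration) tuples and using an illustrative caption region (these names and coordinates are our own, not the study's):

```python
def dwell_proportion(fixations, region):
    """Fraction of total fixation time spent inside a rectangular region.
    fixations: list of (x, y, duration_ms); region: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    total = sum(d for _, _, d in fixations)
    inside = sum(d for x, y, d in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / total if total else 0.0

caption_area = (0, 600, 1024, 768)   # bottom strip of a 1024x768 display
fixes = [(500, 380, 300), (400, 650, 250), (620, 700, 450)]
print(dwell_proportion(fixes, caption_area))  # → 0.7
```

Applied per clip and condition, this kind of measure yields the >40% caption-reading proportion reported above.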
Clare M. Press; James M. Kilner The time course of eye movements during action observation reflects sequence learning Journal Article In: NeuroReport, vol. 24, no. 14, pp. 822–826, 2013. @article{Press2013, When we observe object-directed actions such as grasping, we make predictive eye movements. However, eye movements are reactive when observing similar actions without objects. This reactivity may reflect a lack of attribution of intention to observed actors when they perform actions without 'goals'. Alternatively, it may simply signal that there is no cue present that has been predictive of the subsequent trajectory in the observer's experience. To test this hypothesis, the present study investigated how the time course of eye movements changes as a function of visual experience of predictable, but arbitrary, actions without objects. Participants observed a point-light display of a model performing sequential finger actions in a serial reaction time task. Eye movements became less reactive across blocks. In addition, participants who exhibited more predictive eye movements subsequently demonstrated greater learning when required either to execute, or to recognize, the sequence. No measures were influenced by whether participants had been instructed that the observed movements were human or lever generated. The present data indicate that eye movements when observing actions without objects reflect the extent to which the trajectory can be predicted through experience. The findings are discussed with reference to the implications for the mechanisms supporting perception of actions both with and without objects as well as those mediating inanimate object processing. |
Tim J. Preston; Fei Guo; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Neural representations of contextual guidance in visual search of real-world scenes Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7846–7855, 2013. @article{Preston2013, Exploiting scene context and object– object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. |
Silvia Primativo; Lisa S. Arduino; Maria De Luca; Roberta Daini; Marialuisa Martelli Neglect dyslexia: A matter of "good looking" Journal Article In: Neuropsychologia, vol. 51, no. 11, pp. 2109–2119, 2013. @article{Primativo2013, Right-brain-damaged patients with unilateral spatial neglect (USN) often make left-sided errors in reading single words or pseudowords (neglect dyslexia, ND). We propose that both left neglect and low fixation accuracy account for reading errors in neglect dyslexia. Eye movements were recorded in USN patients with (ND+) and without (ND-) neglect dyslexia and in a matched control group of right brain-damaged patients without neglect (USN-). Unlike ND- and controls, ND+ patients showed left lateralized omission errors and a distorted eye movement pattern in both a reading aloud task and a non-verbal saccadic task. During reading, the total number of fixations was larger in these patients independent of visual hemispace, and most fixations were inaccurate. Similarly, in the saccadic task only ND+ patients were unable to reach the moving dot. A third experiment addressed the nature of the left lateralization in reading error distribution by simulating neglect dyslexia in ND- patients. ND- and USN- patients had to perform a speeded reading-at-threshold task that did not allow for eye movements. When stimulus exploration was prevented, ND- patients, but not controls, produced a pattern of errors similar to that of ND+ with unlimited exposure time (e.g., left-sided errors). We conclude that neglect dyslexia reading errors may arise in USN patients as a consequence of an additional and independent deficit unrelated to the orthographic material. In particular, the presence of an altered oculo-motor pattern, preventing the automatic execution of the fine saccadic eye movements involved in reading, uncovers, in USN patients, the attentional bias also in reading single centrally presented words. |
Steven L. Prime; Jonathan J. Marotta Gaze strategies during visually-guided versus memory-guided grasping Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 291–305, 2013. @article{Prime2013, Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g., remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream. |
Benjamin Reichelt; Sina Kühnel; Dennis E. Dal Mas The influence of explicit and implicit memory processes on experience-dependent eye movements Journal Article In: Procedia Social and Behavioral Sciences, vol. 82, pp. 455–460, 2013. @article{Reichelt2013, In some studies, experience-dependent eye movements have been reported both with and without conscious awareness. Our study therefore aims to clarify whether experience-dependent eye movements are influenced mainly by implicit or by explicit memory processes. In Experiment 1, participants saw photographed scenes that were novel, repeated, or repeated with a manipulation (an object added or removed). In Experiment 2, participants viewed novel and repeated scenes distributed over three days. Participants subsequently had to recognize whether the scenes were novel, repeated or manipulated. In both experiments, experience-dependent eye movements were observed both when participants were aware of the manipulation or repetition and when they were unaware. In contrast to previous studies, our results suggest that explicit as well as implicit memory processes influence experience-dependent eye movements. |
Eva Reinisch; Matthias J. Sjerps The uptake of spectral and temporal cues in vowel perception is rapidly influenced by context Journal Article In: Journal of Phonetics, vol. 41, no. 2, pp. 101–116, 2013. @article{Reinisch2013, Speech perception is dependent on auditory information within phonemes such as spectral or temporal cues. The perception of those cues, however, is affected by auditory information in surrounding context (e.g., a fast context sentence can make a target vowel sound subjectively longer). In a two-by-two design the current experiments investigated when these different factors influence vowel perception. Dutch listeners categorized minimal word pairs such as /tak/-/ta:k/ ("branch"-"task") embedded in a context sentence. Critically, the Dutch /a/-/a:/ contrast is cued by spectral and temporal information. We varied the second formant (F2) frequencies and durations of the target vowels. Independently, we also varied the F2 and duration of all segments in the context sentence. The timecourse of cue uptake on the targets was measured in a printed-word eye-tracking paradigm. Results show that the uptake of spectral cues slightly precedes the uptake of temporal cues. Furthermore, acoustic manipulations of the context sentences influenced the uptake of cues in the target vowel immediately. That is, listeners did not need additional time to integrate spectral or temporal cues of a target sound with auditory information in the context. These findings argue for an early locus of contextual influences in speech perception. |
Brian A. Richardson; Tyler Cluff; James Lyons; Ramesh Balasubramaniam An eye-to-hand magnet effect reveals distinct spatial interference in motor planning and execution Journal Article In: Experimental Brain Research, vol. 225, no. 3, pp. 443–454, 2013. @article{Richardson2013, An important question in oculomanual control is whether motor planning and execution modulate interference between motion of the eyes and hands. Here we investigated oculomanual interference using a novel paradigm that required saccadic eye movements and unimanual finger tapping. We examined finger trajectories for spatial interference caused by concurrent saccades. The first experiment used synchronous cues so that saccades and taps shared a common timekeeping goal. We found that finger trajectories showed bilateral interference where either finger was attracted in the direction of the accompanying saccade. The second experiment avoided interference due to shared planning resources by examining interference caused by reactive saccades. Here, we observed a lesser degree of execution-dependent coupling where the finger trajectory deviated only when reactive saccades were directed toward the hemifield of the responding hand. Our results show that distinct forms of eye-to-hand coupling emerge according to the demands of the task. |
Fabio Richlan; Benjamin Gagl; Sarah Schuster; Stefan Hawelka; Josef Humenberger; Florian Hutzler A new high-speed visual stimulation method for gaze-contingent eye movement and brain activity studies Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 24, 2013. @article{Richlan2013, Approaches using eye movements as markers of ongoing brain activity to investigate perceptual and cognitive processes were able to implement highly sophisticated paradigms driven by eye movement recordings. Crucially, these paradigms involve display changes that have to occur during the time of saccadic blindness, when the subject is unaware of the change. Therefore, a combination of high-speed eye tracking and high-speed visual stimulation is required in these paradigms. For combined eye movement and brain activity studies (e.g., fMRI, EEG, MEG), fast and exact timing of display changes is especially important, because of the high susceptibility of the brain to visual stimulation. Eye tracking systems already achieve sampling rates up to 2000 Hz, but recent LCD technologies for computer screens reduced the temporal resolution to mostly 60 Hz, which is too slow for gaze-contingent display changes. We developed a high-speed video projection system, which is capable of reliably delivering display changes within the time frame of < 5 ms. This could not be achieved even with the fastest cathode ray tube (CRT) monitors available (< 16 ms). The present video projection system facilitates the realization of cutting-edge eye movement research requiring reliable high-speed visual stimulation (e.g., gaze-contingent display changes, short-time presentation, masked priming). Moreover, this system can be used for fast visual presentation in order to assess brain activity using various methods, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). 
The latter technique was previously excluded from high-speed visual stimulation, because it is not possible to operate conventional CRT monitors in the strong magnetic field of an MRI scanner. Therefore, the present video projection system offers new possibilities for studying eye movement-related brain activity using a combination of eye tracking and fMRI. |
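The timing argument above can be summarized as a simple latency budget: a gaze-contingent display change is invisible only if saccade detection plus the display swap finish before the saccade lands. A sketch with illustrative numbers (the paper reports < 5 ms for the projection system versus < 16 ms for the fastest CRTs; the detection and saccade durations below are our assumptions):

```python
def display_change_ok(detect_ms, swap_ms, saccade_ms):
    """A gaze-contingent change stays invisible (saccadic suppression)
    only if online saccade detection plus the display swap complete
    while the eye is still in flight."""
    return detect_ms + swap_ms < saccade_ms

# With ~4 ms to detect the saccade online and a ~20 ms saccade:
print(display_change_ok(4, 5, 20))    # high-speed projector (< 5 ms): True
print(display_change_ok(4, 16, 20))   # fast CRT refresh (~16 ms): False
```

The same budget explains why a 60 Hz LCD (a ~16.7 ms frame period, plus buffering delays) cannot reliably complete the change within a short reading saccade.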
Gerulf Rieger; Allen M. Rosenthal; Brian M. Cash; Joan A. W. Linsenmeier; J. Michael Bailey; Ritch C. Savin-Williams Male bisexual arousal: A matter of curiosity? Journal Article In: Biological Psychology, vol. 94, no. 3, pp. 479–489, 2013. @article{Rieger2013, Conflicting evidence exists regarding whether bisexual-identified men are sexually aroused to both men and women. We hypothesized that a distinct characteristic, level of curiosity about sexually diverse acts, distinguishes bisexual-identified men with and without bisexual arousal. Study 1 assessed men's (n = 277) sexual arousal via pupil dilation to male and female sexual stimuli. Bisexual men were, on average, higher in their sexual curiosity than other men. Despite this general difference, only bisexual-identified men with elevated sexual curiosity showed bisexual arousal. Those lower in curiosity had responses resembling those of homosexual men. Study 2 assessed men's (n = 72) sexual arousal via genital responses and replicated findings of Study 1. Study 3 provided information on the validity of our measure of sexual curiosity by relating it to general curiosity and sexual sensation seeking (n = 83). Based on their sexual arousal and personality, at least two groups of men identify as bisexual. |
Hector Rieiro; Susana Martinez-Conde; Stephen L. Macknik Perceptual elements in Penn & Teller's “Cups and Balls” magic trick Journal Article In: PeerJ, vol. 1, pp. 1–12, 2013. @article{Rieiro2013, Magic illusions provide the perceptual and cognitive scientist with a toolbox of experimental manipulations and testable hypotheses about the building blocks of conscious experience. Here we studied several sleight-of-hand manipulations in the performance of the classic "Cups and Balls" magic trick (where balls appear and disappear inside upside-down opaque cups). We examined a version inspired by the entertainment duo Penn & Teller, conducted with three opaque and subsequently with three transparent cups. Magician Teller used his right hand to load (i.e. introduce surreptitiously) a small ball inside each of two upside-down cups, one at a time, while using his left hand to remove a different ball from the upside-down bottom of the cup. The sleight at the third cup involved one of six manipulations: (a) standard maneuver, (b) standard maneuver without a third ball, (c) ball placed on the table, (d) ball lifted, (e) ball dropped to the floor, and (f) ball stuck to the cup. Seven subjects watched the videos of the performances while reporting, via button press, whenever balls were removed from the cups/table (button "1") or placed inside the cups/on the table (button "2"). Subjects' perception was more accurate with transparent than with opaque cups. Perceptual performance was worse for the conditions where the ball was placed on the table, or stuck to the cup, than for the standard maneuver. The condition in which the ball was lifted displaced the subjects' gaze position the most, whereas the condition in which there was no ball caused the smallest gaze displacement. Training improved the subjects' perceptual performance. 
Occlusion of the magician's face did not affect the subjects' perception, suggesting that gaze misdirection does not play a strong role in the Cups and Balls illusion. Our results have implications for how to optimize the performance of this classic magic trick, and for the types of hand and object motion that maximize magic misdirection. |
Chris A. Rishel; Gang Huang; David J. Freedman Independent category and spatial encoding in parietal cortex Journal Article In: Neuron, vol. 77, no. 5, pp. 969–979, 2013. @article{Rishel2013, The posterior parietal cortex plays a central role in spatial functions, such as spatial attention and saccadic eye movements. However, recent work has increasingly focused on the role of parietal cortex in encoding nonspatial cognitive factors such as visual categories, learned stimulus associations, and task rules. The relationship between spatial encoding and nonspatial cognitive signals in parietal cortex, and whether cognitive signals are robustly encoded in the presence of strong spatial neuronal responses, is unknown. We directly compared nonspatial cognitive and spatial encoding in the lateral intraparietal (LIP) area by training monkeys to perform a visual categorization task during which they made saccades toward or away from LIP response fields (RFs). Here we show that strong saccade-related responses minimally influence robustly encoded category signals in LIP. This suggests that cognitive and spatial signals are encoded independently in LIP and underscores the role of parietal cortex in nonspatial cognitive functions. |
Evan F. Risko; Erin A. Maloney; Jonathan A. Fugelsang Paying attention to attention: Evidence for an attentional contribution to the size congruity effect Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 6, pp. 1137–1147, 2013. @article{Risko2013, Understanding the mechanisms supporting our comprehension of magnitude information represents a key goal in cognitive psychology. A major phenomenon employed in the pursuit of this goal has been the physical size congruity effect, namely the observation that comparing the relative numerical sizes of two numbers is influenced by their relative physical sizes. The standard account of the physical size congruity effect attributes it to the automatic influence of the comparison of irrelevant physical magnitudes on numerical judgments. Here we develop an alternative account of this effect on the basis of the operation of attention in the typical size congruity display and the temporal dynamics of number comparison. We also provide a test of a number of predictions derived from this alternative account by combining a physical size congruity manipulation with a manipulation designed to alter the operation of attention within the typical size congruity display (i.e., a manipulation of the relative onsets of the digits). This test provides evidence consistent with an attentional contribution to the size congruity effect. Implications for our understanding of magnitude and the interactions between attention and magnitude are discussed. |
Dana Schneider; Virginia P. Slaughter; Andrew P. Bayliss; Paul E. Dux A temporally sustained implicit theory of mind deficit in autism spectrum disorders Journal Article In: Cognition, vol. 129, no. 2, pp. 410–417, 2013. @article{Schneider2013, Eye movements during false-belief tasks can reveal an individual's capacity to implicitly monitor others' mental states (theory of mind - ToM). It has been suggested, based on the results of a single-trial experiment, that this ability is impaired in those with a high-functioning autism spectrum disorder (ASD), despite neurotypical-like performance on explicit ToM measures. However, given that there are known attention differences and visual hypersensitivities in ASD, it is important to establish whether such impairments are evident over time. In addition, investigating implicit ToM using a repeated trial approach allows an assessment of whether learning processes can reduce the ASD impairment in this ability, as is the case with explicit ToM. Here we investigated the temporal profile of implicit ToM in individuals with ASD and a control group. Despite similar performance on explicit ToM measures, ASD-diagnosed individuals showed no evidence of implicit false-belief tracking even over a one-hour period and many trials, whereas control participants did. These findings demonstrate that the systems involved in implicit and explicit ToM are distinct and hint that impaired implicit false-belief tracking may play an important role in ASD. Further, they indicate that learning processes do not alleviate this impairment across the presentation of multiple trials. |
Matthew H. Schneps; Jenny M. Thomson; Gerhard Sonnert; Marc Pomplun; Chen Chen; Amanda Heffner-Wong Shorter lines facilitate reading in those who struggle Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e71161, 2013. @article{Schneps2013, People with dyslexia, who ordinarily struggle to read, sometimes remark that reading is easier when e-readers are used. Here, we used eye tracking to observe high school students with dyslexia as they read using these devices. Among the factors investigated, we found that reading using a small device resulted in substantial benefits, improving reading speeds by 27%, reducing the number of fixations by 11%, and importantly, reducing the number of regressive saccades by more than a factor of 2, with no cost to comprehension. Given that an expected trade-off between horizontal and vertical regression was not observed when line lengths were altered, we speculate that these effects occur because sluggish attention spreads perception to the left as the gaze shifts during reading. Short lines eliminate crowded text to the left, reducing regression. The effects of attention modulation by the hand, and of increased letter spacing to reduce crowding, were also found to modulate the oculomotor dynamics in reading, but whether these factors resulted in benefits or costs depended on characteristics, such as visual attention span, that varied within our sample. |
Casey A. Schofield; Albrecht W. Inhoff; Meredith E. Coles Time-course of attention biases in social phobia Journal Article In: Journal of Anxiety Disorders, vol. 27, no. 7, pp. 661–669, 2013. @article{Schofield2013, Theoretical models of social phobia implicate preferential attention to social threat in the maintenance of anxiety symptoms, though there has been limited work characterizing the nature of these biases over time. The current study utilized eye-movement data to examine the time-course of visual attention over 1500 ms trials of a probe detection task. Nineteen participants with a primary diagnosis of social phobia based on DSM-IV criteria and 20 non-clinical controls completed this task with angry, fearful, and happy face trials. Overt visual attention to the emotional and neutral faces was measured in 50 ms segments across the trial. Over time, participants with social phobia attend less to emotional faces and specifically less to happy faces compared to controls. Further, attention to emotional relative to neutral expressions did not vary notably by emotion for participants with social phobia, but control participants showed a pattern after 1000 ms in which over time they preferentially attended to happy expressions and avoided negative expressions. Findings highlight the importance of considering attention biases to positive stimuli as well as the pattern of attention between groups. These results suggest that attention "bias" in social phobia may be driven by a relative lack of the biases seen in non-anxious participants. |
Jörg Schorer; Rebecca Rienhoff; Lennart Fischer; Joseph Baker Foveal and peripheral fields of vision influences perceptual skill in anticipating opponents' attacking position in volleyball Journal Article In: Applied Psychophysiology Biofeedback, vol. 38, no. 3, pp. 185–192, 2013. @article{Schorer2013, The importance of perceptual-cognitive expertise in sport has been repeatedly demonstrated. In this study we examined the role of different sources of visual information (i.e., foveal versus peripheral) in anticipating volleyball attack positions. Expert (n = 11), advanced (n = 13) and novice (n = 16) players completed an anticipation task that involved predicting the location of volleyball attacks. Video clips of volleyball attacks (n = 72) were spatially and temporally occluded to provide varying amounts of information to the participant. In addition, participants viewed the attacks under three visual conditions: full vision, foveal vision only, and peripheral vision only. Analysis of variance revealed significant between group differences in prediction accuracy with higher skilled players performing better than lower skilled players. Additionally, we found significant differences between temporal and spatial occlusion conditions. Both of those factors interacted separately, but not combined, with expertise. Importantly, for experts the sum of both fields of vision was superior to either source in isolation. Our results suggest different sources of visual information work collectively to facilitate expert anticipation in time-constrained sports and reinforce the complexity of expert perception. |
Elizabeth R. Schotter Synonyms provide semantic preview benefit in English Journal Article In: Journal of Memory and Language, vol. 69, no. 4, pp. 619–633, 2013. @article{Schotter2013a, While orthographic and phonological preview benefits in reading are uncontroversial (see Schotter, Angele, & Rayner, 2012 for a review), researchers have debated the existence of semantic preview benefit with positive evidence in Chinese and German, but no support in English. Two experiments, using the gaze-contingent boundary paradigm (Rayner, 1975), show that semantic preview benefit can be observed in English when the preview and target are synonyms (share the same or highly similar meaning, e.g., curlers-rollers). However, no semantic preview benefit was observed for semantic associates (e.g., curlers-styling). These different preview conditions represent different degrees to which the meaning of the sentence changes when the preview is replaced by the target. When this continuous variable (determined by a norming procedure) was used as the predictor in the analyses, there was a significant relationship between it and all reading time measures, suggesting that similarity in meaning between what is accessed parafoveally and what is processed foveally may be an important influence on the presence of semantic preview benefit. Why synonyms provide semantic preview benefit in reading English is discussed in relation to (1) previous failures to find semantic preview benefit in English and (2) the fact that semantic preview benefit is observed in other languages even for non-synonymous words. Semantic preview benefit is argued to depend on several factors: attentional resources, depth of orthography, and degree of similarity between preview and target. |
Elizabeth R. Schotter; Victor S. Ferreira; Keith Rayner Parallel object activation and attentional gating of information: Evidence from eye movements in the multiple object naming paradigm Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 365–374, 2013. @article{Schotter2013, Do we access information from any object we can see, or do we access information only from objects that we intend to name? In 3 experiments using a modified multiple object naming paradigm, subjects were required to name several objects in succession when previews appeared briefly and simultaneously in the same location as the target as well as at another location. In Experiment 1, preview benefit (faster processing of the target when the preview was related, i.e., a mirror image of the target, than when it was unrelated both semantically and phonologically) was found for the preview in the target location but not for a location that was never to be named. In Experiment 2, preview benefit was found if a related preview appeared in either the target location or the third-to-be-named location. Experiment 3 showed the difference between results from the first 2 experiments was not due to the number of objects on the screen. These data suggest that attention serves to gate visual input about objects based on the intention to name them and that information from one intended-to-be-named object can facilitate processing of an object in another location. |
Alexander C. Schutz; Felix Lossin; Dirk Kerzel Temporal stimulus properties that attract gaze to the periphery and repel gaze from fixation Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–17, 2013. @article{Schutz2013, Humans use saccadic eye movements to fixate different parts of their visual environment. While stimulus features that determine the location of the next fixation in static images have been extensively studied, temporal stimulus features that determine the timing of the gaze shifts have received less attention. It is also unclear if stimulus features at the present gaze location can trigger gaze shifts to another location. To investigate these questions, we asked observers to switch their gaze between two blobs. In three different conditions, either the fixated blob, the peripheral blob, or both blobs were flickering. A time-frequency analysis of the flickering noise values, time locked to the gaze shifts, revealed significant phase locking in a time window 300 to 100 ms before the gaze shift at temporal frequencies below 20 Hz. The average phase angles at these time-frequency points indicated that observers' gaze was repelled by decreasing contrast of the fixated blob and attracted by increasing contrast of the peripheral blob. These results show that temporal properties of both fixated and peripheral stimuli are capable of triggering gaze shifts. |
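The phase-locking analysis described in the Schutz et al. abstract — testing whether stimulus phase is consistent across trials when time-locked to gaze shifts — follows the same logic as a standard phase-locking-value (PLV) computation. A minimal sketch of that general technique (not the authors' actual pipeline; sampling rate, band, and signals are all illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(trials, fs, band):
    """Phase locking across trials for one frequency band.

    trials: array (n_trials, n_samples), each row a signal time-locked
            to an event (here, hypothetically, a gaze shift).
    fs:     sampling rate in Hz.
    band:   (low, high) edges of the frequency band in Hz.
    Returns an (n_samples,) array of PLV values in [0, 1].
    """
    # Band-pass filter each trial, then extract instantaneous phase.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    # Length of the mean resultant vector across trials, per time point:
    # 1 = perfectly consistent phase, ~0 = random phase.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Illustrative use: 50 trials of a 10 Hz signal with identical phase
# give PLV near 1; randomizing the phase per trial drives PLV down.
fs = 500
t = np.arange(0, 1, 1 / fs)
locked = np.tile(np.sin(2 * np.pi * 10 * t), (50, 1))
plv = phase_locking_value(locked, fs, (8.0, 12.0))
```

Significance of PLV at a given time-frequency point is typically assessed against a surrogate distribution (e.g., trial-shuffled phases), which is the kind of test the abstract's "significant phase locking" refers to.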
Immo Schütz; Denise Y. P. Henriques; K. Fiehler Gaze-centered spatial updating in delayed reaching even in the presence of landmarks Journal Article In: Vision Research, vol. 87, pp. 46–52, 2013. @article{Schuetz2013, Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and more precise than when reaching without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain seems to still use gaze-dependent representations but combine them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets. |
Yin Su; Li-Lin Rao; Hong-Yue Sun; Xue-Lei Du; Xingshan Li; Shu Li Is making a risky choice based on a weighting and adding process? An eye-tracking investigation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1765–1780, 2013. @article{Su2013, The debate about whether making a risky choice is based on a weighting and adding process has a long history and is still unresolved. To address this long-standing controversy, we developed a comparative paradigm. Participants' eye movements in 2 risky choice tasks that required participants to choose between risky options in single-play and multiple-play conditions were separately compared with those in a baseline task in which participants naturally performed a deliberate calculation following a weighting and adding process. The results showed that, when participants performed the multiple-play risky choice task, their eye movements were similar to those in the baseline task, suggesting that participants may use a weighting and adding process to make risky choices in multiple-play conditions. In contrast, participants' eye movements were different in the single-play risky choice task versus the baseline task, suggesting that participants were not likely to use a weighting and adding process to make risky choices in single-play conditions and were more likely to use a heuristic process. We concluded that an expectation-based index for predicting risk preferences is applicable in multiple-play conditions but not in single-play conditions, implying the need to improve current theories that postulate the use of a heuristic process. |
Pei Sun; Justin L. Gardner; Mauro Costagli; Kenichi Ueno; R. Allen Waggoner; Keiji Tanaka; Kang Cheng In: Cerebral Cortex, vol. 23, no. 7, pp. 1618–1629, 2013. @article{Sun2013, Cells in the animal early visual cortex are sensitive to contour orientations and form repeated structures known as orientation columns. At the behavioral level, there exist 2 well-known global biases in orientation perception (oblique effect and radial bias) in both animals and humans. However, their neural bases are still under debate. To unveil how these behavioral biases are achieved in the early visual cortex, we conducted high-resolution functional magnetic resonance imaging experiments with a novel continuous and periodic stimulation paradigm. By inserting resting recovery periods between successive stimulation periods and introducing a pair of orthogonal stimulation conditions that differed by 90 degrees continuously, we focused on analyzing a blood oxygenation level-dependent response modulated by the change in stimulus orientation and reliably extracted orientation preferences of single voxels. We found that there are more voxels preferring horizontal and vertical orientations, a physiological substrate underlying the oblique effect, and that these over-representations of horizontal and vertical orientations are prevalent in the cortical regions near the horizontal- and vertical-meridian representations, a phenomenon related to the radial bias. Behaviorally, we also confirmed that there exists perceptual superiority for horizontal and vertical orientations around horizontal and vertical meridians, respectively. Our results, thus, refined the neural mechanisms of these 2 global biases in orientation perception. |
Megumi Suzuki; Jeremy M. Wolfe; Todd S. Horowitz; Yasuki Noguchi Apparent color-orientation bindings in the periphery can be influenced by feature binding in central vision Journal Article In: Vision Research, vol. 82, pp. 58–65, 2013. @article{Suzuki2013, A previous study reported the misbinding illusion in which visual features belonging to overlapping sets of items were erroneously integrated (Wu, Kanai, & Shimojo, 2004, Nature, 429, 262). In this illusion, central and peripheral portions of a transparent motion field combined color and motion in opposite fashions. When observers saw such stimuli, their perceptual color-motion bindings in the periphery were re-arranged in such a way as to accord with the bindings in the central region, resulting in erroneous color-motion pairings (misbinding) in peripheral vision. Here we show that this misbinding illusion is also seen in the binding of color and orientation. When the central field of a stimulus array was composed of objects that had coherent (regular) color-orientation pairings, subjective color-orientation bindings in the peripheral stimuli were automatically altered to match the coherent pairings of the central stimuli. Interestingly, the illusion was induced only when all items in the central field combined color and orientation in an orthogonal fashion (e.g. all red bars were horizontal and all green bars were vertical). If this orthogonality was disrupted (e.g. all red and green bars were horizontal), the central field lost its power to induce the misbinding illusion in the peripheral stimuli. The original misbinding illusion study proposed that the illusion stemmed from a perceptual extrapolation that resolved peripheral ambiguity with clear central vision. However, our present results indicate that visual analyses of the correlational structure between two features (color and orientation) are critical for the illusion to occur, suggesting a rapid integration of multiple featural cues in the human visual system. |
Sruthi K. Swaminathan; Nicolas Y. Masse; David J. Freedman A comparison of lateral and medial intraparietal areas during a visual categorization task Journal Article In: Journal of Neuroscience, vol. 33, no. 32, pp. 13157–13170, 2013. @article{Swaminathan2013, Categorization is essential for interpreting sensory stimuli and guiding our actions. Recent studies have revealed robust neuronal category representations in the lateral intraparietal area (LIP). Here, we examine the specialization of LIP for categorization and the roles of other parietal areas by comparing LIP and the medial intraparietal area (MIP) during a visual categorization task. MIP is involved in goal-directed arm movements and visuomotor coordination but has not been implicated in non-motor cognitive functions, such as categorization. As expected, we found strong category encoding in LIP. Interestingly, we also observed category signals in MIP. However, category signals were stronger and appeared with a shorter latency in LIP than MIP. In this task, monkeys indicated whether a test stimulus was a category match to a previous sample with a manual response. Test-period activity in LIP showed category encoding and distinguished between matches and non-matches. In contrast, MIP primarily reflected the match/non-match status of test stimuli, with a strong preference for matches (which required a motor response). This suggests that, although category representations are distributed across parietal cortex, LIP and MIP play distinct roles: LIP appears more involved in the categorization process itself, whereas MIP is more closely tied to decision-related motor actions. |
Bernard Marius Hart; Hannah Claudia Elfriede Fanny Schmidt; Ingo Klein-Harmeyer; Wolfgang Einhäuser Attention in natural scenes: Contrast affects rapid visual processing and fixations alike Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–10, 2013. @article{tHart2013a, For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. Both paradigms operate on different time scales, and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences, and others to freely view the same target images for 3 s, while their gaze is tracked. In some stimuli, the target's contrast is modified (increased/decreased) and its background modified either in the same or in the opposite way. We find that increasing target contrast relative to the background increases fixations and detection alike, whereas decreasing target contrast and simultaneously increasing background contrast has little effect. Contrast increase for the whole image (target + background) improves detection, decrease worsens detection, whereas fixation probability remains unaffected by whole-image modifications. Object-unrelated local increase or decrease of contrast attracts gaze, but less than actual objects, supporting a precedence of objects over low-level features. Detection and fixation probability are correlated: the more likely a target is detected in one paradigm, the more likely it is fixated in the other. Hence, the link between overt and covert attention, which has been established in simple stimuli, transfers to more naturalistic scenarios. |
Bernard Marius Hart; Hannah C. E. F. Schmidt; Christine Roth; Wolfgang Einhäuser Fixations on objects in natural scenes: Dissociating importance from salience Journal Article In: Frontiers in Psychology, vol. 4, pp. 455, 2013. @article{tHart2013, The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's "importance" for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named ("common"/"important") or a rarely named ("rare"/"unimportant") object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. |
Karine Tadros; Nicolas Dupuis-Roy; Daniel Fiset; Martin Arguin; Frédéric Gosselin Reading laterally: The cerebral hemispheric use of spatial frequencies in visual word recognition Journal Article In: Journal of Vision, vol. 13, no. 1, pp. 1–12, 2013. @article{Tadros2013, It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding. |
Durk Talsma; Brian J. White; Sebastiaan Mathôt; Douglas P. Munoz; Jan Theeuwes A retinotopic attentional trace after saccadic eye movements: Evidence from event-related potentials Journal Article In: Journal of Cognitive Neuroscience, vol. 25, no. 9, pp. 1563–1577, 2013. @article{Talsma2013, Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts. |
Heng Ru May Tan; Hartmut Leuthold; Joachim Gross Gearing up for action: Attentive tracking dynamically tunes sensory and motor oscillations in the alpha and beta band Journal Article In: NeuroImage, vol. 82, pp. 634–644, 2013. @article{Tan2013, Allocation of attention during goal-directed behavior entails simultaneous processing of relevant and attenuation of irrelevant information. How the brain delegates such processes when confronted with dynamic (biological motion) stimuli and harnesses relevant sensory information for sculpting prospective responses remains unclear. We analyzed neuromagnetic signals that were recorded while participants attentively tracked an actor's pointing movement that ended at the location where subsequently the response-cue indicated the required response. We found the observers' spatial allocation of attention to be dynamically reflected in lateralized parieto-occipital alpha (8-12. Hz) activity and to have a lasting influence on motor preparation. Specifically, beta (16-25. Hz) power modulation reflected observers' tendency to selectively prepare for a spatially compatible response even before knowing the required one. We discuss the observed frequency-specific and temporally evolving neural activity within a framework of integrated visuomotor processing and point towards possible implications about the mechanisms involved in action observation. |
Matthew J. Stainer; Kenneth C. Scott-Brown; Benjamin W. Tatler Behavioral biases when viewing multiplexed scenes: Scene structure and frames of reference for inspection Journal Article In: Frontiers in Psychology, vol. 4, pp. 624, 2013. @article{Stainer2013, Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate "sub-scenes." Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers. |
Adrian Staub; Ashley Benatar Individual differences in fixation duration distributions in reading Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 6, pp. 1304–1311, 2013. @article{Staub2013, The present study investigated the relationship between the location and skew of an individual reader's fixation duration distribution. The ex-Gaussian distribution was fit to eye fixation data from 153 subjects in five experiments, four previously presented and one new. The τ parameter was entirely uncorrelated with the μ and σ parameters; by contrast, there was a modest positive correlation between these parameters for lexical decision and speeded pronunciation response times. The conclusion that, for fixation durations, the degree of skew is uncorrelated with the location of the distribution's central tendency was also confirmed nonparametrically, by examining vincentile plots for subgroups of subjects. Finally, the stability of distributional parameters for a given subject was demonstrated to be relatively high. Taken together with previous findings of selective influence on the μ parameter of the fixation duration distribution, the present results suggest that in reading, the location and the skew of the fixation duration distribution may reflect functionally distinct processes. The authors speculate that the skew parameter may specifically reflect the frequency of processing disruption. |
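The ex-Gaussian decomposition used in the Staub and Benatar study — a Gaussian component (location μ, spread σ) convolved with an exponential tail (τ) — can be illustrated with the classic closed-form moment estimators. This is a generic sketch on synthetic data, not the authors' fitting procedure (they used maximum likelihood), and all parameter values are hypothetical:

```python
import numpy as np
from scipy import stats

def exgauss_moments(x):
    """Moment-based estimates of ex-Gaussian parameters (mu, sigma, tau).

    Uses the standard closed-form estimators from the sample mean m,
    standard deviation s, and skewness g:
        tau   = s * (g / 2) ** (1/3)
        mu    = m - tau
        sigma = sqrt(s**2 - tau**2)
    """
    m, s, g = x.mean(), x.std(ddof=1), stats.skew(x)
    tau = s * (g / 2.0) ** (1.0 / 3.0)
    mu = m - tau
    sigma = np.sqrt(max(s**2 - tau**2, 0.0))  # guard against negative variance
    return mu, sigma, tau

# Synthetic "fixation durations" in ms: Gaussian stage plus exponential tail.
rng = np.random.default_rng(42)
durations = rng.normal(200.0, 30.0, 5000) + rng.exponential(80.0, 5000)
mu_hat, sigma_hat, tau_hat = exgauss_moments(durations)
```

For real data, maximum-likelihood fitting (e.g., `scipy.stats.exponnorm.fit`, whose shape parameter K equals τ/σ) is the more robust choice; the moment estimators above are mainly useful as starting values and to convey how τ captures the distribution's skew separately from its location.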
Michael Stengel; Martin Eisemann; Stephan Wenger; Benjamin Hell; Marcus Magnor Optimizing apparent display resolution enhancement for arbitrary videos Journal Article In: IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3604–3613, 2013. @article{Stengel2013, Display resolution is frequently exceeded by available image resolution. Recently, apparent display resolution enhancement (ADRE) techniques show how characteristics of the human visual system can be exploited to provide super-resolution on high refresh rate displays. In this paper, we address the problem of generalizing the ADRE technique to conventional videos of arbitrary content. We propose an optimization-based approach to continuously translate the video frames in such a way that the added motion enables apparent resolution enhancement for the salient image region. The optimization considers the optimal velocity, smoothness, and similarity to compute an appropriate trajectory. In addition, we provide an intuitive user interface that allows the user to guide the algorithm interactively and preserve important compositions within the video. We present a user study evaluating apparent rendering quality and show versatility of our method on a variety of general test scenes. |
Denise Nadine Stephan; Iring Koch; Jessica Hendler; Lynn Huestegge Task switching, modality compatibility, and the supra-modal function of eye movements Journal Article In: Experimental Psychology, vol. 60, no. 2, pp. 90–99, 2013. @article{Stephan2013, Previous research suggested that specific pairings of stimulus and response modalities (visual-manual and auditory-vocal tasks) lead to better dual-task performance than other pairings (visual-vocal and auditory-manual tasks). In the present task-switching study, we further examined this modality compatibility effect and investigated the role of response modality by additionally studying oculomotor responses as an alternative to manual responses. Interestingly, the switch cost pattern revealed a much stronger modality compatibility effect for groups in which vocal and manual responses were combined as compared to a group involving vocal and oculomotor responses, where the modality compatibility effect was largely abolished. We suggest that in the vocal-manual response groups the modality compatibility effect is based on cross-talk of central processing codes due to preferred stimulus-response modality processing pathways, whereas the oculomotor response modality may be shielded against cross-talk due to the supra-modal functional importance of visual orientation. |
Julia M. Stephen; Brian A. Coffman; David B. Stone; Piyadasa Kodituwakku Differences in MEG gamma oscillatory power during performance of a prosaccade task in adolescents with FASD Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 900, 2013. @article{Stephen2013, Fetal alcohol spectrum disorder (FASD) is characterized by a broad range of behavioral and cognitive deficits that impact the long-term quality of life for affected individuals. However, the underlying changes in brain structure and function associated with these cognitive impairments are not well-understood. Previous studies identified deficits in behavioral performance of prosaccade tasks in children with FASD. In this study, we investigated group differences in gamma oscillations during performance of a prosaccade task. We collected magnetoencephalography (MEG) data from 15 adolescents with FASD and 20 age-matched healthy controls (HC) with a mean age of 15.9 ± 0.4 years during performance of a prosaccade task. Eye movements were recorded and synchronized to the MEG data using an MEG-compatible eye-tracker. The MEG data were analyzed relative to the onset of the visual saccade. Time-frequency analysis was performed using Fieldtrip with a focus on group differences in gamma-band oscillations. Following left target presentation, we identified four clusters over right frontal, right parietal, and left temporal/occipital cortex, with significantly different gamma-band (30-50 Hz) power between FASD and HC. Furthermore, visual M100 latencies described in Coffman et al. (2012) corresponded with increased gamma power over right central cortex in FASD only. Gamma-band differences were not identified for stimulus-averaged responses, implying that these gamma-band differences were related to differences in saccade network functioning. These differences in gamma-band power may provide indications of atypical development of cortical networks in individuals with FASD. |
Andrew J. Stewart; Matthew Haigh; Heather J. Ferguson Sensitivity to speaker control in the online comprehension of conditional tips and promises: An eye-tracking study Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 4, pp. 1022–1036, 2013. @article{Stewart2013, Statements of the form if… then… can be used to communicate conditional speech acts such as tips and promises. Conditional promises require the speaker to have perceived control over the outcome event, whereas conditional tips do not. In an eye-tracking study, we examined whether readers are sensitive to information about perceived speaker control during processing of conditionals embedded in context. On a number of eye-tracking measures, we found that readers are sensitive to whether or not the speaker of a conditional has perceived control over the consequent event; conditional promises (which require the speaker to have perceived control over the consequent) result in processing disruption for contexts where this control is absent. Conditional tips (which do not require perceived control) are processed equivalently easily regardless of context. These results suggest that readers rapidly utilize pragmatic information related to perceived control in order to represent conditional speech acts as they are read. |
Mallory C. Stites; Kara D. Federmeier; Elizabeth A. L. Stine-Morrow Cross-age comparisons reveal multiple strategies for lexical ambiguity resolution during natural reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1823–1841, 2013. @article{Stites2013a, Eye tracking was used to investigate how younger and older (60 or more years) adults use syntactic and semantic information to disambiguate noun/verb (NV) homographs (e.g., park). In event-related potential (ERP) work using the same materials, Lee and Federmeier (2009, 2011) found that young adults elicited a sustained frontal negativity to NV homographs when only syntactic cues were available (i.e., in syntactic prose); this effect was eliminated by semantic constraints. The negativity was only present in older adults with high verbal fluency. The current study shows parallel findings: Young adults exhibit inflated first fixation durations to NV homographs in syntactic prose, but not semantically congruent sentences. This effect is absent in older adults as a group. Verbal fluency modulates the effect in both age groups: High fluency is associated with larger first fixation effects in syntactic prose. Older, but not younger, adults also show significantly increased rereading of the NV homographs in syntactic prose. Verbal fluency modulates this effect as well: High fluency is associated with a reduced tendency to reread, regardless of age. This relationship suggests a trade-off between initial and downstream processing costs for ambiguity during natural reading. Together the eye-tracking and ERP data suggest that effortful meaning selection recruits mechanisms important for suppressing contextually inappropriate meanings, which also slow eye movements. Efficacy of frontotemporal circuitry, as captured by verbal fluency, predicts the success of engaging these mechanisms in both young and older adults. 
Failure to recruit these processes requires compensatory rereading or leads to comprehension failures (Lee & Federmeier, 2012). |
Mallory C. Stites; Steven G. Luke; Kiel Christianson The psychologist said quickly, "Dialogue descriptions modulate reading speed!" Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 137–151, 2013. @article{Stites2013, In the present study, we investigated whether the semantic content of a dialogue description can affect reading times on an embedded quote, to determine whether the speed at which a character is described as saying a quote influences how quickly it is read. Yao and Scheepers (Cognition, 121:447-453, 2011) previously found that readers were faster to read direct quotes when the preceding context implied that the talker generally spoke quickly, an effect attributed to perceptual simulation of talker speed. For the present study, we manipulated the speed of a physical action performed by the speaker independently from character talking rate to determine whether these sources have separable effects on perceptual simulation of a direct quote. The results showed that readers spent less time reading direct quotes described as being said quickly, as compared to those described as being said slowly (e.g., John walked/bolted into the room and said energetically/nonchalantly, "I finally found my car keys."), an effect that was not present when a nearly identical phrase was presented as an indirect quote (e.g., John . . . said energetically that he finally found his car keys.). The speed of the character's movement did not affect direct-quote reading times. Furthermore, fast adverbs were themselves read significantly faster than slow adverbs, an effect that we attribute to implicit effects on the eye movement program stemming from automatically activated semantic features of the adverbs. Our findings add to the literature on perceptual simulation by showing that these effects can be instantiated with only a single adverb and are strong enough to override the effects of global sentence speed. |
Kyeong Jin Tark; Clayton E. Curtis Deciding where to look based on visual, auditory, and semantic information Journal Article In: Brain Research, vol. 1525, pp. 26–38, 2013. @article{Tark2013, Neurons in the dorsal frontal and parietal cortex are thought to transform incoming visual signals into the spatial goals of saccades, a process known as target selection. Here, we used functional magnetic resonance imaging (fMRI) to test how target selection may generalize beyond visual transformations when auditory and semantic information is used for selection. We compared activity in the frontal and parietal cortex when subjects made visually, aurally, and semantically guided saccades to one of four differently colored dots. Selection was based on a visual cue (i.e., one of the dots blinked), an auditory cue (i.e., a white noise burst was emitted at one of the dots' locations), or a semantic cue (i.e., the color of one of the dots was spoken). Although neural responses in frontal and parietal cortex were robust, they were non-specific with regard to the type of information used for target selection. Decoders, however, trained on the patterns of activity in the intraparietal sulcus could classify both the type of cue used for target selection and the direction of the saccade. Therefore, we find evidence that the posterior parietal cortex is involved in transforming multimodal inputs into general spatial representations that can be used to guide saccades. |
Benjamin W. Tatler; Yoriko Hirose; Sarah K. Finnegan; Riina Pievilainen; Clare Kirtley; Alan Kennedy Priorities for selection and representation in natural tasks Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 368, pp. 1–10, 2013. @article{Tatler2013, Selecting and remembering visual information is an active and competitive process. In natural environments, representations are tightly coupled to task. Objects that are task-relevant are remembered better due to a combination of increased selection for fixation and strategic control of encoding and/or retaining viewed information. However, it is not understood how physically manipulating objects when performing a natural task influences priorities for selection and memory. In this study, we compare priorities for selection and memory when actively engaged in a natural task with first-person observation of the same object manipulations. Results suggest that active manipulation of a task-relevant object results in a specific prioritization for object position information compared with other properties and compared with action observation of the same manipulations. Experiment 2 confirms that this spatial prioritization is likely to arise from manipulation rather than differences in spatial representation in real environments and the movies used for action observation. Thus, our findings imply that physical manipulation of task relevant objects results in a specific prioritization of spatial information about task-relevant objects, possibly coupled with strategic de-prioritization of colour memory for irrelevant objects. |
Shuichiro Taya; David Windridge; Magda Osman Trained eyes: Experience promotes adaptive gaze control in dynamic and uncertain visual environments Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e71371, 2013. @article{Taya2013, Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around 'events' critical to the scene context (i.e., hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye-movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e., ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. |
Yasuo Terao; Hideki Fukuda; Yuichiro Shirota; Akihiro Yugeta; Masayuki Yoshioka; Masahiko Suzuki; Ritsuko Hanajima; Yoshiko Nomura; Masaya Segawa; Shoji Tsuji; Yoshikazu Ugawa Deterioration of horizontal saccades in progressive supranuclear palsy Journal Article In: Clinical Neurophysiology, vol. 124, no. 2, pp. 354–363, 2013. @article{Terao2013, Objective: To investigate horizontal saccade changes according to disease stage in patients with progressive supranuclear palsy (PSP). Methods: We studied visually and memory guided saccades (VGS and MGS) in 36 PSP patients at various disease stages, and compared results with those in 66 Parkinson's disease (PD) patients and 58 age-matched normal controls. Results: Both vertical and horizontal saccades were affected in PSP patients, usually manifesting as "slow saccades" but sometimes as a sequence of small amplitude saccades with relatively well preserved velocities. Disease progression caused saccade amplitude reduction in PSP but not PD patients. In contrast, VGS and MGS latencies were comparable between PSP and PD patients, as were the frequencies of saccades to cue, suggesting that voluntary initiation and inhibitory control of saccades are similar in both disorders. Hypermetria was rarely observed in PSP patients with cerebellar ataxia (PSPc patients). Conclusions: The progressively reduced accuracy of horizontal saccades in PSP suggests a brainstem oculomotor pathology that includes the superior colliculus and/or paramedian pontine reticular formation. In contrast, the functioning of the oculomotor system above the brainstem was similar between PSP and PD patients. Significance: These findings may reflect a brainstem oculomotor pathology. |
Lore Thaler; Alexander C. Schütz; Melvyn A. Goodale; Karl R. Gegenfurtner What is the best fixation target? The effect of target shape on stability of fixational eye movements Journal Article In: Vision Research, vol. 76, pp. 31–42, 2013. @article{Thaler2013, People can direct their gaze at a visual target for extended periods of time. Yet, even during fixation the eyes make small, involuntary movements (e.g. tremor, drift, and microsaccades). This can be a problem during experiments that require stable fixation. The shape of a fixation target can be easily manipulated in the context of many experimental paradigms. Thus, from a purely methodological point of view, it would be good to know if there was a particular shape of a fixation target that minimizes involuntary eye movements during fixation, because this shape could then be used in experiments that require stable fixation. Based on this methodological motivation, the current experiments tested if the shape of a fixation target can be used to reduce eye movements during fixation. In two separate experiments subjects directed their gaze at a fixation target for 17 s on each trial. The shape of the fixation target varied from trial to trial and was drawn from a set of seven shapes, the use of which has been frequently reported in the literature. To determine stability of fixation we computed spatial dispersion and microsaccade rate. We found that only a target shape which looks like a combination of bulls eye and cross hair resulted in combined low dispersion and microsaccade rate. We recommend the combination of bulls eye and cross hair as fixation target shape for experiments that require stable fixation. |
Tom Theys; Pierpaolo Pani; Johannes van Loon; Jan Goffin; Peter Janssen Three-dimensional shape coding in grasping circuits: A comparison between the anterior intraparietal area and ventral premotor area F5a Journal Article In: Journal of Cognitive Neuroscience, vol. 25, no. 3, pp. 352–364, 2013. @article{Theys2013, Depth information is necessary for adjusting the hand to the three-dimensional (3-D) shape of an object to grasp it. The transformation of visual information into appropriate distal motor commands is critically dependent on the anterior intraparietal area (AIP) and the ventral premotor cortex (area F5), particularly the F5p sector. Recent studies have demonstrated that both AIP and the F5a sector of the ventral premotor cortex contain neurons that respond selectively to disparity-defined 3-D shape. To investigate the neural coding of 3-D shape and the behavioral role of 3-D shape-selective neurons in these two areas, we recorded single-cell activity in AIP and F5a during passive fixation of curved surfaces and during grasping of real-world objects. Similar to those in AIP, F5a neurons were either first- or second-order disparity selective, frequently showed selectivity for discrete approximations of smoothly curved surfaces that contained disparity discontinuities, and exhibited mostly monotonic tuning for the degree of disparity variation. Furthermore, in both areas, 3-D shape-selective neurons were colocalized with neurons that were active during grasping of real-world objects. Thus, area AIP and F5a contain highly similar representations of 3-D shape, which is consistent with the proposed transfer of object information from AIP to the motor system through the ventral premotor cortex. |
Charmaine L. Thomas; Lauren D. Goegan; Kristin R. Newman; Jody E. Arndt; Christopher R. Sears Attention to threat images in individuals with clinical and subthreshold symptoms of post-traumatic stress disorder Journal Article In: Journal of Anxiety Disorders, vol. 27, no. 5, pp. 447–455, 2013. @article{Thomas2013, Attention to general and trauma-relevant threat was examined in individuals with clinical and subthreshold symptoms of post-traumatic stress disorder (PTSD). Participants' eye gaze was tracked and recorded while they viewed sets of four images over a 6-s presentation (one negative, positive, and neutral image, and either a general threat image or a trauma-relevant threat image). Two trauma-exposed groups (a clinical and a subthreshold PTSD symptom group) were compared to a non-trauma-exposed group. Both the clinical and subthreshold PTSD symptom groups attended to trauma-relevant threat images more than the no-trauma-exposure group, whereas there were no group differences for general threat images. A time course analysis of attention to trauma-relevant threat images revealed different attentional profiles for the trauma-exposed groups. Participants with clinical PTSD symptoms exhibited immediate heightened attention to the images relative to participants with no-trauma-exposure, whereas participants with subthreshold PTSD symptoms did not. In addition, participants with subthreshold PTSD symptoms attended to trauma-relevant threat images throughout the 6-s presentation, whereas participants with clinical symptoms of PTSD exhibited evidence of avoidance. The theoretical and clinical implications of these distinct attentional profiles are discussed. |
Laura E. Thomas Spatial working memory is necessary for actions to guide thought Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 6, pp. 1974–1981, 2013. @article{Thomas2013a, Directed actions can play a causal role in cognition, shaping thought processes. What drives this cross-talk between action and thought? I investigated the hypothesis that representations in spatial working memory mediate interactions between directed actions and problem solving. Participants attempted to solve an insight problem while occasionally either moving their eyes in a pattern embodying the problem's solution or maintaining fixation. They simultaneously held either a spatial or verbal stimulus in working memory. Participants who moved their eyes in a pattern that embodied the solution were more likely to solve the problem, but only while also performing a verbal working memory task. Embodied guidance of insight was eliminated when participants were instead engaged in a spatial working memory task while moving their eyes, implying that loading spatial working memory prevented movement representations from influencing problem solving. These results point to spatial working memory as a mechanism driving embodied guidance of insight, suggesting that actions do not automatically influence problem solving. Instead, cross-talk between action and higher order cognition requires representations in spatial working memory. |
Yoshihito Shigihara; Semir Zeki Parallelism in the brain's visual form system Journal Article In: European Journal of Neuroscience, vol. 38, no. 12, pp. 3712–3720, 2013. @article{Shigihara2013, We used magnetoencephalography (MEG) to determine whether increasingly complex forms constituted from the same elements (lines) activate visual cortex with the same or different latencies. Twenty right-handed healthy adult volunteers viewed two different forms, lines and rhomboids, representing two levels of complexity. Our results showed that the earliest responses produced by lines and rhomboids in both striate and prestriate cortex had similar peak latencies (40 ms) although lines produced stronger responses than rhomboids. Dynamic causal modeling (DCM) showed that a parallel multiple input model to striate and prestriate cortex accounts best for the MEG response data. These results lead us to conclude that the perceptual hierarchy between lines and rhomboids is not mirrored by a temporal hierarchy in latency of activation and thus that a strategy of parallel processing appears to be used to construct forms, without implying that a hierarchical strategy may not be used in separate visual areas, in parallel. |
Masanori Shimono; Kazuhisa Niki Global mapping of the whole-brain network underlining binocular rivalry Journal Article In: Brain Connectivity, vol. 3, no. 2, pp. 212–221, 2013. @article{Shimono2013, We investigated how the structure of the brain network relates to the stability of perceptual alternation in binocular rivalry. Historically, binocular rivalry has provided important new insights to our understandings in neuroscience. Although various relationships between local regions of the human brain structure and perceptual switching phenomena have been shown in previous research, the global organization of the human brain structural network relating to this phenomenon has not yet been addressed. To approach this issue, we reconstructed fiber-tract bundles using diffusion tensor imaging and then evaluated the correlations between the speeds of perceptual alternation and fractional anisotropy (FA) values in each fiber-tract bundle interconnecting 84 brain regions. The resulting comparison revealed that the global organization of the structural brain network showed both positive and negative correlations between the speeds of perceptual alternation and the FA values. First, the connections between the subcortical regions were stably negatively correlated. Second, the connections between the cortical regions mainly showed positive correlations. Third, almost all other cortical connections that showed negative correlations were located in one central cluster of the subcortical connections. This contrast between the contribution of the cortical regions to destabilization and the contribution of the subcortical regions to stabilization of perceptual alternation provides important information as to how the global architecture of the brain structural network supports the phenomenon of binocular rivalry. |
Alisha Siebold; Wieske van Zoest; Martijn Meeter; Mieke Donk In defense of the salience map: Salience rather than visibility determines selection Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 6, pp. 1516–1524, 2013. @article{Siebold2013, The aim of the present study was to investigate whether time-dependent biases of oculomotor selection as typically observed during visual search are better accounted for by an absolute-processing-speed account (J. P. de Vries, I. T. C. Hooge, M. A. Wiering, & F. A. J. Verstraten, 2011, How longer saccade latencies lead to a competition for salience. Psychological Science, 22, 916-923) or a relative-salience account (e.g., M. Donk, & W. van Zoest, 2008, Effects of salience are short-lived. Psychological Science, 19, 733-739; M. Donk & W. van Zoest, 2011, No control in orientation search: The effects of instruction on oculomotor selection in visual search. Vision Research, 51, 2156-2166). In order to test these two models, we performed an experiment in which participants were instructed to make a speeded eye movement to either of two orientation singletons presented among a homogeneous set of vertically oriented background lines. One singleton, the fixed singleton, remained identical across conditions, whereas the other singleton, the variable singleton, varied such that its orientation contrast relative to the background lines was either smaller or larger than that of the fixed singleton. The results showed that the proportion of eye movements directed toward the fixed singleton varied substantially depending on the orientation contrast of the variable singleton. A model assuming selection behavior to be determined by relative salience provided a better fit to the individual data than the absolute processing speed model. 
These findings suggest that relative salience rather than the visibility of an element is crucial in determining temporal variations in oculomotor selection behavior and that an explanation of visual selection behavior is insufficient without the concept of a salience map. |
Massimo Silvetti; Ruth Seurinck; Marlies E. van Bochove; Tom Verguts The influence of the noradrenergic system on optimal control of neural plasticity Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 160, 2013. @article{Silvetti2013, Decision making under uncertainty is challenging for any autonomous agent. The challenge increases when the environment's stochastic properties change over time, i.e., when the environment is volatile. In order to efficiently adapt to volatile environments, agents must primarily rely on recent outcomes to quickly change their decision strategies; in other words, they need to increase their knowledge plasticity. On the contrary, in stable environments, knowledge stability must be preferred to preserve useful information against noise. Here we propose that in the mammalian brain, the locus coeruleus (LC) is one of the nuclei involved in volatility estimation and in the subsequent control of neural plasticity. During a reinforcement learning task, LC activation, measured by means of pupil diameter, coded both for environmental volatility and learning rate. We hypothesize that LC could be responsible, through noradrenergic modulation, for adaptations to optimize decision making in volatile environments. We also suggest a computational model of the interaction between the anterior cingulate cortex (ACC) and LC for volatility estimation. |
Timothy J. Slattery; Keith Rayner Effects of intraword and interword spacing on eye movements during reading: Exploring the optimal use of space in a line of text Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 6, pp. 1275–1292, 2013. @article{Slattery2013, Two eye movement experiments investigated intraword spacing (the space between letters within words) and interword spacing (the space between words) to explore the influence these variables have on eye movement control during reading. Both variables are important factors in determining the optimal use of space in a line of text, and fonts differ widely in how they employ these spaces. Prior research suggests that the proximity of flanking letters influences the identification of a central letter via lateral inhibition or crowding. If so, decrements in intraword spacing may produce inhibition in word processing. Still other research suggests that increases in intraword spacing can disrupt the integrity of word units. In English, interword spacing has a large influence on word segmentation and is important for saccade target selection. The results indicate an interplay between intra- and interword spacing that influences a font's readability. Additionally, these studies highlight the importance of word segmentation processes and have implications for the nature of lexical processing (serial vs. parallel). |
Timothy J. Slattery; Patrick Sturt; Kiel Christianson; Masaya Yoshida; Fernanda Ferreira Lingering misinterpretations of garden path sentences arise from competing syntactic representations Journal Article In: Journal of Memory and Language, vol. 69, no. 2, pp. 104–120, 2013. @article{Slattery2013a, Recent work has suggested that readers' initial and incorrect interpretation of temporarily ambiguous ("garden path") sentences (e.g., Christianson, Hollingworth, Halliwell, & Ferreira, 2001) sometimes lingers even after attempts at reanalysis. These lingering effects have been attributed to incomplete reanalysis. In two eye tracking experiments, we distinguish between two types of incompleteness: the language comprehension system might not build a faithful syntactic structure, or it might not fully erase the structure built during an initial misparse. The first experiment used reflexive binding and the gender mismatch paradigm to show that a complete and faithful structure is built following processing of the garden-path. The second experiment used two-sentence texts to examine the extent to which the garden-path meaning from the first sentence interferes with reading of the second. Together, the results indicate that misinterpretation effects are attributable not to failure in building a proper structure, but rather to failure in cleaning up all remnants of earlier attempts to build that syntactic representation. |
Yasuhiro Seya; Hidetoshi Nakayasu; Tadasu Yagi Useful field of view in simulated driving: Reaction times and eye movements of drivers Journal Article In: i-Perception, vol. 4, no. 4, pp. 285–298, 2013. @article{Seya2013, To examine the spatial distribution of a useful field of view (UFOV) in driving, reaction times (RTs) and eye movements were measured in simulated driving. In the experiment, a normal or mirror-reversed letter "E" was presented on driving images with different eccentricities and directions from the current gaze position. The results showed significantly slower RTs in the upper and upper left directions than in the other directions. The RTs were significantly slower in the left directions than in the right directions. These results suggest that the UFOV in driving may be asymmetrical among the meridians in the visual field. |
Timothy J. Shakespeare; Keir X. X. Yong; Chris Frost; Lois G. Kim; Elizabeth K. Warrington; Sebastian J. Crutch Scene perception in posterior cortical atrophy: Categorization, description and fixation patterns Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 621, 2013. @article{Shakespeare2013, Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. 
Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes. |
Diego E. Shalom; Maximiliano G. Sousa Serro; Maximiliano Giaconia; Luis M. Martinez; Andres Rieznik; Mariano Sigman Choosing in freedom or forced to choose? Introspective blindness to psychological forcing in stage-magic Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e58254, 2013. @article{Shalom2013, We investigated individuals' ability to identify whether choices were made freely or forced by external parameters. We capitalized on magical setups where the notion of psychological forcing constitutes a well-trodden path. In live stage magic, a magician guessed cards from spectators while inquiring how freely they thought they had made the choice. Our data showed a marked blindness in the introspection of free choice. Spectators assigned comparable ratings when choosing the card that the magician deliberately forced on them compared to any other card, even in classical forcing, where the magician literally hands a card to the participant. This observation was paralleled by a laboratory experiment where we observed modest changes in subjective reports by factors with drastic effects on choice. Pupil dilatation, which is known to tag slow cognitive events related to memory and attention, constitutes an efficient fingerprint to index subjective and objective aspects of choice. |
Diego E. Shalom; Mariano Sigman Freedom and rules in human sequential performance: A refractory period in eye-hand coordination Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–13, 2013. @article{Shalom2013a, In action sequences, the eyes and hands ought to be coordinated in precise ways. The mechanisms governing the architecture of encoding and action of several effectors remain unknown. Here we study hand and eye movements in a sequential task in which letters have to be typed while they move down through the screen. We observe a strict refractory period of about 200 ms between the initiation of manual and eye movements. Subjects do not initiate a saccade just after typing and do not type just after making the saccade. This refractory period is observed ubiquitously in every subject and in each step of the sequential task, even when keystrokes and saccades correspond to different items of the sequence, for instance when a subject types a letter that has been gazed at in a preceding fixation. These results extend classic findings of dual-task paradigms, of a bottleneck tightly locked to the response selection process, to unbounded serial routines. Interestingly, while the bottleneck is seemingly inevitable, better performing subjects can adopt a strategy to minimize the cost of the bottleneck, overlapping the refractory period with the encoding of the next item in the sequence. |
K. M. Sharika; Sebastiaan F. W. Neggers; Tjerk P. Gutteling; Stefan Van der Stigchel; Hendrik Chris Dijkerman; A. Murthy Proactive control of sequential saccades in the human supplementary eye field Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 14, pp. E1311–E1320, 2013. @article{Sharika2013, Our ability to regulate behavior based on past experience has thus far been examined using single movements. However, natural behavior typically involves a sequence of movements. Here, we examined the effect of previous trial type on the concurrent planning of sequential saccades using a unique paradigm. The task consisted of two trial types: no-shift trials, which implicitly encouraged the concurrent preparation of the second saccade in a subsequent trial; and target-shift trials, which implicitly discouraged the same in the next trial. Using the intersaccadic interval as an index of concurrent planning, we found evidence for context-based preparation of sequential saccades. We also used functional MRI-guided, single-pulse, transcranial magnetic stimulation on human subjects to test the role of the supplementary eye field (SEF) in the proactive control of sequential eye movements. Results showed that (i) stimulating the SEF in the previous trial disrupted the previous trial type-based preparation of the second saccade in the nonstimulated current trial, (ii) stimulating the SEF in the current trial rectified the disruptive effect caused by stimulation in the previous trial, and (iii) stimulating the SEF facilitated the preparation of second saccades based on previous trial type even when the previous trial was not stimulated. Taken together, we show how the human SEF is causally involved in proactive preparation of sequential saccades. |
Madeleine E. Sharp; Jayalakshmi Viswanathan; Martin J. McKeown; Silke Appel-Cresswell; A. Jon Stoessl; Jason J. S. Barton Decisions under risk in Parkinson's disease: Preserved evaluation of probability and magnitude Journal Article In: Neuropsychologia, vol. 51, pp. 2679–2689, 2013. @article{Sharp2013, Introduction: Unmedicated Parkinson's disease patients tend to be risk-averse while dopaminergic treatment causes a tendency to take risks. While dopamine agonists may result in clinically apparent impulse control disorders, treatment with levodopa also causes shift in behaviour associated with an enhanced response to rewards. Two important determinants in decision-making are how subjects perceive the magnitude and probability of outcomes. Our objective was to determine if patients with Parkinson's disease on or off levodopa showed differences in their perception of value when making decisions under risk. Methods: The Vancouver Gambling task presents subjects with a choice between one prospect with larger outcome and a second with higher probability. Eighteen age-matched controls and eighteen patients with Parkinson's disease before and after levodopa were tested. In the Gain Phase subjects chose between one prospect with higher probability and another with larger reward to maximize their gains. In the Loss Phase, subjects played to minimize their losses. Results: Patients with Parkinson's disease, on or off levodopa, were similar to controls when evaluating gains. However, in the Loss Phase before levodopa, they were more likely to avoid the prospect with lower probability but larger loss, as indicated by the steeper slope of their group psychometric function (t(24) = 2.21). |
Tracey A. Shaw; Melanie A. Porter Emotion recognition and visual-scan paths in fragile X syndrome Journal Article In: Journal of Autism and Developmental Disorders, vol. 43, no. 5, pp. 1119–1139, 2013. @article{Shaw2013, This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored. Results indicated that, compared to both control groups, the FXS group displayed specific emotion recognition deficits for angry and neutral (but not happy or fearful) facial expressions. Despite these evident emotion recognition deficits, the visual scanning of emotional faces was found to be at developmentally appropriate levels in the FXS group. Significant relationships were also observed between visual scan-paths, emotion recognition performance and symptomology in the FXS group. |
Naveed A. Sheikh; Debra A. Titone Sensorimotor and linguistic information attenuate emotional word processing benefits: An eye-movement study Journal Article In: Emotion, vol. 13, no. 6, pp. 1107–1121, 2013. @article{Sheikh2013, Recent studies have reported that emotional words are processed faster than neutral words, though emotional benefits may not depend solely on words' emotionality. Drawing on an embodied approach to representation, we examined interactions between emotional, sensorimotor, and linguistic sources of information for target words embedded in sentential contexts. Using eye-movement measures for 43 native English speakers, we observed emotional benefits for negative and positive words and sensorimotor benefits for words high in concreteness, but only when target words were low in frequency. Moreover, emotional words were maximally faster than neutral words when words were low in concreteness (i.e., highly abstract), and sensorimotor benefits occurred only when words were not emotionally charged (i.e., emotionally neutral). Furthermore, emotional and concreteness benefits were attenuated by individual differences that attenuate and amplify emotional and sensorimotor information, respectively. Our results suggest that behavior is functionally modulated by embodied information (i.e., emotional and sensorimotor) when linguistic contributions to representation are not enhanced by high frequency. Furthermore, emotional benefits are maximal when words are not already embodied by sensorimotor contributions to representation (and vice versa). Our work is consistent with recent studies that have suggested that abstract words are grounded in emotional experiences, analogous to how concrete words are grounded in sensorimotor experiences. |
Jing Shen; Diana Deutsch; Keith Rayner On-line perception of Mandarin Tones 2 and 3: Evidence from eye movements Journal Article In: The Journal of the Acoustical Society of America, vol. 133, no. 5, pp. 3016–3029, 2013. @article{Shen2013, Using the visual world paradigm, the present study investigated on-line processing of fine-grained pitch information prior to lexical access in a tone language; specifically how lexical tone perception of Mandarin Tones 2 and 3 was influenced by the pitch height of the tone at onset, turning point, and offset. Native speakers of Mandarin listened to manipulated tone tokens and selected the corresponding word from four visually presented words (objects in Experiment 1 and characters in Experiment 2) while their eye movements were monitored. The results showed that 87% of ultimate tone judgments were made according to offset pitch height. Tokens with high offset pitch were identified as Tone 2, and low offset pitch as Tone 3. A low turning point pitch served as a pivotal cue for Tone 3, and prompted more eye fixations on Tone 3 items, until the offset pitch directed significantly more fixations to the final tone choice. The findings support the view that lexical tone perception is an incremental process, in which pitch height at critical points serves as an important cue. |
Heather Sheridan; Keith Rayner; Eyal M. Reingold Unsegmented text delays word identification: Evidence from a survival analysis of fixation durations Journal Article In: Visual Cognition, vol. 21, no. 1, pp. 38–60, 2013. @article{Sheridan2013b, The present study employed distributional analyses of fixation times to examine the impact of removing spaces between words during reading. Specifically, we presented high and low frequency target words in a normal text condition that contained spaces (e.g., "John decided to sell the table in the garage sale") and in an unsegmented text condition that contained random numbers instead of spaces (e.g., "John4decided8to5sell9the7table2in3the9garage6sale"). The unsegmented text condition produced larger word frequency effects relative to the normal text condition for the gaze duration and total time measures (for similar findings, see Rayner, Fischer, & Pollatsek, 1998), which indicates that removing spaces can impact the word identification stage of reading. To further examine the effect of spacing on word identification, we used distributional analyses of first-fixation durations to contrast the time course of word frequency effects in the normal versus the unsegmented text conditions. In replication of prior findings (Reingold, Reichle, Glaholt, & Sheridan, 2012; Staub, White, Drieghe, Hollway, & Rayner, 2010), ex-Gaussian fitting revealed that the word frequency variable impacted both the shift and the skew of the distributions, and this pattern of results occurred for both the normal and unsegmented text conditions. In addition, a survival analysis technique revealed a later time course of word frequency effects in the unsegmented relative to the normal condition, such that the earliest discernible influence of word frequency was 112 ms from the start of fixation in the normal text condition, and 152 ms in the unsegmented text condition. 
This delay in the temporal onset of word frequency effects in the unsegmented text condition strongly suggests that removing spaces delays the word identification stage of reading. Possible underlying mechanisms are discussed, including lateral masking and word segmentation. |
Heather Sheridan; Eyal M. Reingold A further examination of the lexical-processing stages hypothesized by the E-Z Reader model Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 3, pp. 407–414, 2013. @article{Sheridan2013, Participants' eye movements were monitored while they read sentences in which high- and low-frequency target words were presented normally (i.e., the normal condition) or with either reduced stimulus quality (i.e., the faint condition) or alternating lower- and uppercase letters (i.e., the case-alternated condition). Both the stimulus quality and case alternation manipulations interacted with word frequency for the gaze duration measure, such that the magnitude of word frequency effects was increased relative to the normal condition. However, stimulus quality (but not case alternation) interacted with word frequency for the early fixation time measures (i.e., first fixation, single fixation), whereas case alternation (but not stimulus quality) interacted with word frequency for the later fixation time measures (i.e., total time, go-past time). We interpret this pattern of results as evidence that stimulus quality influences an earlier stage of lexical processing than does case alternation, and we discuss the implications of our results for models of eye movement control during reading. |
Heather Sheridan; Eyal M. Reingold The mechanisms and boundary conditions of the Einstellung Effect in chess: Evidence from eye movements Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75796, 2013. @article{Sheridan2013a, In a wide range of problem-solving settings, the presence of a familiar solution can block the discovery of better solutions (i.e., the Einstellung effect). To investigate this effect, we monitored the eye movements of expert and novice chess players while they solved chess problems that contained a familiar move (i.e., the Einstellung move), as well as an optimal move that was located in a different region of the board. When the Einstellung move was an advantageous (but suboptimal) move, both the expert and novice chess players who chose the Einstellung move continued to look at this move throughout the trial, whereas the subset of expert players who chose the optimal move were able to gradually disengage their attention from the Einstellung move. However, when the Einstellung move was a blunder, all of the experts and the majority of the novices were able to avoid selecting the Einstellung move, and both the experts and novices gradually disengaged their attention from the Einstellung move. These findings shed light on the boundary conditions of the Einstellung effect, and provide convergent evidence for the conclusion of Bilalić, McLeod, and Gobet (2008) that the Einstellung effect operates by biasing attention towards problem features that are associated with the familiar solution rather than the optimal solution. |
Viral Sheth; Irene Gottlob; Sarim Mohammad; Rebecca J. McLean; Gail D. E. Maconachie; Anil Kumar; Christopher Degg; Frank A. Proudlock Diagnostic potential of iris cross-sectional imaging in albinism using optical coherence tomography Journal Article In: Ophthalmology, vol. 120, no. 10, pp. 2082–2090, 2013. @article{Sheth2013, Purpose: To characterize in vivo anatomic abnormalities of the iris in albinism compared with healthy controls using anterior segment optical coherence tomography (AS-OCT) and to explore the diagnostic potential of this technique for albinism. We also investigated the relationship between iris abnormalities and other phenotypical features of albinism. Design: Prospective cross-sectional study. Participants: A total of 55 individuals with albinism and 45 healthy controls. Methods: We acquired 4.37×4.37-mm volumetric scans (743 A-scans, 50 B-scans) of the nasal and temporal iris in both eyes using AS-OCT (3-μm axial resolution). Iris layers were segmented and thicknesses were measured using ImageJ software. Iris transillumination was graded using Summers and colleagues' classification. Retinal OCT, eye movement recordings, best-corrected visual acuity (BCVA), visual evoked potential (VEP), and grading of skin and hair pigmentation were used to quantify other phenotypical features associated with albinism. Main Outcome Measures: Iris AS-OCT measurements included (1) total iris thickness, (2) stroma/anterior border (SAB) layer thickness, and (3) posterior epithelial layer (PEL) thickness. Correlation with other phenotypical measurements, including (1) iris transillumination grading, (2) retinal layer measurements at the fovea, (3) nystagmus intensity, (4) BCVA, (5) VEP asymmetry, (6) skin pigmentation, and (7) hair pigmentation (of head hair, lashes, and brows). 
Results: The mean iris thickness was 10.7% thicker in controls (379.3±44.0 μm) compared with the albinism group (342.5±52.6 μm; P < 0.001), SAB layers were 5.8% thicker in controls (315.1±43.8 μm) compared with the albinism group (297.7±50.0 μm; P=0.044), and PEL was 44.0% thicker in controls (64.1±11.7 μm) compared with the albinism group (44.5±13.9 μm; P < 0.0001). The most ciliary quartile of the PEL yielded a sensitivity of 85% and specificity of 78% for detecting albinism. Phenotypic features of albinism, such as skin and hair pigmentation, BCVA, and nystagmus intensity, were significantly correlated to AS-OCT iris thickness measurements. Conclusions: We have characterized in vivo abnormalities of the iris associated with albinism for the first time and show that PEL thickness is particularly affected. We demonstrate that PEL thickness has diagnostic potential for detecting iris abnormalities in albinism. Anterior segment OCT iris measurements are significantly correlated to BCVA and nystagmus intensity, in contrast to iris transillumination grading measurements, which were not. |
Veronica Shi; Jie Cui; Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde Effect of stimulus width on simultaneous contrast Journal Article In: PeerJ, vol. 1, pp. 1–13, 2013. @article{Shi2013, Perceived brightness of a stimulus depends on the background against which the stimulus is set, a phenomenon known as simultaneous contrast. For instance, the same gray stimulus can look light against a black background or dark against a white background. Here we quantified the perceptual strength of simultaneous contrast as a function of stimulus width. Previous studies have reported that wider stimuli result in weaker simultaneous contrast, whereas narrower stimuli result in stronger simultaneous contrast. However, no previous research has quantified this relationship. Our results show a logarithmic relationship between stimulus width and perceived brightness. This relationship is well matched by the normalized output of a Difference-of-Gaussians (DOG) filter applied to stimuli of varied widths. |
Tim J. Smith; Parag K. Mital Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes Journal Article In: Journal of Vision, vol. 13, no. 8, pp. 1–24, 2013. @article{Smith2013, Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. 
This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion. |
Adam C. Snyder; Michael J. Morais; Matthew A. Smith Variance in population firing rate as a measure of slow time-scale correlation Journal Article In: Frontiers in Computational Neuroscience, vol. 7, pp. 176, 2013. @article{Snyder2013, Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research. |
Hiroyuki Sogo GazeParser: An open-source and multiplatform library for low-cost eye tracking and analysis Journal Article In: Behavior Research Methods, vol. 45, no. 3, pp. 684–695, 2013. @article{Sogo2013, Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with the PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported on performance tests of GazeParser. These showed that the means and standard deviations for errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on the participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results showed that GazeParser demonstrates adequate performance for use in psychological experiments. |
Maria Solé Puig; Laura Pérez Zapata; J. Antonio Aznar-Casanova; Hans Supèr A role of eye vergence in covert attention Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52955, 2013. @article{SolePuig2013, Covert spatial attention produces biases in perceptual and neural responses in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. Here we report the relation between fixational eye movements, namely eye vergence, and covert attention. Visual stimuli modulate the angle of eye vergence as a function of their ability to capture attention. This illustrates the relation between eye vergence and bottom-up attention. In visual and auditory cue/no-cue paradigms, the angle of vergence is greater in the cue condition than in the no-cue condition. This shows a top-down attention component. In conclusion, our observations reveal a close link between covert attention and modulation in eye vergence during eye fixation. Our study suggests a basis for the use of eye vergence as a tool for measuring attention and may provide new insights into attention and perceptual disorders. |
Chen Song; D. Samuel Schwarzkopf; Antoine Lutti; Baojuan Li; Ryota Kanai; Geraint Rees Effective connectivity within human primary visual cortex predicts interindividual diversity in illusory perception Journal Article In: Journal of Neuroscience, vol. 33, no. 48, pp. 18781–18791, 2013. @article{Song2013c, Visual perception depends strongly on spatial context. A classic example is the tilt illusion where the perceived orientation of a central stimulus differs from its physical orientation when surrounded by tilted spatial contexts. Here we show that such contextual modulation of orientation perception exhibits trait-like interindividual diversity that correlates with interindividual differences in effective connectivity within human primary visual cortex. We found that the degree to which spatial contexts induced illusory orientation perception, namely, the magnitude of the tilt illusion, varied across healthy human adults in a trait-like fashion independent of stimulus size or contrast. Parallel to contextual modulation of orientation perception, the presence of spatial contexts affected effective connectivity within human primary visual cortex between peripheral and foveal representations that responded to spatial context and central stimulus, respectively. Importantly, this effective connectivity from peripheral to foveal primary visual cortex correlated with interindividual differences in the magnitude of the tilt illusion. Moreover, this correlation with illusion perception was observed for effective connectivity under tilted contextual stimulation but not for that under iso-oriented contextual stimulation, suggesting that it reflected the impact of orientation-dependent intra-areal connections. Our findings revealed an interindividual correlation between intra-areal connectivity within primary visual cortex and contextual influence on orientation perception. 
This neurophysiological-perceptual link provides empirical evidence for theoretical proposals that intra-areal connections in early visual cortices are involved in contextual modulation of visual perception. |
Guanghan Song; Denis Pellerin; Lionel Granjon Different types of sounds influence gaze differently in videos Journal Article In: Journal of Eye Movement Research, vol. 6, no. 4, pp. 1–13, 2013. @article{Song2013, This paper presents an analysis of the effect of different types of sounds on visual gaze when a person is looking freely at videos, which would be helpful to predict eye position. In order to test the effect of sound, an audio-visual experiment was designed with two groups of participants, with audio-visual (AV) and visual (V) conditions. By using statistical tools, we analyzed the difference between eye position of participants with AV and V conditions. We observed that the effect of sound is different depending on the kind of sound, and that the classes with human voice (i.e. speech, singer, human noise and singers) have the greatest effect. Furthermore, the results of the distance between sound source and eye position of the group with AV condition, suggested that only particular types of sound attract human eye position to the sound source. Finally, an analysis of the fixation duration between AV and V conditions showed that participants with AV condition move their eyes more frequently than those with V condition. |
Joo-Hyun Song; Patrick Bédard Allocation of attention for dissociated visual and motor goals Journal Article In: Experimental Brain Research, vol. 226, no. 2, pp. 209–219, 2013. @article{Song2013a, In daily life, selecting an object visually is closely intertwined with processing that object as a potential goal for action. Since visual and motor goals are typically identical, it remains unknown whether attention is primarily allocated to a visual target, a motor goal, or both. Here, we dissociated visual and motor goals using a visuomotor adaptation paradigm, in which participants reached toward a visual target using a computer mouse or a stylus pen, while the direction of the cursor was rotated 45° counter-clockwise from the direction of the hand movement. Thus, as visuomotor adaptation was accomplished, the visual target was dissociated from the movement goal. Then, we measured the locus of attention using an attention-demanding rapid serial visual presentation (RSVP) task, in which participants detected a pre-defined visual stimulus among the successive visual stimuli presented on either the visual target, the motor goal, or a neutral control location. We demonstrated that before visuomotor adaptation, participants performed better when the RSVP stream was presented at the visual target than at other locations. However, once visual and motor goals were dissociated following visuomotor adaptation, performance at the visual and motor goals was equated and better than performance at the control location. Therefore, we concluded that attentional resources are allocated both to visual target and motor goals during goal-directed reaching movements. |
Mingli Song; Dapeng Tao; Chun Chen; Jiajun Bu; Yezhou Yang Color-to-gray based on chance of happening preservation Journal Article In: Neurocomputing, vol. 119, pp. 222–231, 2013. @article{Song2013b, It is important to convert color images into grayscale ones for both commercial and scientific applications, such as reducing publication costs and helping color-blind people capture the visual content and semantics of color images. Recently, a dozen algorithms have been developed for color-to-gray conversion. However, none of them considers the visual attention consistency between the color image and the converted grayscale one. Therefore, these methods may fail to convey important visual information from the original color image to the converted grayscale image. Inspired by the Helmholtz principle (Desolneux et al. 2008 [16]) that "we immediately perceive whatever could not happen by chance", we propose a new algorithm for color-to-gray conversion to solve this problem. In particular, we first define the Chance of Happening (CoH) to measure the attentional level of each pixel in a color image. Afterward, natural image statistics are introduced to estimate the CoH of each pixel. In order to preserve the CoH of the color image in the converted grayscale image, we finally cast color-to-gray conversion as a supervised dimension reduction problem and present locally sliced inverse regression, which can be efficiently solved by singular value decomposition. Experiments on both natural images and artificial pictures suggest (1) that the proposed approach makes the CoH of the color image and that of the converted grayscale image consistent and (2) the effectiveness and the efficiency of the proposed approach by comparing with representative baseline algorithms. In addition, it requires no human-computer interactions. |