All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2016 |
Rosa E. Guzzardo Tamargo; Jorge R. Valdés Kroff; Paola E. Dussias Examining the relationship between comprehension and production processes in code-switched language Journal Article In: Journal of Memory and Language, vol. 89, pp. 138–161, 2016. @article{GuzzardoTamargo2016, We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish-English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants' comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. |
Julia Habicht; Birger Kollmeier; Tobias Neher Are experienced hearing aid users faster at grasping the meaning of a sentence than inexperienced users? An eye-tracking study Journal Article In: Trends in Hearing, vol. 20, 2016. @article{Habicht2016, This study assessed the effects of hearing aid (HA) experience on how quickly a participant can grasp the meaning of an acoustic sentence-in-noise stimulus presented together with two similar pictures that either correctly (target) or incorrectly (competitor) depict the meaning conveyed by the sentence. Using an eye tracker, the time taken by the participant to start fixating the target (the processing time) was measured for two levels of linguistic complexity (low vs. high) and three HA conditions: clinical linear amplification (National Acoustic Laboratories-Revised), single-microphone noise reduction with National Acoustic Laboratories-Revised, and linear amplification ensuring a sensation level of ≥15 dB up to at least 4 kHz for the speech material used here. Timed button presses to the target stimuli after the end of the sentences (offline reaction times) were also collected. Groups of experienced (eHA) and inexperienced (iHA) HA users matched in terms of age, hearing loss, and working memory capacity took part (N = 15 each). For the offline reaction times, no effects were found. In contrast, processing times increased with linguistic complexity. Furthermore, for all HA conditions, processing times were longer (poorer) for the iHA group than for the eHA group, despite comparable speech recognition performance. Taken together, these results indicate that processing times are more sensitive to speech processing-related factors than offline reaction times. Furthermore, they support the idea that HA experience positively impacts the ability to process noisy speech quickly, irrespective of the precise gain characteristics. |
Britt Hadar; Joshua E. Skrzypek; Arthur Wingfield; Boaz M. Ben-David Working memory load affects processing time in spoken word recognition: Evidence from eye-movements Journal Article In: Frontiers in Neuroscience, vol. 10, pp. 221, 2016. @article{Hadar2016, In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. |
Ziad M. Hafed; Katarina Stingl; Karl Ulrich Bartz-Schmidt; Florian Gekeler; Eberhart Zrenner Oculomotor behavior of blind patients seeing with a subretinal visual implant Journal Article In: Vision Research, vol. 118, pp. 119–131, 2016. @article{Hafed2016, Electronic implants are able to restore some visual function in blind patients with hereditary retinal degenerations. Subretinal visual implants, such as the CE-approved Retina Implant Alpha IMS (Retina Implant AG, Reutlingen, Germany), sense light through the eye's optics and subsequently stimulate retinal bipolar cells via ~1500 independent pixels to project visual signals to the brain. Because these devices are directly implanted beneath the fovea, they potentially harness the full benefit of eye movements to scan scenes and fixate objects. However, so far, the oculomotor behavior of patients using subretinal implants has not been characterized. Here, we tracked eye movements in two blind patients seeing with a subretinal implant, and we compared them to those of three healthy controls. We presented bright geometric shapes on a dark background, and we asked the patients to report seeing them or not. We found that once the patients visually localized the shapes, they fixated well and exhibited classic oculomotor fixational patterns, including the generation of microsaccades and ocular drifts. Further, we found that a reduced frequency of saccades and microsaccades was correlated with loss of visibility. Last, but not least, gaze location corresponded to the location of the stimulus, and shape and size aspects of the viewed stimulus were reflected by the direction and size of saccades. Our results pave the way for future use of eye tracking in subretinal implant patients, not only to understand their oculomotor behavior, but also to design oculomotor training strategies that can help improve their quality of life. |
Matthew Haigh; Jeffrey S. Wood; Andrew J. Stewart Slippery slope arguments imply opposition to change Journal Article In: Memory & Cognition, vol. 44, no. 5, pp. 819–836, 2016. @article{Haigh2016, Slippery slope arguments (SSAs) of the form if A, then C describe an initial proposal (A) and a predicted, undesirable consequence of this proposal (C) (e.g., “If cannabis is ever legalized, then eventually cocaine will be legalized, too”). Despite SSAs being a common rhetorical device, there has been surprisingly little empirical research into their subjective evaluation and perception. Here, we present evidence that SSAs are interpreted as a form of consequentialist argument, inviting inferences about the speaker's (or writer's) attitudes. Study 1 confirmed the common intuition that a SSA is perceived to be an argument against the initial proposal (A), whereas Study 2 showed that the subjective strength of this inference relates to the subjective undesirability of the predicted consequences (C). Because arguments are rarely made out of context, in Studies 3 and 4 we examined how one important contextual factor, the speaker's known beliefs, influences the perceived coherence, strength, and persuasiveness of a SSA. Using an unobtrusive dependent variable (eye movements during reading), in Study 3 we showed that readers are sensitive to the internal coherence between a speaker's beliefs and the implied meaning of the argument. Finally, Study 4 revealed that this degree of internal coherence influences the perceived strength and persuasiveness of the argument. Together, these data indicate that SSAs are treated as a form of negative consequentialist argument. People infer that the speaker of a SSA opposes the initial proposal; therefore, SSAs are only perceived to be persuasive and conversationally relevant when the speaker's attitudes match this inference. |
Tuomo Häikiö; Raymond Bertram; Jukka Hyönä The hyphen as a syllabification cue in reading bisyllabic and multisyllabic words among Finnish 1st and 2nd graders Journal Article In: Reading and Writing, vol. 29, no. 1, pp. 159–182, 2016. @article{Haeikioe2016, Finnish ABC books present words with hyphens inserted at syllable boundaries. Syllabification by hyphens is abandoned in the 2nd grade for bisyllabic words, but continues for words with three or more syllables. The current eye movement study investigated how and to what extent syllable hyphens in bisyllabic (kah-vi 'cof-fee') and multisyllabic words (haa-ruk-ka 'fork', ap-pel-sii-ni 'orange') affect eye movement behavior and reading speed of Finnish 1st and 2nd graders. Experiment 1 showed that 2nd graders had longer gaze durations, needed more fixations and had longer selective regression path durations for hyphenated than concatenated words. This implies that hyphenated words were difficult to process when first encountered, but also hard to integrate with prior sentence context. The effects were modified by number of syllables and reading skill. That is, the hyphenation effects were larger for multisyllabic than bisyllabic words and larger for more than less proficient readers. Experiment 2 showed the same hyphenation effect for 1st graders reading long multisyllabic words, even with a hyphen that was smaller in size and hence visually less salient. We argue that syllable hyphens prevent reasonably proficient readers from using the most efficient processing route for bi- and multisyllabic words and discuss the possible implications of the results for early Finnish reading instruction. |
E. Hainque; E. Apartis; P. M. Daye Switching between two targets with non-constant velocity profiles reveals shared internal model of target motion Journal Article In: European Journal of Neuroscience, vol. 44, no. 8, pp. 2622–2634, 2016. @article{Hainque2016, Several experiments have shown that smooth pursuit and saccades interact while tracking an object moving across the visual scene. It was proposed two decades ago that the amplitude of saccades triggered during smooth pursuit (“catch-up saccades”) was corrected by a delayed sensory signal to account for the ongoing target displacement during catch-up saccades. However, recent studies used targets with non-constant velocity profiles and suggested that the correction of catch-up saccade amplitude must be done through an internal model of target motion. It is widely accepted that an internal model of target motion is also used by the central nervous system to cancel inherent delays between visual input and smooth pursuit motor output, ensuring accurate tracking of moving targets. Our study proposes a new paradigm in which the tracked target switches unexpectedly from a non-constant periodic velocity profile to a non-constant aperiodic one. Our results confirm the hypothesis that the central nervous system uses an internal model of target motion to correct catch-up saccade amplitude. In addition, we reconcile the delayed sensory signal hypothesis with the internal model of target motion hypothesis and show that a common internal model of target motion is shared within the central nervous system to control smooth pursuit and to correct catch-up saccade amplitude. |
Biao Han; Rufin VanRullen Shape perception enhances perceived contrast: Evidence for excitatory predictive feedback? Journal Article In: Scientific Reports, vol. 6, pp. 22944, 2016. @article{Han2016, Predictive coding theory suggests that predictable responses are "explained away" (i.e., reduced) by feedback. Experimental evidence for feedback inhibition, however, is inconsistent: most neuroimaging studies show reduced activity by predictive feedback, while neurophysiology indicates that most inter-areal cortical feedback is excitatory and targets excitatory neurons. In this study, we asked subjects to judge the luminance of two gray disks containing stimulus outlines: one enabling predictive feedback (a 3D-shape) and one impeding it (random-lines). These outlines were comparable to those used in past neuroimaging studies. All 14 subjects consistently perceived the disk with a 3D-shape stimulus brighter; thus, predictive feedback enhanced perceived contrast. Since early visual cortex activity at the population level has been shown to have a monotonic relationship with subjective contrast perception, we speculate that the perceived contrast enhancement could reflect an increase in neuronal activity. In other words, predictive feedback may have had an excitatory influence on neuronal responses. Control experiments ruled out attention bias, local feature differences and response bias as alternate explanations. |
2015 |
Gareth Carrol; Kathy Conklin; Josephine Guy; Rebekah Scott Processing punctuation and word changes in different editions of prose fiction Journal Article In: Scientific Study of Literature, vol. 5, no. 2, pp. 200–228, 2015. @article{Carrol2015, The digital era has brought with it a shift in the field of literary editing in terms of the amount and kind of textual variation that can reasonably be annotated by editors. However, questions remain about how far readers engage with textual variants, especially minor ones such as small-scale changes to punctuation. In this study we present an eye-tracking experiment investigating reader sensitivity to variations in surface textual features of prose fiction. We monitored eye movements while participants read textual variants from Dickens and James, hypothesising that readers may pay more attention to lexical rather than punctuation changes. We found longer reading times for both types, but only lexical changes also increased reading times for the rest of the sentence. In addition, eye-movement behaviour and conscious ability to report changes were highly correlated. We discuss the implications for how such methods might be applied to questions of “literary” significance and textual processing. |
Giles M. Anderson; Glyn W. Humphreys Top-down expectancy versus bottom-up guidance in search for known color-form conjunctions Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 8, pp. 2622–2639, 2015. @article{Anderson2015, We assessed the effects of pairing a target object with its familiar color on eye movements in visual search, under conditions where the familiar color could or could not be predicted. In Experiment 1 participants searched for a yellow- or purple-colored corn target amongst aubergine distractors, half of which were yellow and half purple. Search was more efficient when the color of the target was familiar and early eye movements more likely to be directed to targets carrying a familiar color than an unfamiliar color. Experiment 2 introduced cues which predicted the target color at 80% validity. Cue validity did not affect whether early fixations were to the target. Invalid cues, however, disrupted search efficiency for targets in an unfamiliar color whilst there was little cost to search efficiency for targets in their familiar color. These results generalized across items with different colors (Experiment 3). The data are consistent with early processes in selection being automatically modulated in a bottom-up manner to targets in their familiar color, even when expectancies are set for other colors. |
Matthew J. Abbott; Bernhard Angele; Y. Danbi Ahn; Keith Rayner Skipping syntactically illegal the previews: The role of predictability. Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 6, pp. 1703–1714, 2015. @article{Abbott2015a, Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions. |
Craig Hedge; Klaus Oberauer; Ute Leonards Selection in spatial working memory is independent of perceptual selective attention, but they interact in a shared spatial priority map Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 8, pp. 2653–2668, 2015. @article{Hedge2015, We examined the relationship between the attentional selection of perceptual information and of information in working memory (WM) through four experiments, using a spatial WM-updating task. Participants remembered the locations of two objects in a matrix and worked through a sequence of updating operations, each mentally shifting one dot to a new location according to an arrow cue. Repeatedly updating the same object in two successive steps is typically faster than switching to the other object; this object switch cost reflects the shifting of attention in WM. In Experiment 1, the arrows were presented in random peripheral locations, drawing perceptual attention away from the selected object in WM. This manipulation did not eliminate the object switch cost, indicating that the mechanisms of perceptual selection do not underlie selection in WM. Experiments 2a and 2b corroborated the independence of selection observed in Experiment 1, but showed a benefit to reaction times when the placement of the arrow cue was aligned with the locations of relevant objects in WM. Experiment 2c showed that the same benefit also occurs when participants are not able to mark an updating location through eye fixations. Together, these data can be accounted for by a framework in which perceptual selection and selection in WM are separate mechanisms that interact through a shared spatial priority map. |
Joanna E. Lewis; Mark B. Neider Fixation not required: Characterizing oculomotor attention capture for looming stimuli Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 7, pp. 2247–2259, 2015. @article{Lewis2015, A stimulus moving toward us, such as a ball being thrown in our direction or a vehicle braking suddenly in front of ours, often represents a stimulus that requires a rapid response. Using a visual search task in which target and distractor items were systematically associated with a looming object, we explored whether this sort of looming motion captures attention, the nature of such capture using eye movement measures (overt/covert), and the extent to which such capture effects are more closely tied to motion onset or the motion itself. We replicated previous findings indicating that looming motion induces response time benefits and costs during visual search (Lin, Franconeri, & Enns, Psychological Science, 19(7), 686–693, 2008). These differences in response times were independent of fixation, indicating that these capture effects did not necessitate overt attentional shifts to a looming object for search benefits or costs to occur. Interestingly, we found no differences in capture benefits and costs associated with differences in looming motion type. Combined, our results suggest that capture effects associated with looming motion are more likely subserved by covert attentional mechanisms rather than overt mechanisms, and attention capture for looming motion is likely related to motion itself rather than the onset of motion. |
Hélène Devillez; Nathalie Guyader; Anne Guérin-Dugué An eye fixation–related potentials analysis of the P300 potential for fixations onto a target object when exploring natural scenes Journal Article In: Journal of Vision, vol. 15, no. 13, pp. 20, 2015. @article{Devillez2015, The P300 event-related potential has been extensively studied in electroencephalography with classical paradigms that force observers to not move their eyes. This potential is classically used to infer whether a target or a task-relevant stimulus was presented. Few studies have examined this potential with more ecological paradigms in which observers were free to move their eyes. In this study, we examined with an ecological paradigm and an adapted methodology the P300 potential using a visual search task that involves eye movements to actively explore natural scenes and during which eye movements and electroencephalographic activity were coregistered. Averaging the electroencephalography signal time-locked to fixation onsets, a P300 potential was observed for fixations onto the target object but not for other fixations recorded for the same visual search or for fixations recorded during free viewing without any task. Our approach consists of using control experimental conditions with similar eye movements to ensure that the P300 potential was attributable to the fact that the observer gazed at the target rather than to other factors such as eye movement pattern (the size of the previous saccade) or the "overlap issue" between the potentials elicited by two successive fixations. We also proposed to model the time overlap issue of the potentials elicited by consecutive fixations with various durations. Our results show that the P300 potential can be studied in ecological situations without any constraint on the type of visual exploration, with some precautions in the interpretation of results due to the overlap issue. |
Charles C.-F. Or; Matthew F. Peterson; Miguel P. Eckstein Initial eye movements during face identification are optimal and similar across cultures Journal Article In: Journal of Vision, vol. 15, no. 13, pp. 1–25, 2015. @article{Or2015, Culture influences not only human high-level cognitive processes but also low-level perceptual operations. Some perceptual operations, such as initial eye movements to faces, are critical for extraction of information supporting evolutionarily important tasks such as face identification. The extent of cultural effects on these crucial perceptual processes is unknown. Here, we report that the first gaze location for face identification was similar across East Asian and Western Caucasian cultural groups: Both fixated a featureless point between the eyes and the nose, with smaller between-group than within-group differences and with a small horizontal difference across cultures (8% of the interocular distance). We also show that individuals of both cultural groups initially fixated at a slightly higher point on Asian faces than on Caucasian faces. The initial fixations were found to be both fundamental in acquiring the majority of information for face identification and optimal, as accuracy deteriorated when observers held their gaze away from their preferred fixations. An ideal observer that integrated facial information with the human visual system's varying spatial resolution across the visual field showed a similar information distribution across faces of both races and predicted initial human fixations. The model consistently replicated the small vertical difference between human fixations to Asian and Caucasian faces but did not predict the small horizontal leftward bias of Caucasian observers. Together, the results suggest that initial eye movements during face identification may be driven by brain mechanisms aimed at maximizing accuracy, and less influenced by culture. The findings increase our understanding of the interplay between the brain's aims to optimally accomplish basic perceptual functions and to respond to sociocultural influences. |
Zvi N. Roth; Ehud Zohary Position and identity information available in fMRI patterns of activity in human visual cortex Journal Article In: Journal of Neuroscience, vol. 35, no. 33, pp. 11559–11571, 2015. @article{Roth2015, Parietal cortex is often implicated in visual processing of actions. Action understanding is essentially abstract, specific to the type or goal of action, but greatly independent of variations in the perceived position of the action. If certain parietal regions are involved in action understanding, then we expect them to show these generalization and selectivity properties. However, additional functions of parietal cortex, such as self-action control, may impose other demands by requiring an accurate representation of the location of graspable objects. Therefore, the dimensions along which responses are modulated may indicate the functional role of specific parietal regions. Here, we studied the degree of position invariance and hand/object specificity during viewing of tool-grasping actions. To that end, we characterize the information available about location, hand, and tool identity in the patterns of fMRI activation in various cortical areas: early visual cortex, posterior intraparietal sulcus, anterior superior parietal lobule, and the ventral object-specific lateral occipital complex. Our results suggest a gradient within the human dorsal stream: along the posterior-anterior axis, position information is gradually lost, whereas hand and tool identity information is enhanced. This may reflect a gradual transformation of visual input from an initial retinotopic representation in early visual areas to an abstract, position-invariant representation of viewed action in anterior parietal cortex. |
Bram-Ernst Verhoef; Rufin Vogels; Peter Janssen; Leonardo Chelazzi Effects of microstimulation in the anterior intraparietal area during three-dimensional shape categorization Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0136543, 2015. @article{Verhoef2015, The anterior intraparietal area (AIP) of rhesus monkeys is part of the dorsal visual stream and contains neurons whose visual response properties are commensurate with a role in three-dimensional (3D) shape perception. Neuronal responses in AIP signal the depth structure of disparity-defined 3D shapes, reflect the choices of monkeys while they categorize 3D shapes, and mirror the behavioral variability across different stimulus conditions during 3D-shape categorization. However, direct evidence for a role of AIP in 3D-shape perception has been lacking. We trained rhesus monkeys to categorize disparity-defined 3D shapes and examined AIP's contribution to 3D-shape categorization by microstimulating in clusters of 3D-shape selective AIP neurons during task performance. We find that microstimulation effects on choices (monkey M1) and reaction times (monkeys M1 and M2) depend on the 3D-shape preference of the stimulated site. Moreover, electrical stimulation of the same cells, during either the 3D-shape-categorization task or a saccade task, could affect behavior differently. Interestingly, in one monkey we observed a strong correlation between the strength of choice-related AIP activity (choice probabilities) and the influence of microstimulation on 3D-shape-categorization behavior (choices and reaction time). These findings implicate AIP as part of the network responsible for 3D-shape perception. The results also show that the anterior intraparietal cortex contains cells with different tuning properties, i.e. 3D-shape- or saccade-related, that can be dynamically read out depending on the requirements of the task at hand. |
Barbara Nordhjem; Constanza I. Kurman Petrozzelli; Nicolás Gravel; Remco J. Renken; Frans W. Cornelissen Eyes on emergence: Fast detection yet slow recognition of emerging images Journal Article In: Journal of Vision, vol. 15, no. 9, pp. 1–16, 2015. @article{Nordhjem2015, Visual object recognition occurs at the intersection of visual perception and visual cognition. It typically occurs very fast and it has therefore been difficult to disentangle its constituent processes. Recognition time can be extended when using images with emergent properties, suggesting that they may help to examine how visual recognition unfolds over time. Until now, their use has been constrained by limited availability. We used a set of stimuli with emergent properties—akin to the famous Gestalt image of a Dalmatian—in combination with eye tracking to examine the processes underlying object recognition. To test whether cognitive processes influenced eye movement behavior during recognition, an unprimed and three primed groups were included. Recognition times were relatively long (median ~5 s for the unprimed group), confirming the object's emergent properties. Surprisingly, within the first 500 ms, the majority of fixations were already aimed at the object. Computational models of saliency could not explain these initial fixations. This suggests that observers relied on image statistics not captured by saliency models. For the primed groups, recognition times were reduced. However, threshold-free cluster enhancement-based analysis of the time courses indicated that viewing behavior did not differ between the groups, neither during the initial viewing nor around the moment of recognition. This implies that eye movements are mainly driven by perceptual processes and not affected by cognition. It further suggests that priming mainly boosts the observer's confidence in the decision reached. We conclude that emerging images can be a useful tool to dissociate the perceptual and cognitive contributions to visual object recognition. |
Hayley Crawford; Joanna Moss; Giles M. Anderson; Chris Oliver; Joseph P. McCleery Implicit discrimination of basic facial expressions of positive/negative emotion in Fragile X syndrome and autism spectrum disorder Journal Article In: American Journal on Intellectual and Developmental Disabilities, vol. 120, no. 4, pp. 328–345, 2015. @article{Crawford2015, Fragile X syndrome (FXS) and autism spectrum disorders (ASD) are characterized by impaired social functioning. We examined the spontaneous discrimination of happy and disgusted facial expressions, from neutral faces, in individuals with FXS (n = 13 |
Yaoguang Jiang; Dmitry Yampolsky; Gopathy Purushothaman; Vivien A. Casagrande Perceptual decision related activity in the lateral geniculate nucleus Journal Article In: Journal of Neurophysiology, vol. 114, no. 1, pp. 717–735, 2015. @article{Jiang2015, Fundamental to neuroscience is the understanding of how the language of neurons relates to behavior. In the lateral geniculate nucleus (LGN), cells show distinct properties such as selectivity for particular wavelengths, increments or decrements in contrast, or preference for fine detail versus rapid motion. No studies, however, have measured how LGN cells respond when an animal is challenged to make a perceptual decision using information within the receptive fields of those LGN cells. In this study we measured neural activity in the macaque LGN during a two alternative forced choice (2AFC) contrast detection task or during a passive fixation task, and found that a small proportion (13.5%) of single LGN parvocellular (P) and magnocellular (M) neurons matched the psychophysical performance of the monkey. The majority of LGN neurons measured in both tasks were not as sensitive as the monkey. The covariation between neural response and behavior (quantified as choice probability) was significantly above chance during active detection, even when there was no external stimulus. Interneuronal correlations and task-related gain modulations were negligible under the same condition. A bottom-up pooling model that compared sensory neural responses to make perceptual choices in the absence of interneuronal correlations could fully explain these results at the level of the LGN, supporting the hypothesis that the perceptual decision pool consists of multiple sensory neurons, and that response fluctuations in these neurons can influence perception. |
Yuka Matsuo; Masayuki Watanabe; Masako Taniike; Ikuko Mohri; Syoji Kobashi; Masaya Tachibana; Yasushi Kobayashi; Yuri Kitamura Gap effect abnormalities during a visually guided pro-saccade task in children with attention deficit hyperactivity disorder Journal Article In: PLoS ONE, vol. 10, no. 5, pp. e0125573, 2015. @article{Matsuo2015, Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that starts in early childhood and has a comprehensive impact on psychosocial activity and education as well as general health across the lifespan. Despite its prevalence, the current diagnostic criteria for ADHD are debated. Saccadic eye movements are easy to quantify and may be a quantitative biomarker for a wide variety of neurological and psychiatric disorders, including ADHD. The goal of this study was to examine whether children with ADHD exhibit abnormalities during a visually guided pro-saccadic eye-movement task and to clarify the neurophysiological mechanisms associated with their behavioral impairments. Thirty-seven children with ADHD (aged 5–11 years) and 88 typically developing (TD) children (aged 5–11 years) were asked to perform a simple saccadic eye-movement task in which step and gap conditions were randomly interleaved. We evaluated the gap effect, which is the difference in the reaction time between the two conditions. Children with ADHD had a significantly longer reaction time than TD children (p < 0.01) and the gap effect was markedly attenuated (p < 0.01). These results suggest that the measurement of saccadic eye movements may provide a novel method for evaluating the behavioral symptoms and clinical features of ADHD, and that the gap effect is a potential biomarker for the diagnosis of ADHD in early childhood. |
Ana Radonjić; Nicolas P. Cottaris; David H. Brainard Color constancy supports cross-illumination color selection Journal Article In: Journal of Vision, vol. 15, no. 6, pp. 1–19, 2015. @article{Radonjic2015, We rely on color to select objects as the targets of our actions (e.g., the freshest fish, the ripest fruit). To be useful for selection, color must provide accurate guidance about object identity across changes in illumination. Although the visual system partially stabilizes object color appearance across illumination changes, how such color constancy supports object selection is not understood. To study how constancy operates in real-life tasks, we developed a novel paradigm in which subjects selected which of two test objects presented under a test illumination appeared closer in color to a target object presented under a standard illumination. From subjects' choices, we inferred a selection-based match for the target via a variant of maximum likelihood difference scaling, and used it to quantify constancy. Selection-based constancy was good when measured using naturalistic stimuli, but was dramatically reduced when the stimuli were simplified, indicating that a naturalistic stimulus context is critical for good constancy. Overall, our results suggest that color supports accurate object selection across illumination changes when both stimuli and task match how color is used in real life. We compared our selection-based constancy results with data obtained using a classic asymmetric matching task and found that the adjustment-based matches predicted selection well for our stimuli and instructions, indicating that the appearance literature provides useful guidance for the emerging study of constancy in natural tasks. |
Paul Dassonville; Scott A. Reed The Two-Wrongs model explains perception-action dissociations for illusions driven by distortions of the egocentric reference frame Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 140, 2015. @article{Dassonville2015, Several studies have demonstrated a dissociation of the effects of illusion on perception and action, with perception generally reported to be susceptible to illusions, while actions are seemingly immune. These findings have been interpreted to support Milner and Goodale's Two Visual Systems model, which proposes the existence of separate visual processing streams for perception and action. However, an alternative interpretation suggests that this type of behavioral dissociation will occur for any illusion that is caused by a distortion of the observer's egocentric reference frame, without requiring the existence of separate perception and action systems that are differently affected by the illusion. In this scenario, movements aimed at illusory targets will be accurate if they are guided within the same distorted reference frame used for target encoding, since the error of motor guidance will cancel with the error of encoding (hence, for actions, two wrongs do make a right). We further test this Two-Wrongs model by examining two illusions for which the hypothesis makes very different predictions: the rod-and-frame illusion (which affects perception but not actions) and the simultaneous-tilt illusion (which affects perception and actions equally). We demonstrate that the rod-and-frame illusion is caused by a distortion of the observer's egocentric reference frame suitable for the cancellation of errors predicted by the Two-Wrongs model. 
In contrast, the simultaneous-tilt illusion is caused by local interactions between stimulus elements within an undistorted reference frame, precluding the cancellation of errors associated with the Two-Wrongs model such that the illusion is reflected in both perception and actions. These results provide evidence for a class of illusions that lead to dissociations of perception and action through distortions of the observer's spatial reference frame, rather than through the actions of functionally separate visual processing streams. |
Christopher A. Sanchez; Allison J. Jaeger If it's hard to read, it changes how long you do it: Reading time as an explanation for perceptual fluency effects on judgment Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 1, pp. 206–211, 2015. @article{Sanchez2015, Perceptual manipulations, such as changes in font type or figure-ground contrast, have been shown to increase judgments of difficulty or effort related to the presented material. Previous theory has suggested that this is the result of changes in online processing or perhaps the post-hoc influence of perceived difficulty recalled at the time of judgment. These two experiments examine which of these mechanisms (or both) produces the fluency effect. Results indicate that disfluency does in fact change in situ reading behavior, and this change significantly mediates judgments. Eye movement analyses corroborate this suggestion and observe a difference in how people read a disfluent presentation. These findings support the notion that readers are using perceptual cues in their reading experiences to change how they interact with the material, which in turn produces the observed biases. |
Sebastian Sandoval Similä; Robert D. McIntosh Look where you're going! Perceptual attention constrains the online guidance of action Journal Article In: Vision Research, vol. 110, pp. 179–189, 2015. @article{SandovalSimilae2015, Action guidance, like perceptual discrimination, requires selective attention. Perception is enhanced at the target of a reaching movement, but it is not known whether selecting an object for perception reciprocally prioritises it for action. Two theoretical frameworks, the premotor theory and the Visual Attention Model, predict that this reciprocal relation should hold. We tested the influence of perceptual attention on the online control of reaching. In Experiment 1, participants attended covertly to a flanker on one or other side of a fixated target, prior to reaching for that target, which occasionally jumped, after reach onset, to the attended or non-attended side. Participants corrected their reaches for almost all target jumps. In Experiment 2, we required covert monitoring of the flanker during reaching. This concurrent perceptual task globally reduced correction behaviour, indicating that perception and action share a common attentional resource. Corrections were especially unlikely toward the attended side. This is explained by assuming that perceptual attention primed an action toward the attended location and that the participant inhibited this primed action. The data thus imply that perceptual selection constrains online action guidance, as predicted by the premotor theory and the VAM. We further argue that the fact that participants can inhibit a location within the action system but simultaneously maintain its prioritisation for perceptual monitoring, is easier to reconcile with the VAM than with the premotor theory. |
Andreza Sartori; Victoria Yanulevskaya; Almila Akdag Salah; Jasper Uijlings; Elia Bruni; Nicu Sebe Affective analysis of abstract paintings using statistical analysis and art theory Journal Article In: ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 5, pp. 1–27, 2015. @article{Sartori2015, When artists express their feelings through the artworks they create, it is believed that the resulting works transform into objects with “emotions” capable of conveying the artists' mood to the audience. There is little to no dispute about this belief: Regardless of the artwork, genre, time, and origin of creation, people from different backgrounds are able to read the emotional messages. This holds true even for the most abstract paintings. Could this idea be applied to machines as well? Can machines learn what makes a work of art “emotional”? In this work, we employ a state-of-the-art recognition system to learn which statistical patterns are associated with positive and negative emotions on two different datasets that comprise professional and amateur abstract artworks. Moreover, we analyze and compare two different annotation methods in order to establish the ground truth of positive and negative emotions in abstract art. Additionally, we use computer vision techniques to quantify which parts of a painting evoke positive and negative emotions. We also demonstrate how the quantification of evidence for positive and negative emotions can be used to predict which parts of a painting people prefer to focus on. This method opens new opportunities of research on why a specific painting is perceived as emotional at global and local scales. |
David J. Schaeffer; Lingxi Chi; Cynthia E. Krafft; Qingyang Li; Nicolette F. Schwarz; Jennifer E. Mcdowell Individual differences in working memory moderate the relationship between prosaccade latency and antisaccade error rate Journal Article In: Psychophysiology, vol. 52, no. 4, pp. 605–608, 2015. @article{Schaeffer2015, Cognitive control is required for flexible responses in changing environments and can be assessed by measuring antisaccade error rate. Considerable variance in antisaccade error rate is observed in healthy participants, which motivated the current study to explore the cognitive factors affecting antisaccade performance. Relationships exist between prosaccade latency and antisaccade error rate, with faster prosaccade latencies linked to more antisaccade errors. Individual differences in working memory also impact saccadic performance. The current study tested the relationships among prosaccade latency, antisaccade error rate, and working memory in 153 healthy participants. Correlation and multiple regression analyses demonstrated that prosaccade latency predicted antisaccade error rate, and working memory moderated this relationship. These results may help elucidate individual differences in cognitive control among healthy individuals. |
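A moderation effect of the kind Schaeffer et al. report is conventionally tested by adding a predictor × moderator interaction term to a multiple regression. The sketch below illustrates that analysis on synthetic data; all variable names, effect sizes, and the simulated relationship are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 153  # same sample size as the study, data entirely synthetic

latency = rng.normal(0.0, 1.0, n)  # standardized prosaccade latency
wm = rng.normal(0.0, 1.0, n)       # standardized working-memory score
# Simulate a moderated relationship: faster latencies predict more errors,
# and the strength of that link depends on working memory.
errors = -0.5 * latency + 0.2 * wm + 0.3 * latency * wm + rng.normal(0.0, 0.5, n)

# Moderated regression: errors ~ intercept + latency + wm + latency*wm
X = np.column_stack([np.ones(n), latency, wm, latency * wm])
beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
intercept, b_latency, b_wm, b_interaction = beta
# A reliably non-zero interaction coefficient is the statistical
# signature of moderation.
print(round(b_latency, 2), round(b_interaction, 2))
```

The recovered interaction coefficient should land near the simulated 0.3, which is what "working memory moderated this relationship" means in regression terms.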
Annett Schirmer; Christy Reece; Claris Zhao; Erik Ng; Esther Wu; Shih-Cheng Yen Reach out to one and you reach out to many: Social touch affects third-party observers Journal Article In: British Journal of Psychology, vol. 106, no. 1, pp. 107–132, 2015. @article{Schirmer2015, Casual social touch influences emotional perceptions, attitudes, and behaviours of interaction partners. We asked whether these influences extend to third-party observers. To this end, we developed the Social Touch Picture Set comprising line drawings of dyadic interactions, half of which entailed publicly acceptable casual touch and half of which served as no-touch controls. In Experiment 1, participants provided basic image norms by rating how frequently they observed a displayed touch gesture in everyday life and how comfortable they were observing it. Results implied that some touch gestures were observed more frequently and with greater comfort than others (e.g., handshake vs. hug). All gestures, however, obtained rating scores suitable for inclusion in Experiments 2 and 3. In Experiment 2, participants rated perceived valence, arousal, and likeability of randomly presented touch and no-touch images without being explicitly informed about touch. Image characters seemed more positive, aroused, and likeable when they touched as compared to when they did not touch. Image characters seemed more negative and aroused, but were equally likeable, when they received touch as compared to when there was no physical contact. In Experiment 3, participants passively viewed touch and no-touch images while their eye movements were recorded. Differential gazing at touch as compared to no-touch images emerged within the first 500 ms following image exposure and was largely restricted to the characters' upper body. Gazing at the touching body parts (e.g., hands) was minimal and largely unaffected by touch, suggesting that touch processing occurred outside the focus of visual attention. 
Together, these findings establish touch as an important visual cue and provide novel insights into how this cue modulates socio-emotional processing in third-party observers. |
Lisette J. Schmidt; Artem V. Belopolsky; Jan Theeuwes Potential threat attracts attention and interferes with voluntary saccades Journal Article In: Emotion, vol. 15, no. 3, pp. 329–338, 2015. @article{Schmidt2015, Several studies have shown that threatening stimuli are prioritized by the visual system. In the present study we investigated whether a stimulus associated with a threat of electrical shock attracts attention and accordingly interferes with the execution of voluntary eye movements to other locations. In 2 experiments, we showed that when a fear-conditioned and a neutral stimulus were presented simultaneously, voluntary saccades were initiated faster toward fear-conditioned compared with neutral stimuli. Moreover, saccades often erroneously went to the location of threat even when a saccade to a different location was required. This implies an automatic shift of attention to a fear-conditioned stimulus that interferes with saccade execution. The same pattern of results was found for a neutral stimulus that was always presented together with the fear-conditioned stimulus and consequently itself became associated with threat. The current results indicate that threatening stimuli attract visual attention and subsequently bias saccade target selection in a reflexive fashion. |
Paul Roux; Christine Passerieux; Franck Ramus An eye-tracking investigation of intentional motion perception in patients with schizophrenia Journal Article In: Journal of Psychiatry and Neuroscience, vol. 40, no. 2, pp. 118–125, 2015. @article{Roux2015, BACKGROUND: Schizophrenia has been characterized by an impaired attribution of intentions in social interactions. However, it remains unclear to what extent poor performance may be due to low-level processes or to later, higher-level stages or to what extent the deficit reflects an over- (hypermentalization) or underattribution of intentions (hypomentalization). METHODS: We evaluated intentional motion perception using a chasing detection paradigm in individuals with schizophrenia or schizoaffective disorder and in healthy controls while eye movements were recorded. Smooth pursuit was measured as a control task. Eye-tracking was used to dissociate ocular from cognitive stages of processing. RESULTS: We included 27 patients with schizophrenia, 2 with schizoaffective disorder and 29 controls in our analysis. As a group, patients had lower sensitivity to the detection of chasing than controls, but showed no bias toward the chasing present response. Patients showed a slightly different visual exploration strategy, which affected their ocular sensitivity to chasing. They also showed a decreased cognitive sensitivity to chasing that was not explained by differences in smooth pursuit ability, in visual exploration strategy or in general cognitive abilities. LIMITATIONS: It is not clear whether the deficit in intentional motion detection demonstrated in this study might be explained by a general deficit in motion perception in individuals with schizophrenia or whether it is specific to the social domain. 
CONCLUSION: Participants with schizophrenia showed a hypomentalization deficit: they adopted suboptimal visual exploration strategies and had difficulties deciding whether a chase was present or not, even when their eye movements revealed that chasing information had been seen correctly. |
Annie Roy-Charland; Melanie Perron; Jessica Boulard; Justin Chamberland; Nichola Hoffman If I point, do they look?: The impact of attention-orientation strategies on text exploration during shared book reading Journal Article In: Reading and Writing, vol. 28, no. 9, pp. 1285–1305, 2015. @article{RoyCharland2015, The current study examined the effect of pointing to the words and using highlighted text by examining eye movements when children in preschool, Grade 1 and 2 were read storybooks of two levels of difficulty. For all children, pointing to and highlighting the text increased the amount of time spent on, and the number of fixations to, the printed text relative to when there was no intervention. Furthermore, with difficult text, an increased amount of time and number of fixations was observed when the text was pointed to than when it was highlighted. For preschoolers, even with the increased attention on the text from pointing to and highlighting the words, the fixations did not match the narration. First and second graders, with the difficult book, made more matching fixations both when the printed text was pointed to and highlighted than when no intervention was done. Additionally, more matching fixations were made when the printed text was highlighted than when pointed to. Future research is required to examine the effects of attention-orienting strategies on reading related outcomes. |
Annie Roy-Charland; Melanie Perron; Cheryl Young; Jessica Boulard; Justin A. Chamberland The confusion of fear and surprise: A developmental study of the perceptual-attentional limitation hypothesis using eye movements Journal Article In: The Journal of Genetic Psychology, vol. 176, no. 5, pp. 281–298, 2015. @article{RoyCharland2015a, The goal of the present study was to test the Perceptual-Attentional Limitation Hypothesis in children and adults by manipulating the distinctiveness between expressions and recording eye movements. Children 3-5 and 9-11 years old as well as adults were presented pairs of expressions and required to identify a target emotion. Children 3-5 years old were less accurate than those 9-11 years old and adults. All children viewed pictures longer than adults but did not spend more time attending to the relevant cues. For all participants, accuracy for the recognition of fear was lower than for surprise when the distinctive cue was in the brow only. They also took longer and spent more time in both the mouth and brow zones than when a cue was in the mouth or both areas. Adults and children 9-11 years old made more comparisons between the expressions when fear comprised a single distinctive cue in the brow than when the distinctive cue was in the mouth only or when both cues were present. Children 3-5 years old made more comparisons for brow only than both. The results of the present study extend the Perceptual-Attentional Limitation Hypothesis, showing the importance of both decoder and stimulus characteristics, and an interaction between them. |
Anthony J. Ryals; Jane X. Wang; Kelly L. Polnaszek; Joel L. Voss Hippocampal contribution to implicit configuration memory expressed via eye movements during scene exploration Journal Article In: Hippocampus, vol. 25, no. 9, pp. 1028–1041, 2015. @article{Ryals2015, Although hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relational memory for complex scenes using eye-movement tracking during functional magnetic resonance imaging (fMRI) scanning. Participants studied scenes and were later tested using scenes that resembled study scenes in their overall feature configuration but comprised different elements. These configurally similar scenes were used to limit explicit memory, and were intermixed with new scenes that did not resemble studied scenes. Scene configuration memory was expressed through eye movements reflecting exploration overlap (EO), which is the viewing of the same scene locations at both study and test. EO reliably discriminated similar study-test scene pairs from study-new scene pairs, was reliably greater for similarity-based recognition hits than for misses, and correlated with hippocampal fMRI activity. In contrast, subjects could not reliably discriminate similar from new scenes by overt judgments, although ratings of familiarity were slightly higher for similar than new scenes. Hippocampal fMRI correlates of this weak explicit memory were distinct from EO-related activity. These findings collectively suggest that EO was an implicit expression of scene configuration memory associated with hippocampal activity. Visual exploration can therefore reflect implicit hippocampal-related memory processing that can be observed in eye-movement behavior during naturalistic scene viewing. |
Rachel A. Ryskin; Aaron S. Benjamin; Jonathan Tullis; Sarah Brown-Schmidt Perspective-taking in comprehension, production, and memory: An individual differences approach Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 5, pp. 898–915, 2015. @article{Ryskin2015, The ability to take a different perspective is central to a tremendous variety of higher level cognitive skills. To communicate effectively, we must adopt the perspective of another person both while speaking and listening. To ensure the successful retrieval of critical information in the future, we must adopt the perspective of our own future self and construct cues that will survive the passage of time. Here we explore the cognitive underpinnings of perspective-taking across a set of tasks that involve communication and memory, with an eye toward evaluating the proposal that perspective-taking is domain-general (e.g., Wardlow, 2013). We measured participants' perspective-taking ability in a language production task, a language comprehension task, and a memory task in which people generated their own cues for the future. Surprisingly, there was little variance common to the 3 tasks, a result that suggests that perspective-taking is not domain-general. Performance in the language production task was predicted by a measure of working memory, whereas performance in the cue-generation memory task was predicted by a combination of working memory and long-term memory measures. These results indicate that perspective-taking relies on differing cognitive capacities in different situations. |
Donghyun Ryu; Bruce Abernethy; David L. Mann; Jamie M. Poolton The contributions of central and peripheral vision to expertise in basketball: How blur helps to provide a clearer picture Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 1, pp. 167–183, 2015. @article{Ryu2015, The main purpose of this study was to examine the relative roles of central and peripheral vision when performing a dynamic forced-choice task. We did so by using a gaze-contingent display with different levels of blur in an effort to (a) test the limit of visual resolution necessary for information pick-up in each of these sectors of the visual field and, as a result, to (b) develop a more natural means of gaze-contingent display using a blurred central or peripheral visual field. The expert advantage seen in usual whole field visual presentation persists despite surprisingly high levels of impairment to central or peripheral vision. Consistent with the well-established central/peripheral differences in sensitivity to spatial frequency, high levels of blur did not prevent better-than-chance performance by skilled players when peripheral information was blurred, but they did affect response accuracy when impairing central vision. Blur was found to always alter the pattern of eye movements before it decreased task performance. The evidence accumulated across the 4 experiments provides new insights into several key questions surrounding the role that different sectors of the visual field play in expertise in dynamic, time-constrained tasks. |
Golbarg T. Saber; Franco Pestilli; Clayton E. Curtis Saccade planning evokes topographically specific activity in the dorsal and ventral streams Journal Article In: Journal of Neuroscience, vol. 35, no. 1, pp. 245–252, 2015. @article{Saber2015, Saccade planning may invoke spatially-specific feedback signals that bias early visual activity in favor of top-down goals. We tested this hypothesis by measuring cortical activity at the early stages of the dorsal and ventral visual processing streams. Human subjects maintained saccade plans to (prosaccade) or away (antisaccade) from a spatial location over long memory-delays. Results show that cortical activity persists in early visual cortex at the retinotopic location of upcoming saccade goals. Topographically specific activity persists as early as V1, and activity increases along both dorsal (V3A/B, IPS0) and ventral (hV4, VO1) visual areas. Importantly, activity persists when saccade goals are available only via working memory and when visual targets and saccade goals are spatially disassociated. We conclude that top-down signals elicit retinotopically specific activity in visual cortex both in the dorsal and ventral streams. Such activity may underlie mechanisms that prioritize locations of task-relevant objects. |
Chihiro Saegusa; Janis Intoy; Shinsuke Shimojo Visual attractiveness is leaky: The asymmetrical relationship between face and hair Journal Article In: Frontiers in Psychology, vol. 6, pp. 377, 2015. @article{Saegusa2015, Predicting personality is crucial when communicating with people. It has been revealed that the perceived attractiveness or beauty of the face is a cue. As shown in the well-known "what is beautiful is good" stereotype, perceived attractiveness is often associated with desirable personality. Although such research on attractiveness used mainly the face isolated from other body parts, the face is not always seen in isolation in the real world. Rather, it is surrounded by one's hairstyle, and is perceived as a part of total presence. In human vision, perceptual organization/integration occurs mostly in a bottom-up, task-irrelevant fashion. This raises an intriguing possibility that a task-irrelevant stimulus that is perceptually integrated with a target may influence our affective evaluation. In such a case, there should be a mutual influence between attractiveness perception of the face and surrounding hair, since they are assumed to share strong and unique perceptual organization. In the current study, we examined the influence of a task-irrelevant stimulus on our attractiveness evaluation, using face and hair as stimuli. The results revealed asymmetrical influences in the evaluation of one while ignoring the other. When hair was task-irrelevant, it still affected attractiveness of the face, but only if the hair itself had never been evaluated by the same evaluator. On the other hand, the face affected the hair regardless of whether the face itself was evaluated before. This has intriguing implications for the asymmetry between face and hair, and perceptual integration between them in general. 
Together with data from a post hoc questionnaire, it is suggested that both implicit non-selective and explicit selective processes contribute to attractiveness evaluation. The findings provide an understanding of attractiveness perception in real-life situations, as well as a new paradigm to reveal unknown implicit aspects of information integration for emotional judgment. |
Carola Salvi; Emanuela Bricolo; Steven L. Franconeri; John Kounios; Mark Beeman Sudden insight is associated with shutting out visual inputs Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 6, pp. 1814–1819, 2015. @article{Salvi2015, Creative ideas seem often to appear when we close our eyes, stare at a blank wall, or gaze out of a window—all signs of shutting out distractions and turning attention inward. Prior research has demonstrated that attention-related brain areas are differently active when people solve problems with sudden insight (the Aha! phenomenon), relative to deliberate, analytic solving. We directly investigated the relationship between attention deployment and problem solving by recording eye movements and blinks, which are overt indicators of attention, as people solved short, visually presented problems. In the preparation period, before problems eventually solved by insight, participants blinked more frequently and longer, and made fewer fixations, than before problems eventually solved by analysis. Immediately prior to solutions, participants blinked longer and looked away from the problem more often when solving by insight than when solving analytically. These phenomena extend prior research with a direct demonstration of dynamic differences in attention as people solve problems with sudden insight versus analytically. |
Uzma Samadani; Sameer Farooq; Robert Ritlop; Floyd Warren; Marleen Reyes; Elizabeth Lamm; Anastasia Alex; Elena Nehrbass; Radek Kolecki; Michael Jureller; Julia Schneider; Agnes Chen; Chen Shi; Neil Mendhiratta; Jason H. Huang; Meng Qian; Roy Kwak; Artem Mikheev; Henry Rusinek; Ajax George; Robert Fergus; Douglas Kondziolka; Paul P. Huang; R. Theodore Smith Detection of third and sixth cranial nerve palsies with a novel method for eye tracking while watching a short film clip Journal Article In: Journal of Neurosurgery, vol. 122, pp. 707–720, 2015. @article{Samadani2015, OBJECT: Automated eye movement tracking may provide clues to nervous system function at many levels. Spatial calibration of the eye tracking device requires the subject to have relatively intact ocular motility that implies function of cranial nerves (CNs) III (oculomotor), IV (trochlear), and VI (abducent) and their associated nuclei, along with the multiple regions of the brain imparting cognition and volition. The authors have developed a technique for eye tracking that uses temporal rather than spatial calibration, enabling detection of impaired ability to move the pupil relative to normal (neurologically healthy) control volunteers. This work was performed to demonstrate that this technique may detect CN palsies related to brain compression and to provide insight into how the technique may be of value for evaluating neuropathological conditions associated with CN palsy, such as hydrocephalus or acute mass effect. METHODS: The authors recorded subjects' eye movements by using an EyeLink 1000 eye tracker sampling at 500 Hz over 200 seconds while the subject viewed a music video playing inside an aperture on a computer monitor. The aperture moved in a rectangular pattern over a fixed time period. 
This technique was used to assess ocular motility in 157 neurologically healthy control subjects and 12 patients with either clinical CN III or VI palsy confirmed by neuro-ophthalmological examination, or surgically treatable pathological conditions potentially impacting these nerves. The authors compared the ratio of vertical to horizontal eye movement (height/width defined as aspect ratio) in normal and test subjects. RESULTS: In 157 normal controls, the aspect ratio (height/width) for the left eye had a mean value ± SD of 1.0117 ± 0.0706. For the right eye, the aspect ratio had a mean of 1.0077 ± 0.0679 in these 157 subjects. There was no difference between sexes or ages. A patient with known CN VI palsy had a significantly increased aspect ratio (1.39), whereas 2 patients with known CN III palsy had significantly decreased ratios of 0.19 and 0.06, respectively. Three patients with surgically treatable pathological conditions impacting CN VI, such as infratentorial mass effect or hydrocephalus, had significantly increased ratios (1.84, 1.44, and 1.34, respectively) relative to normal controls, and 6 patients with supratentorial mass effect had significantly decreased ratios (0.27, 0.53, 0.62, 0.45, 0.49, and 0.41, respectively). These alterations in eye tracking all reverted to normal ranges after surgical treatment of underlying pathological conditions in these 9 neurosurgical cases. CONCLUSIONS: This proof of concept series of cases suggests that the use of eye tracking to detect CN palsy while the patient watches television or its equivalent represents a new capacity for this technology. It may provide a new tool for the assessment of multiple CNS functions that can potentially be useful in the assessment of awake patients with elevated intracranial pressure from hydrocephalus or trauma. |
Uzma Samadani; Meng Qian Li; Eugene Laska; Robert Ritlop; Robert Kolecki; Marleen Reyes; Lindsey Altomare; Je Yeong Sone; Aylin Adem; Paul P. Huang; Douglas Kondziolka; Stephen Wall; Spiros Frangos; Charles Marmar Sensitivity and specificity of an eye movement tracking-based biomarker for concussion Journal Article In: Concussion, vol. 1, no. 1, pp. 1–14, 2015. @article{Samadani2015a, Object: The purpose of the current study is to determine the sensitivity and specificity of an eye tracking method as a classifier for identifying concussion. Methods: Brain-injured and control subjects prospectively underwent both eye tracking and Sport Concussion Assessment Tool 3. The results of eye tracking biomarker-based classifier models were then validated against a dataset of individuals not used in building a model. The area under the curve (AUC) of receiver operating characteristics was examined. Results: An optimal classifier based on best subset had an AUC of 0.878, a cross-validated AUC of 0.852 in CT-negative subjects, and an AUC of 0.831 in a validation dataset. The optimal misclassification rate in an external dataset (n = 254) was 13%. Conclusion: If one defines concussion based on history, examination, radiographic and Sport Concussion Assessment Tool 3 criteria, it is possible to generate an eye tracking-based biomarker that enables detection of concussion with reasonably high sensitivity and specificity. |
Uzma Samadani; Robert Ritlop; Marleen Reyes; Elena Nehrbass; Meng Li; Elizabeth Lamm; Julia Schneider; David Shimunov; Maria Sava; Radek Kolecki; Paige Burris; Lindsey Altomare; Talha Mehmood; Theodore Smith; Jason H. Huang; Christopher McStay; S. Rob Todd; Meng Qian; Douglas Kondziolka; Stephen Wall; Paul P. Huang Eye tracking detects disconjugate eye movements associated with structural traumatic brain injury and concussion Journal Article In: Journal of Neurotrauma, vol. 32, no. 8, pp. 548–556, 2015. @article{Samadani2015b, Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. 
Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury. |
Alexandra Reichenbach; Jörn Diedrichsen Processing reafferent and exafferent visual information for action and perception Journal Article In: Journal of Vision, vol. 15, no. 8, pp. 1–12, 2015. @article{Reichenbach2015, A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously with the reaching task, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception. |
Regina M. Reinert; Stefan Huber; Hans-Christoph Nuerk; Korbinian Moeller Strategies in unbounded number line estimation? Evidence from eye-tracking Journal Article In: Cognitive Processing, vol. 16, no. 1, pp. 359–363, 2015. @article{Reinert2015, For bounded number line estimation, recent studies indicated influences of proportion-based strategies as documented by eye-tracking data. In the current study, we investigated solution strategies in bounded and unbounded number line estimation by directly comparing participants' estimation performance as well as their corresponding eye-fixation behaviour. For bounded number line estimation, increased numbers of fixations at and around reference points (i.e. start, middle and endpoint) confirmed the prominent use of proportion-based strategies. In contrast, in unbounded number line estimation, the number of fixations on the number line decreased continuously with increasing magnitude of the target number. Additionally, we observed that in bounded and unbounded number line estimation participants' first fixation on the number line was a valid predictor of the location of the target number. In sum, these data corroborate the idea that unbounded number line estimation is less influenced by proportion-based estimation strategies not directly related to numerical estimations. |
Thomas R. Reppert; Karolina M. Lempert; Paul W. Glimcher; Reza Shadmehr Modulation of saccade vigor during value-based decision making Journal Article In: Journal of Neuroscience, vol. 35, no. 46, pp. 15369–15378, 2015. @article{Reppert2015, During value-based decision-making, individuals consider the various options and select the one that provides the maximum subjective value. Although the brain integrates abstract information to compute and compare these values, the only behavioral outcome is often the decision itself. However, if the options are visual stimuli, during deliberation the brain moves the eyes from one stimulus to the other. Previous work suggests that saccade vigor, i.e., peak velocity as a function of amplitude, is greater if reward is associated with the visual stimulus. This raises the possibility that vigor during the free viewing of options may be influenced by the valuation of each option. Here, humans chose between a small, immediate monetary reward and a larger but delayed reward. As the deliberation began, vigor was similar for the saccades made to the two options but diverged 0.5 s before decision time, becoming greater for the preferred option. This difference in vigor increased as a function of the difference in the subjective values that the participant assigned to the delayed and immediate options. After the decision was made, participants continued to gaze at the options, but with reduced vigor, making it possible to infer timing of the decision from the sudden drop in vigor. Therefore, the subjective value that the brain assigned to a stimulus during decision-making affected the motor system via the vigor with which the eyes moved toward that stimulus. |
James D. Retell; Dustin Venini; Stefanie I. Becker Oculomotor capture by new and unannounced color singletons during visual search Journal Article In: Attention, Perception, and Psychophysics, vol. 77, pp. 1529–1543, 2015. @article{Retell2015, The surprise capture hypothesis states that a stimulus will capture attention to the extent that it is preattentively available and deviates from task-expectancies. Interestingly, it has been noted by Horstmann (Psychological Science, 13, 499–505, doi:10.1111/1467-9280.00488, 2002; Journal of Experimental Psychology: Human Perception and Performance, 31, 1039–1060, doi:10.1037/0096-1523.31.5.1039, 2005; Psychological Research, 70, 13–25, 2006) that the time course of capture by such classes of stimuli appears distinct from that of capture by expected stimuli. Specifically, attention shifts to an unexpected stimulus are delayed relative to an expected stimulus (delayed onset account). Across two experiments, we investigated this claim under conditions of unguided (Exp. 1) and guided (Exp. 2) search using eye-movements as the primary index of attentional selection. In both experiments, we found strong evidence of surprise capture for the first presentation of an unannounced color singleton. However, in both experiments the pattern of eye-movements was not consistent with a delayed onset account of attention capture. Rather, we observed costs associated with the unexpected stimulus only once the target had been selected. We propose an interference account of surprise capture to explain our data and argue that this account also can explain existing patterns of data in the literature. |
Michael R. Richards; Henry W. Fields; F. Michael Beck; Allen R. Firestone; Dirk B. Walther; Stephen F. Rosenstiel; James M. Sacksteder Contribution of malocclusion and female facial attractiveness to smile esthetics evaluated by eye tracking Journal Article In: American Journal of Orthodontics and Dentofacial Orthopedics, vol. 147, no. 4, pp. 472–482, 2015. @article{Richards2015, There is disagreement in the literature concerning the importance of the mouth in overall facial attractiveness. Eye tracking provides an objective method to evaluate what people see. The objective of this study was to determine whether dental and facial attractiveness alters viewers' visual attention in terms of which area of the face (eyes, nose, mouth, chin, ears, or other) is viewed first, viewed the greatest number of times, and viewed for the greatest total time (duration) using eye tracking. Methods Seventy-six viewers underwent 1 eye tracking session. Of these, 53 were white (49% female, 51% male). Their ages ranged from 18 to 29 years, with a mean of 19.8 years, and none were dental professionals. After being positioned and calibrated, they were shown 24 unique female composite images, each image shown twice for reliability. These images reflected a repaired unilateral cleft lip or 3 grades of dental attractiveness similar to those of grades 1 (near ideal), 7 (borderline treatment need), and 10 (definite treatment need) as assessed in the aesthetic component of the Index of Orthodontic Treatment Need (AC-IOTN). The images were then embedded in faces of 3 levels of attractiveness: attractive, average, and unattractive. During viewing, data were collected for the first location, frequency, and duration of each viewer's gaze. Results Observer reliability ranged from 0.58 to 0.92 (intraclass correlation coefficients) but was less than 0.07 (interrater) for the chin, which was eliminated from the study. 
Likewise, reliability for the area of first fixation was kappa less than 0.10 for both intrarater and interrater reliabilities; the area of first fixation was also removed from the data analysis. Repeated-measures analysis of variance showed a significant effect (P <0.001) for level of attractiveness by malocclusion by area of the face. For both number of fixations and duration of fixations, the eyes overwhelmingly were most salient, with the mouth receiving the second most visual attention. At times, the mouth and the eyes were statistically indistinguishable in viewers' fixation frequency and duration. As the dental attractiveness decreased, the visual attention increased on the mouth, approaching that of the eyes. AC-IOTN grade 10 gained the most attention, followed by both AC-IOTN grade 7 and the cleft. AC-IOTN grade 1 received the least amount of visual attention. Also, lower dental attractiveness (AC-IOTN 7 and AC-IOTN 10) received more visual attention as facial attractiveness increased. Conclusions Eye tracking indicates that dental attractiveness can alter the level of visual attention depending on the female models' facial attractiveness when viewed by laypersons. |
Gerulf Rieger; Brian M. Cash; Sarah M. Merrill; James Jones-Rounds; Sanjay Muralidharan Dharmavaram; Ritch C. Savin-Williams Sexual arousal: The correspondence of eyes and genitals Journal Article In: Biological Psychology, vol. 104, pp. 56–64, 2015. @article{Rieger2015, Men's, more than women's, sexual responses may include a coordination of several physiological indices in order to build their sexual arousal to relevant targets. Here, for the first time, genital arousal and pupil dilation to sexual stimuli were simultaneously assessed. These measures corresponded more strongly with each other, subjective sexual arousal, and self-reported sexual orientation in men than women. Bisexual arousal is more prevalent in women than men. We therefore predicted that if bisexual-identified men show bisexual arousal, the correspondence of their arousal indices would be more female-typical, thus weaker, than for other men. Homosexual women show more male-typical arousal than other women; hence, their correspondence of arousal indices should be stronger than for other women. Findings, albeit weak in effect, supported these predictions. Thus, if sex-specific patterns are reversed within one sex, they might affect more than one aspect of sexual arousal. Because pupillary responses reflected sexual orientation similar to genital responses, they offer a less invasive alternative for the measurement of sexual arousal. |
Ioannis Rigas; Oleg V. Komogortsev Eye movement-driven defense against iris print-attacks Journal Article In: Pattern Recognition Letters, vol. 68, no. 2, pp. 316–326, 2015. @article{Rigas2015, This paper proposes a methodology for the utilization of eye movement cues for the task of iris print-attack detection. We investigate the fundamental distortions arising in the eye movement signal during an iris print-attack, due to the structural and functional discrepancies between a paper-printed iris and a natural eye iris. The performed experiments involve the execution of practical print-attacks against an eye-tracking device, and the collection of the resulting eye movement signals. The developed methodology for the detection of print-attack signal distortions is evaluated on a large database collected from 200 subjects, which contains both the real ('live') eye movement signals and the print-attack ('spoof') eye movement signals. The suggested methodology provides a sufficiently high detection performance, with a maximum average classification rate (ACR) of 96.5% and a minimum equal error rate (EER) of 3.4%. Due to the hardware similarities between eye tracking and iris capturing systems, we hypothesize that the proposed methodology can be adopted into the existing iris recognition systems with minimal cost. To further support this hypothesis we experimentally investigate the robustness of our scheme by simulating conditions of reduced sampling resolution (temporal and spatial), and of limited duration of the eye movement signals. |
Hannah Rigler; Ashley Farris-Trimble; Lea Greiner; Jessica Walker; J. Bruce Tomblin; Bob McMurray The slow developmental time course of real-time spoken word recognition Journal Article In: Developmental Psychology, vol. 51, no. 12, pp. 1690–1703, 2015. @article{Rigler2015, This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than 16-year-olds; however, both age groups ultimately fixated targets to the same degree. This contrasts with a prior study of adolescents with language impairment (McMurray, Samelson, Lee, & Tomblin, 2010) that showed a different pattern of real-time processes. These findings suggest that the dynamics of word recognition are still developing even at these late ages, and developmental changes may derive from different sources than individual differences in relative language ability. |
Brian Riordan; Melody Dye; Michael N. Jones Grammatical number processing and anticipatory eye movements are not tightly coordinated in English spoken language comprehension Journal Article In: Frontiers in Psychology, vol. 6, pp. 590, 2015. @article{Riordan2015, Recent studies of eye movements in world-situated language comprehension have demonstrated that rapid processing of morphosyntactic information - e.g., grammatical gender and number marking - can produce anticipatory eye movements to referents in the visual scene. We investigated how type of morphosyntactic information and the goals of language users in comprehension affected eye movements, focusing on the processing of grammatical number morphology in English-speaking adults. Participants' eye movements were recorded as they listened to simple English declarative (There are the lions.) and interrogative (Where are the lions?) sentences. In Experiment 1, no differences were observed in speed to fixate target referents when grammatical number information was informative relative to when it was not. The same result was obtained in a speeded task (Experiment 2) and in a task using mixed sentence types (Experiment 3). We conclude that grammatical number processing in English and eye movements to potential referents are not tightly coordinated. These results suggest limits on the role of predictive eye movements in concurrent linguistic and scene processing. We discuss how these results can inform and constrain predictive approaches to language processing. |
Christian H. Poth; Arvid Herwig; Werner X. Schneider Breaking object correspondence across saccadic eye movements deteriorates object recognition Journal Article In: Frontiers in Systems Neuroscience, vol. 9, pp. 176, 2015. @article{Poth2015, Visual perception is based on information processing during periods of eye fixations that are interrupted by fast saccadic eye movements. The ability to sample and relate information on task-relevant objects across fixations implies that correspondence between presaccadic and postsaccadic objects is established. Postsaccadic object information usually updates and overwrites information on the corresponding presaccadic object. The presaccadic object representation is then lost. In contrast, the presaccadic object is conserved when object correspondence is broken. This helps transsaccadic memory but it may impose attentional costs on object recognition. Therefore, we investigated how breaking object correspondence across the saccade affects postsaccadic object recognition. In Experiment 1, object correspondence was broken by a brief postsaccadic blank screen. Observers made a saccade to a peripheral object which was displaced during the saccade. This object reappeared either immediately after the saccade or after the blank screen. Within the postsaccadic object, a letter was briefly presented (terminated by a mask). Observers reported displacement direction and letter identity in different blocks. Breaking object correspondence by blanking improved displacement identification but deteriorated postsaccadic letter recognition. In Experiment 2, object correspondence was broken by changing the object's contrast-polarity. There were no object displacements and observers only reported letter identity. Again, breaking object correspondence deteriorated postsaccadic letter recognition. These findings identify transsaccadic object correspondence as a key determinant of object recognition across the saccade. 
This is in line with the recent hypothesis that breaking object correspondence results in separate representations of presaccadic and postsaccadic objects which then compete for limited attentional processing resources (Schneider, 2013). Postsaccadic object recognition deteriorates because fewer resources are available for processing postsaccadic objects. |
Seema Gorur Prasad; Gouri Shanker Patil; Ramesh Kumar Mishra Effect of exogenous cues on covert spatial orienting in deaf and normal hearing individuals Journal Article In: PLoS ONE, vol. 10, no. 10, pp. e0141324, 2015. @article{Prasad2015, Deaf individuals have been known to process visual stimuli better at the periphery compared to the normal hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined if the visual perceptual processing advantage reported in the deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for the ocular responses than the normal hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of groups. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in deaf. |
Iya Khelm Price; Naoko Witzel; Jeffrey Witzel Orthographic and phonological form interference during silent reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 6, pp. 1628–1647, 2015. @article{Price2015, This study reports 2 eye-tracking experiments investigating form interference during sentence-level silent reading. The items involved reduced and unreduced relative clauses (RCs) with words that were orthographically and phonologically similar (injection-infection; O+P+, Experiment 1) as well as with words that were orthographically similar, but phonologically dissimilar (laughter-daughter; O+P-, Experiment 2). Both experiments revealed syntactic processing disruptions for reduced RCs. Processing difficulty was also observed at the form-related word in both experiments under first-pass and second-pass reading measures. These form-interference effects did not interact with structural processing difficulty under first-pass measures in either experiment. Under second-pass time, there were larger processing disruptions for reduced RCs in O+P+ sentences relative to their controls. This was not the case, however, for O+P- sentences. These results suggest 2 components to form-interference effects during silent reading: (a) an early, low-level component that is driven in large part by visual form overlap and (b) a component that relates to late stages of interpretation and that is associated more closely with phonological form overlap. |
Silvia Primativo; Lisa S. Arduino; Roberta Daini; Maria De Luca; Carlo Toneatto; Marialuisa Martelli Impaired oculo-motor behaviour affects both reading and scene perception in neglect patients Journal Article In: Neuropsychologia, vol. 70, pp. 90–106, 2015. @article{Primativo2015, Unilateral spatial neglect (USN) is a common neuropsychological disorder following a right-sided brain lesion. Although USN is mostly characterized by symptoms involving the left hemispace, other symptoms are not left lateralized. Recently, it was shown that patients with neglect dyslexia, a reading disturbance that affects about 40% of USN patients, manifest a non-lateralized impairment of eye movement behaviour in association with their reading deficit when they read aloud and perform non-verbal saccadic tasks (Primativo et al., 2013). In the present paper, we aimed to demonstrate that the eye movement impairment shown by some USN patients reflects a more general oculo-motor disorder that is not confined to orthographic material, the horizontal axis or constrained saccadic tasks. We conjectured that inaccurate oculo-motor behaviour in USN patients indicates the presence of a reading deficit. With this aim we evaluated 20 patients, i.e., 10 right-sided brain-damaged patients without neglect and 10 patients affected by USN. On the basis of the patients' eye movement patterns during a scene exploration task, we found that 4 out of the 10 USN patients presented an abnormal oculo-motor pattern. These same four patients (but not the others) also failed in performing 5 different saccadic tasks and produced neglect dyslexia reading errors in both single words and texts. First, we show that a large proportion of USN patients have inaccurate eye movement behaviour in non-reading tasks. Second, we demonstrate that this exploratory deficit is predictive of the reading impairment. 
Thus, we conclude that the eye movement deficit prevents reading and impairs the performance on many other perceptual tests, including scene exploration. The large percentage of patients with impaired eye-movement pattern suggests that particular attention should be paid to eye movement behaviour during the diagnostic phase in order to program the best rehabilitation strategy for each patient. |
Mario Prsa; Danilo Jimenez-Rezende; Olaf Blanke Inference of perceptual priors from path dynamics of passive self-motion Journal Article In: Journal of Neurophysiology, vol. 113, no. 5, pp. 1400–1413, 2015. @article{Prsa2015, The monitoring of one's own spatial orientation depends on the ability to estimate successive self-motion cues accurately. This process has come to be known as path integration. A feature of sequential cue estimation, in general, is that the history of previously experienced stimuli, or priors, biases perception. Here, we investigate how during angular path integration, the prior imparted by the displacement path dynamics affects the translation of vestibular sensations into perceptual estimates. Subjects received successive whole-body yaw rotations and were instructed to report their position within a virtual scene after each rotation. The overall movement trajectory either followed a parabolic path or was devoid of explicit dynamics. In the latter case, estimates were biased toward the average stimulus prior and were well captured by an optimal Bayesian estimator model fit to the data. However, the use of parabolic paths reduced perceptual uncertainty, and a decrease of the average size of bias and thus the weight of the average stimulus prior were observed over time. The produced estimates were, in fact, better accounted for by a model where a prediction of rotation magnitude is inferred from the underlying path dynamics on each trial. Therefore, when passively displaced, we seem to be able to build, over time, from sequential vestibular measurements an internal model of the vehicle's movement dynamics. Our findings suggest that in ecological conditions, vestibular afference can be internally predicted, even when self-motion is not actively generated by the observer, thereby augmenting both the accuracy and precision of displacement perception. |
M. Victoria Puig; Earl K. Miller Neural substrates of dopamine D2 receptor modulated executive functions in the monkey prefrontal cortex Journal Article In: Cerebral Cortex, vol. 25, no. 9, pp. 2980–2987, 2015. @article{Puig2015, Dopamine D2 receptors (D2R) play a major role in cognition, mood and motor movements. Their blockade by antipsychotic drugs reduces hallucinatory and delusional behaviors in schizophrenia, but often fails to alleviate affective and cognitive dysfunctions. The prefrontal cortex (PFC) expresses D2R and is altered in schizophrenia. We investigated how D2R modulate behavior and PFC function in monkeys. Two monkeys learned new and performed highly familiar visuomotor associations, where each cue was associated with a saccade to a right or left target. We recorded neural spikes and local field potentials from multiple electrodes while injecting the D2R antagonist eticlopride in the lateral PFC. Blocking prefrontal D2R impaired associative learning and cognitive flexibility, reduced motivation, but left the performance of familiar associations intact. Eticlopride reduced saccade-direction selectivity of prefrontal neurons, leading to a decrease in neural information about the associations, and an increase in alpha oscillations. These results, together with our recent study using a D1R antagonist, suggest that D1R and D2R in the primate lateral PFC cooperate to modulate several executive functions. Our findings help to gain insight into why antipsychotic drugs, with strong antagonistic actions on D2R, fail to ameliorate cognitive and emotional deficits in schizophrenia. |
Michael Puntiroli; Dirk Kerzel; Sabine Born Perceptual enhancement prior to intended and involuntary saccades Journal Article In: Journal of Vision, vol. 15, no. 4, pp. 1–20, 2015. @article{Puntiroli2015, Prior to an eye movement, attention is gradually shifted toward the point where the saccade will land. Our goal was to better understand the allocation of attention in an oculomotor capture paradigm for saccades that go straight to the eye movement target and for saccades that go to a distractor and are followed by corrective saccades to the target (i.e., involuntary saccades). We also sought to test facilitation at the future retinotopic location of target and nontarget objects, with the principal aim of verifying whether the remapping process accounts for the retinal displacement caused by involuntary saccades. Two experiments were run employing a dual-task design, primarily requiring participants to perform saccades toward a target while discriminating an asymmetric cross presented briefly before saccade onset. The results clearly show perceptual facilitation at the target location for goal-directed saccades and at the distractor location when oculomotor capture occurred. Facilitation was observed at a location relating to the remapping of a future saccade landing point, in sequences of oculomotor capture. In contrast, performance remained unaffected at the remapped location of a salient distracting object, which was not looked at. The findings are taken as evidence that presaccadic enhancement occurs prior to involuntary and voluntary saccades alike and that the remapping process also indiscriminatingly accounts for the retinal displacement caused by either. |
Haoyue Qian; Xiangping Gao; Zhiguo Wang Faces distort eye movement trajectories, but the distortion is not stronger for your own face Journal Article In: Experimental Brain Research, vol. 233, no. 7, pp. 2155–2166, 2015. @article{Qian2015, It is currently unclear whether a person's own face has greater capacity in absorbing his/her attention than faces of others. With two visual distractor tasks, the present study assessed the extent to which a person's own face attracts his/her attention, by measuring face distractor elicited distortion of saccade trajectories. Experiment 1 showed that upright faces induced stronger distortion of saccade trajectories than inverted ones. This face inversion effect, however, was not stronger for the participant's own face than for unfamiliar others' faces. By manipulating fixation stimulus offset and using a peripheral onset target, Experiment 2 further demonstrated that these observations were not contingent on saccade latency. Together, these findings suggest that a person's own face is not more salient or attention-absorbing than unfamiliar others' faces. |
Rishi Rajalingham; Kailyn Schmidt; James J. DiCarlo Comparison of object recognition behavior in human and monkey Journal Article In: Journal of Neuroscience, vol. 35, no. 35, pp. 12127–12136, 2015. @article{Rajalingham2015, Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize "pooled human" object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. |
Kate Rath-Wilson; Daniel Guitton Refuting the hypothesis that a unilateral human parietal lesion abolishes saccade corollary discharge Journal Article In: Brain, vol. 138, no. 12, pp. 3760–3775, 2015. @article{RathWilson2015, This paper questions the prominent role that the parietal lobe is thought to play in the processing of corollary discharges for saccadic eye movements. A corollary discharge copies the motor neurons' signal and sends it to brain areas involved in monitoring eye trajectories. The classic double-step saccade task has been used extensively to study these mechanisms: two targets (T1 and T2) are quickly (40-150 ms) flashed sequentially in the periphery. After the extinction of the fixation point, subjects are to make two saccades (S1 and S2), in the dark, to the remembered locations of the targets in the order they appeared. The success of S2 requires a corollary discharge encoding S1's vector. Patients with a parietal lobe lesion, particularly on the right, are impaired at generating an accurate S2 when S1 is directed contralesionally, but not ipsilesionally, thought to be due to an impaired contralesional corollary discharge. In contrast, we hypothesize that failure on the classic double-step task is due to visual processing and attentional deficits that commonly result from lesions of the parietal lobe and imperfect data analysis methods. Here, we studied parietal patients who fail in the classic double-step task when tested and their data analysed according to previously published methods. We then tested our patients on two modified versions of the double-step task, designed to mitigate deficits other than corollary discharge that may have confounded previous investigations. In our 'exogenous' task, T2 was presented prior to T1 and for longer (T2: 800-1200 ms, T1: 350 ms) than in the classic task. S1 went to T1 and S2 to T2, all in the dark.
All patients who completed sufficient trials had a corollary discharge for contralesional and ipsilesional S1s (5/5). In our 'endogenous' task, a single target was presented peripherally for 800-1200 ms. After extinction of target and fixation point, patients made first an 'endogenous' S1, of self-determined amplitude either to the left or right, before making S2 to the remembered location of the previously flashed target. To be successful, a corollary discharge of the endogenous S1, generated in the dark, was required in the calculation of S2's motor vector. Every parietal patient showed evidence of using corollary discharges for endogenous S1s in the ipsilesional and contralesional directions (6/6). Our results support the hypothesis, based on our previous studies of corollary discharge mechanisms in hemidecorticate patients, and electrophysiological studies by others in monkey, that corollary discharges for left and right saccades are available to each cortical hemisphere. |
Kate Rath-Wilson; Daniel Guitton Oculomotor control after hemidecortication: A single hemisphere encodes corollary discharges for bilateral saccades Journal Article In: Cortex, vol. 63, pp. 232–249, 2015. @article{RathWilson2015a, Patients who have had a cerebral hemisphere surgically removed as adults can generate accurate leftward and rightward saccadic eye movements, a task classically thought to require two hemispheres each controlling contralateral saccades. Here, we asked whether one hemisphere can generate sequences of saccades, the success of which requires the use of corollary discharges. Using a double-step saccade paradigm, we tested two hemidecorticate subjects who, by definition, are contralesionally hemianopic. In experiment 1, two targets, T1 and T2, were flashed in their seeing hemifield and subjects had to look in the dark to T1, then T2. In experiment 2, only one target was flashed; before looking at it, the subject had first to saccade voluntarily elsewhere. Both subjects were able to complete the tasks, independent of first and second saccade direction and whether the saccades were voluntarily or visually triggered. Both subjects displayed a strategy, typical in hemianopia, of making multiple-step saccades and placing, at overall movement-end, the recalled locations of T1 and T2 on off-foveal locations in their seeing hemifield, in a retinal area typically spanning a 5-15° window, depending on the subject, trial type and target eccentricity. In summary, a single hemisphere monitored the amplitude and direction of the first multiple-step saccade sequence bilaterally, and combined this information with the recalled initial retinotopic location of T2 (no longer visible) to generate a correct target-directed second saccade sequence in the dark. 
Unexpectedly, our hemidecorticate subjects performed better on the double-step task than subjects with isolated unilateral parietal lesions, reported in the literature to have marked deficiencies in monitoring contralesional saccadic eye movements. Thus, plasticity-dependent mechanisms that lead to recovery of function after hemidecortication are different from those deployed after smaller lesions. This implies a reconsideration of the classical links between behavioural deficits and discrete cortical lesions. |
Anne K. Rau; Kristina Moll; Margaret J. Snowling; Karin Landerl Effects of orthographic consistency on eye movement behavior: German and English children and adults process the same words differently Journal Article In: Journal of Experimental Child Psychology, vol. 130, pp. 92–105, 2015. @article{Rau2015, The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading, possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. |
Supriya Ray; Stephen J. Heinen A mechanism for decision rule discrimination by supplementary eye field neurons Journal Article In: Experimental Brain Research, vol. 233, no. 2, pp. 459–476, 2015. @article{Ray2015, A decision to select an action from alternatives is often guided by rules that flexibly map sensory inputs to motor outputs when certain conditions are satisfied. However, the neural mechanisms underlying rule-based decision making remain poorly understood. Two complementary types of neurons in the supplementary eye field (SEF) of macaques have been identified that modulate activity differentially to interpret rules in an ocular go–nogo task, which stipulates that the animal either visually pursue a moving object if it intersects a visible zone ('go'), or maintain fixation if it does not ('nogo'). These neurons discriminate between go and nogo rule-states by increasing activity to signal their preferred (agonist) rule-state and decreasing activity to signal their non-preferred (antagonist) rule-state. In the current study, we found that SEF neurons decrease activity in anticipation of the antagonist rule-state, and do so more rapidly when the rule-state is easier to predict. This rapid decrease in activity could underlie a process of elimination in which trajectories that do not invoke the preferred rule-state receive no further computational resources. Furthermore, discrimination between difficult and easy trials in the antagonist rule-state occurs prior to when discrimination within the agonist rule-state occurs. A winner-take-all-like model that incorporates a pair of mutually inhibited integrators to accumulate evidence in favor of either the decision to pursue or the decision to continue fixation accounts for the observed neural phenomena. |
Chiara Reali; Yulia Esaulova; Anton Öttl; Lisa Stockhausen Role descriptions induce gender mismatch effects in eye movements during reading Journal Article In: Frontiers in Psychology, vol. 6, pp. 1607, 2015. @article{Reali2015, The present eye-tracking study investigates the effect of gender typicality on the resolution of anaphoric personal pronouns in English. Participants read descriptions of a person performing a typically male, typically female or gender-neutral occupational activity. The description was followed by an anaphoric reference (he or she) which revealed the referent's gender. The first experiment presented roles which were highly typical for men (e.g., blacksmith) or for women (e.g., beautician), the second experiment presented role descriptions with a moderate degree of gender typicality (e.g., psychologist, lawyer). Results revealed a gender mismatch effect in early and late measures in the first experiment and in early stages in the second experiment. Moreover, eye-movement data for highly typical roles correlated with explicit typicality ratings. The results are discussed from a cross-linguistic perspective, comparing natural gender languages and grammatical gender languages. An interpretation of the cognitive representation of typicality beliefs is proposed. |
Chiara Reali; Yulia Esaulova; Lisa Stockhausen Isolating stereotypical gender in a grammatical gender language: Evidence from eye movements Journal Article In: Applied Psycholinguistics, vol. 36, no. 4, pp. 977–1006, 2015. @article{Reali2015a, The present study investigates the effects of stereotypical gender during anaphor resolution in German. The study aims at isolating the effects of gender-stereotypical cues from the effects of grammatical gender. Experiment 1 employs descriptions of typically male, female, and neutral occupations that contain no grammatical cue to the referent gender, followed by a masculine or feminine role noun, in a reaction time priming paradigm. Experiment 2 uses eye-tracking methodology to examine how the gender typicality of these descriptions affects the resolution of a matching or mismatching anaphoric pronoun. Results show a mismatch effect manifest at very early stages of processing. Both experiments also reveal asymmetries in the processing of the two genders suggesting that the representation of female rather than male referents is more flexible in counterstereotypical contexts. No systematic relation is found between eye movements and individual gender attitude measures, whereas a reliable correlation is found with gender typicality ratings. |
Eric A. Reavis; Sebastian M. Frank; Peter U. Tse Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions Journal Article In: NeuroImage, vol. 110, pp. 171–181, 2015. @article{Reavis2015, Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. 
However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. |
Tobias Schoeberl; Isabella Fuchs; Jan Theeuwes; Ulrich Ansorge Stimulus-driven attentional capture by subliminal onset cues Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 3, pp. 737–748, 2015. @article{Schoeberl2015, In two experiments, we tested whether subliminal abrupt onset cues capture attention in a stimulus-driven way. An onset cue was presented 16 ms prior to the stimulus display that consisted of clearly visible color targets. The onset cue was presented either at the same side as the target (the valid cue condition) or on the opposite side of the target (the invalid cue condition). Because the onset cue was presented 16 ms before other placeholders were presented, the cue was subliminal to the participant. To ensure that this subliminal cue captured attention in a stimulus-driven way, the cue's features did not match the top-down attentional control settings of the participants: (1) The color of the cue was always different than the color of the non-singleton targets ensuring that a top-down set for a specific color or for a singleton would not match the cue, and (2) colored targets and distractors had the same objective luminance (measured with a colorimeter) and subjective lightness (measured by flicker photometry), preventing a match between the top-down set for target and cue contrast. Even though a match between the cues and top-down settings was prevented, in both experiments, the cues captured attention, with faster response times in valid than invalid cue conditions (Experiments 1 and 2) and faster response times in valid than neutral conditions (Experiment 2). The results support the conclusion that subliminal cues capture attention in a stimulus-driven way. |
Chris Scholes; Paul V. McGraw; Marcus Nyström; Neil W. Roach Fixational eye movements predict visual sensitivity Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 282, pp. 1–10, 2015. @article{Scholes2015, During steady fixation, observers make small fixational saccades at a rate of around 1–2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate—an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals' psychophysical judgements. Classification accuracy closely matched psychophysical performance, and predicted individuals' threshold estimates with less bias and overall error than those obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts) and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement. |
Daniel E. Schoth; H. J. Godwin; Simon P. Liversedge; Christina Liossi Eye movements during visual search for emotional faces in individuals with chronic headache Journal Article In: European Journal of Pain, vol. 19, no. 5, pp. 722–732, 2015. @article{Schoth2015, BACKGROUND: Attentional biases for pain-related information have been frequently reported in individuals with chronic pain. Recording of participants' eye movements provides a continuous measure of attention, although to date this methodology has received little use in research exploring attentional biases in chronic pain. The aim of the current investigation was to explore the specificity of attentional orienting bias using a novel visual search task while recording participant eye movement behaviours. This also allowed for the investigation of whether attentional biases for pain-related information exist in the presence of multiple stimuli competing for attention. METHODS: Twenty-three participants with chronic headache and 24 pain-free, healthy control participants were engaged in a visual search task where pain, angry, happy and neutral faces were used as both target and distractor stimuli. While completing this task, participants' eye movements were recorded. RESULTS: Supporting the adopted hypothesis, participants with chronic headache, relative to healthy controls, demonstrated a significantly higher proportion of initial fixations to target pain expressions when the pain expressions were presented in displays containing neutral-distractor faces. No significant differences were found between groups in the time taken to fixate target pain expressions (localization time). CONCLUSIONS: Individuals with chronic headache show facilitated initial orienting towards pain expressions specifically when used as targets in a visual search task. 
This study adds to a growing body of research supporting the presence of pain-related attentional biases in chronic pain as assessed via different experimental paradigms, and shows biases to exist when multiple stimuli competing for attention are presented simultaneously. |
Elizabeth R. Schotter; Michelle Lee; Michael Reiderman; Keith Rayner The effect of contextual constraint on parafoveal processing in reading Journal Article In: Journal of Memory and Language, vol. 83, pp. 118–139, 2015. @article{Schotter2015, Semantic preview benefit in reading is an elusive and controversial effect because empirical studies do not always (but sometimes) find evidence for it. Its presence seems to depend on (at least) the language being read, visual properties of the text (e.g., initial letter capitalization), the type of relationship between preview and target, and as shown here, semantic constraint generated by the prior sentence context. Schotter (2013) reported semantic preview benefit for synonyms, but not semantic associates when the preview/target was embedded in a neutral sentence context. In Experiment 1, we embedded those same previews/targets into constrained sentence contexts and in Experiment 2 we replicated the effects reported by Schotter (2013; in neutral sentence contexts) and Experiment 1 (in constrained contexts) in a within-subjects design. In both experiments, we found an early (i.e., first-pass) apparent preview benefit for semantically associated previews in constrained contexts that went away in late measures (e.g., total time). These data suggest that sentence constraint (at least as manipulated in the current study) does not operate by making a single word form expected, but rather generates expectations about what kinds of words are likely to appear. Furthermore, these data are compatible with the assumption of the E-Z Reader model that early oculomotor decisions reflect "hedged bets" that a word will be identifiable and, when wrong, lead the system to identify the wrong word, triggering regressions. |
Volkhard Schroth; Roland Joos; Wolfgang Jaschinski Effects of prism eyeglasses on objective and subjective fixation disparity Journal Article In: PLoS ONE, vol. 10, no. 10, pp. e0138871, 2015. @article{Schroth2015, In optometry of binocular vision, the question may arise whether prisms should be included in eyeglasses to compensate for an oculomotor and/or sensory imbalance between the two eyes. The corresponding measures of objective and subjective fixation disparity may be reduced by the prisms, or the adaptability of the binocular vergence system may diminish effects of the prisms over time. This study investigates effects of wearing prisms constantly for about 5 weeks in daily life. Two groups of 12 participants received eyeglasses with prisms having either a base-in direction or a base-out direction with an amount up to 8 prism diopters. Prisms were prescribed based on clinical fixation disparity test plates at 6 m. Two dependent variables were used: (1) subjective fixation disparity was indicated by a perceived offset of dichoptic nonius lines that were superimposed on the fusion stimuli and (2) objective fixation disparity was measured with a video-based eye tracker relative to monocular calibration. Stimuli were presented at 6 m and included either central or more peripheral fusion stimuli. Repeated measurements were made without the prisms and with the prisms after about 5 weeks of wearing these prisms. Objective and subjective fixation disparity were correlated, but the type of fusion stimulus and the direction of the required prism may play a role. The prisms did not reduce the fixation disparity to zero, but induced significant changes in fixation disparity with large effect sizes. Participants receiving base-out prisms showed hypothesized effects, which were concurrent in both types of fixation disparity.
In participants receiving base-in prisms, the individual effects of subjective and objective effects were negatively correlated: the larger the subjective (sensory) effect, the smaller the objective (motor) effect. This response pattern was related to the vergence adaptability, i.e. the individual fusional vergence reserves. |
Sarah Schuster; Stefan Hawelka; Fabio Richlan; Philipp Ludersdorfer; Florian Hutzler Eyes on words: A fixation-related fMRI study of the left occipito-temporal cortex during self-paced silent reading of words and pseudowords Journal Article In: Scientific Reports, vol. 5, pp. 12686, 2015. @article{Schuster2015, The predominant finding of studies assessing the response of the left ventral occipito-temporal cortex (vOT) to familiar words and to unfamiliar, but pronounceable letter strings (pseudowords) is higher activation for pseudowords. One explanation for this finding is that readers automatically generate predictions about a letter string's identity – pseudowords mismatch these predictions and the higher vOT activation is interpreted as reflecting the resultant prediction errors. The majority of studies, however, administered tasks which imposed demands above and beyond the intrinsic requirements of visual word recognition. The present study assessed the response of the left vOT to words and pseudowords by using the onset of the first fixation on a stimulus as the time point for modeling the BOLD signal (fixation-related fMRI). This method allowed us to assess the neural correlates of self-paced silent reading with minimal task demands and natural exposure durations. In contrast to the predominantly reported higher vOT activation for pseudowords, we found higher activation for words. This finding is at odds with the expectation of higher vOT activation for pseudowords due to automatically generated predictions and the accompanying elevation of prediction errors. Our finding conforms to an alternative explanation which considers such top-down processing to be non-automatic and task-dependent. |
Alexander C. Schütz; Felix Lossin; Karl R. Gegenfurtner Dynamic integration of information about salience and value for smooth pursuit eye movements Journal Article In: Vision Research, vol. 113, pp. 169–178, 2015. @article{Schuetz2015a, Eye movement behavior can be determined by bottom-up factors like visual salience and by top-down factors like expected value. These different types of signals have to be combined for the control of eye movements. In this study we investigated how smooth pursuit eye movements integrate salience and value information. Observers were asked to track a random-dot kinematogram containing two coherent motion directions. To manipulate salience, the coherence or the density of one of the motion signals was varied. To manipulate value, observers won or lost money in a separate experiment if they were tracking one or the other motion direction. Our results show that pursuit direction was initially determined only by salience. 300-400 ms after target motion onset, pursuit steered towards the rewarded direction and the salience effects disappeared. The time course of this effect depended crucially on the difficulty of segmenting the two signal directions. These results indicate that salience determines early pursuit responses in the same way as saccades with short latencies. Value information is processed more slowly and dominates pursuit after several hundred milliseconds. |
Alexander C. Schütz; David Souto Perceptual task induces saccadic adaptation by target selection Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 566, 2015. @article{Schuetz2015, Adaptation of saccades can be induced by different error signals, such as retinal position errors, prediction errors, or reinforcement learning. Recently, we showed that a shift in the spatial goal of a perceptual task can induce saccadic adaptation, in the absence of a bottom-up position error. Here, we investigated whether this top-down effect is mediated by the visibility of the task-relevant object, by reinforcement due to the feedback about the perceptual judgment or by a target selection mechanism. Participants were asked to discriminate visual stimuli arranged in a vertical compound. To induce adaptation, the discrimination target was presented at eccentric locations in the compound. In the first experiment, we compared adaptation with an easy and difficult discrimination. In the second experiment, we compared adaptation when feedback about the perceptual task was valid and when feedback was provided but was unrelated to performance. In the third experiment, we compared adaptation with instructions to fixate one of the elements in the compound (target selection) to the perceptual task condition (target selection and discrimination). To control for a bottom-up stimulus effect, we ran a fourth experiment in which the only instruction was to look at the compound. The saccade amplitude data were fitted by a two-state model distinguishing between an immediate and a gradual error correction process. We replicated our finding that a perceptual task can drive adaptation of saccades. Adaptation showed no effect of feedback reliability, nor an effect of the perceptual task beyond target selection. Adaptation was induced by a top-down signal since it was absent when there was no target selection instruction and no perceptual task.
The immediate error correction was larger for the difficult than for the easy condition, suggesting that task difficulty affects mainly voluntary saccade targeting. In addition, the repetition of experiments one week later increased the magnitude of the gradual error correction. The results dissociate two distinct components of adaptation: an immediate and a gradual error correction. We conclude that perceptual-task induced adaptation is most likely due to top-down target selection within a larger object. |
Immo Schütz; Denise Y. P. Henriques; Katja Fiehler No effect of delay on the spatial representation of serial reach targets Journal Article In: Experimental Brain Research, vol. 233, no. 4, pp. 1225–1235, 2015. @article{Schuetz2015b, When reaching for remembered target locations, it has been argued that the brain primarily relies on egocentric metrics and especially target position relative to gaze when reaches are immediate, but that the visuo-motor system relies more strongly on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings which showed a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5 or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination with a stronger reliance on allocentric information regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching. |
Hillary Schwarb; Patrick D. Watson; Kelsey Campbell; Christopher L. Shander; Jim M. Monti; Gillian E. Cooke; Jane X. Wang; Arthur F. Kramer; Neal J. Cohen Competition and cooperation among relational memory representations Journal Article In: PLoS ONE, vol. 10, no. 11, pp. e0143832, 2015. @article{Schwarb2015, Mnemonic processing engages multiple systems that cooperate and compete to support task performance. Exploring these systems' interaction requires memory tasks that produce rich data with multiple patterns of performance sensitive to different processing sub-components. Here we present a novel context-dependent relational memory paradigm designed to engage multiple learning and memory systems. In this task, participants learned unique face-room associations in two distinct contexts (i.e., different colored buildings). Faces occupied rooms as determined by an implicit gender-by-side rule structure (e.g., male faces on the left and female faces on the right) and all faces were seen in both contexts. In two experiments, we use behavioral and eye-tracking measures to investigate interactions among different memory representations in both younger and older adult populations; furthermore we link these representations to volumetric variations in hippocampus and ventromedial PFC among older adults. Overall, performance was very accurate. Successful face placement into a studied room systematically varied with hippocampal volume. Selecting the studied room in the wrong context was the most typical error. The proportion of these errors to correct responses positively correlated with ventromedial prefrontal volume. This novel task provides a powerful tool for investigating both the unique and interacting contributions of these systems in support of relational memory. |
Arryn Robbins; Michael C. Hout Categorical target templates: Typical category members are found and identified quickly during word-cued search Journal Article In: Visual Cognition, vol. 23, no. 7, pp. 817–821, 2015. @article{Robbins2015, What information do people use to guide search when they lack precise details about the appearance of their target? In this study, we employed categorical (word-cued) search and eye tracking to examine how category typicality influences search performance. We found that typical category members were fixated and identified more quickly than atypical category members. This finding held when the participant was cued at the superordinate level (finding “clothing” among non-clothing items) or the basic level (finding a “shirt” among other clothing items). This suggests that categorical target templates may be constructed by piecing together features from the most typical category member(s). |
Joanne S. Robertson; Jason D. Forte; Michael E. R. Nicholls Deviating to the right: Using eyetracking to study the role of attention in navigation asymmetries Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 3, pp. 830–843, 2015. @article{Robertson2015, The ability to navigate accurately through the environment and avoid obstacles is essential for effective interactions with the environment. It is therefore surprising that systematic rightward errors are observed when neurologically intact participants navigate through doorways, most likely due to the operation of biases in spatial attention. These rightward errors may arise due to the operation of an extinction-like process, whereby participants overattend to the left doorpost and collide with the right one. Alternatively, rightward biases might reflect a bisection bias, such that the extrapersonal nature of the aperture causes participants to misbisect the aperture slightly to the right of true center. Because eye movements and spatial attention are closely related, in this study we used eyetracking to test the extinction and bisection models in a remote wheelchair navigation task. University students (n = 16) made rightward errors when navigating the wheelchair through a doorway, and fixated more frequently toward the right side of the aperture throughout the trial. These results are inconsistent with an extinction-based theory of navigation asymmetry, which predicts a leftward bias in eye position due to participants overattending to the left side of the doorway. Instead, the observed rightward bias in eye movements strongly supports a bisection-based theory of navigation asymmetry, whereby participants mentally "mark" the midpoint of a doorway toward the right and then head toward that point, resulting in rightward deviations.
The rightward nature of participants' navigation errors and eye positions is consistent with the existence of a rightward attentional bias for extrapersonal stimuli. |
Joost Rommers; Antje S. Meyer; Falk Huettig Verbal and nonverbal predictors of language-mediated anticipatory eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 3, pp. 720–730, 2015. @article{Rommers2015, During language comprehension, listeners often anticipate upcoming information. This can draw listeners' overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon") while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements were observed to targets and to shape competitors. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movements. The findings are consistent with multiple-mechanism accounts of predictive language processing. |
Eefje W. M. Rondeel; Henk Steenbergen; Rob W. Holland; Ad Knippenberg A closer look at cognitive control: differences in resource allocation during updating, inhibition and switching as revealed by pupillometry Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 494, 2015. @article{Rondeel2015, The present study investigated resource allocation, as measured by pupil dilation, in tasks measuring updating (2-Back task), inhibition (Stroop task) and switching (Number Switch task). Because each cognitive control component has unique characteristics, differences in patterns of resource allocation were expected. Pupil and behavioral data from 35 participants were analyzed. In the 2-Back task (requiring correct matching of the current stimulus identity at trial p with the stimulus two trials back, p - 2) we found that better performance (a low total of errors made in the task) was positively correlated with mean pupil dilation during correct responses to targets. In the Stroop task, pupil dilation on incongruent trials was higher than that on congruent trials. Incongruent vs. congruent trial pupil dilation differences were positively related to reaction time differences between incongruent and congruent trials. Furthermore, on congruent Stroop trials, pupil dilation was negatively related to reaction times, presumably because more effort allocation paid off in terms of faster responses. In addition, pupil dilation on correctly-responded-to congruent trials predicted a weaker Stroop interference effect in terms of errors, probably because pupil dilation on congruent trials was diagnostic of task motivation, resulting in better performance. In the Number Switch task we found higher pupil dilation on switch as compared to non-switch trials. On the Number Switch task, pupil dilation was not related to performance. We also explored error-related pupil dilation in all tasks.
The results provide new insights into the diversity of the cognitive control components in terms of resource allocation as a function of individual differences, task difficulty and error processing. |
Lara Rösler; Martin Rolfs; Stefan Van der Stigchel; Sebastiaan F. W. Neggers; Wiepke Cahn; René S. Kahn; Katharine N. Thakkar Failure to use corollary discharge to remap visual target locations is associated with psychotic symptom severity in schizophrenia Journal Article In: Journal of Neurophysiology, vol. 114, no. 2, pp. 1129–1136, 2015. @article{Roesler2015, Corollary discharge (CD) refers to "copies" of motor signals sent to sensory areas, allowing prediction of future sensory states. CD signals are a putative mechanism supporting the distinction between self-generated and externally generated sensations. Accordingly, many authors have suggested that disturbed CD engenders psychotic symptoms of schizophrenia, which are characterized by agency distortions. CD also supports perceived visual stability across saccadic eye movements and is used to predict the postsaccadic retinal coordinates of visual stimuli, a process called remapping. We tested whether schizophrenia patients (SZP) show remapping disturbances as evidenced by systematic transsaccadic mislocalizations of visual targets. SZP and healthy controls (HC) performed a task in which a saccadic target disappeared upon saccade initiation and, after a brief delay, reappeared at a horizontally displaced position. HC judged the direction of this displacement accurately, despite spatial errors in saccade landing site, indicating that their comparison of the actual to predicted postsaccadic target location relied on accurate CD. SZP performed worse and relied more on saccade landing site as a proxy for the presaccadic target, consistent with disturbed CD. This remapping failure was strongest in patients with more severe psychotic symptoms, consistent with the theoretical link between disturbed CD and phenomenological experiences in schizophrenia. |
Lars A. Ross; Victor A. Del Bene; Sophie Molholm; Hans Peter Frey; John J. Foxe Sex differences in multisensory speech processing in both typically developing children and those on the autism spectrum Journal Article In: Frontiers in Neuroscience, vol. 9, pp. 185, 2015. @article{Ross2015, Background: Previous work has revealed sizeable deficits in the abilities of children with an autism spectrum disorder (ASD) to integrate auditory and visual speech signals, with clear implications for social communication in this population. There is a strong male preponderance in ASD, with approximately four affected males for every female. The presence of sex differences in ASD symptoms suggests a sexual dimorphism in the ASD phenotype, and raises the question of whether this dimorphism extends to ASD traits in the neurotypical population. Here, we investigated possible sexual dimorphism in multisensory speech integration in both ASD and neurotypical individuals. Methods: We assessed whether males and females differed in their ability to benefit from visual speech when target words were presented at varying signal-to-noise ratios, in samples of neurotypical children and adults, and in children diagnosed with an ASD. Results: In typically developing (TD) children and children with ASD, females (n = 47 and n = 15, respectively) were significantly superior in their ability to recognize words under audiovisual listening conditions compared to males (n = 55 and n = 58, respectively). This sex difference was absent in our sample of neurotypical adults (n = 28 females; n = 28 males). Conclusions: We propose that the development of audiovisual integration is delayed in male relative to female children, a delay that is also observed in ASD. In neurotypicals, these sex differences disappear in early adulthood when females approach their performance maximum and males “catch up”.
Our findings underline the importance of considering sex differences in the search for autism endophenotypes and strongly encourage increased efforts to study the underrepresented population of females within ASD. |
Ian Cunnings; Clare Patterson; Claudia Felser Structural constraints on pronoun binding and coreference: Evidence from eye movements during reading Journal Article In: Frontiers in Psychology, vol. 6, pp. 840, 2015. @article{Cunnings2015, A number of recent studies have investigated how syntactic and non-syntactic constraints combine to cue memory retrieval during anaphora resolution. In this paper we investigate how syntactic constraints and gender congruence interact to guide memory retrieval during the resolution of subject pronouns. Subject pronouns are always technically ambiguous, and the application of syntactic constraints on their interpretation depends on properties of the antecedent that is to be retrieved. While pronouns can freely corefer with non-quantified referential antecedents, linking a pronoun to a quantified antecedent is only possible in certain syntactic configurations via variable binding. We report the results from a judgment task and three online reading comprehension experiments investigating pronoun resolution with quantified and non-quantified antecedents. Results from both the judgment task and participants' eye movements during reading indicate that comprehenders freely allow pronouns to corefer with non-quantified antecedents, but that retrieval of quantified antecedents is restricted to specific syntactic environments. We interpret our findings as indicating that syntactic constraints constitute highly weighted cues to memory retrieval during anaphora resolution. |
Mario Dalmaso; Giovanni Galfano; Luigi Castelli The impact of same- and other-race gaze distractors on the control of saccadic eye movements Journal Article In: Perception, vol. 44, no. 8-9, pp. 1020–1028, 2015. @article{Dalmaso2015, Two experiments were aimed at investigating whether the implementation of voluntary saccades in White participants could be modulated more strongly by gaze distractors embedded in White versus Black faces. Participants were instructed to make a rightward or leftward saccade, depending on a central directional cue. Saccade direction could be either congruent or incongruent with gaze direction of the distractor face. In Experiment 1, White faces produced greater interference on saccadic accuracy than Black faces when the averted-gaze face and cue onset were simultaneous rather than separated by a 900-ms asynchrony. In Experiment 2, two temporal intervals (50 ms vs. 1,000 ms) occurred between the initial presentation of the face with direct-gaze and the averted-gaze face onset, whereas the averted-gaze face and cue onset were synchronous. A greater interference emerged for White versus Black faces irrespective of the temporal interval. Overall, these findings suggest that the saccade generation system is sensitive to features of face stimuli conveying eye gaze. |
Maya Dank; Avital Deutsch; Kathryn Bock Resolving conflicts in natural and grammatical gender agreement: Evidence from eye movements Journal Article In: Journal of Psycholinguistic Research, vol. 44, no. 4, pp. 435–467, 2015. @article{Dank2015, The present research investigated the attraction phenomenon, which commonly occurs in the domain of production but is also apparent in comprehension. It particularly focused on its accessibility to conceptual influence, in analogy to previous findings in production in Hebrew (Deutsch and Dank, J Mem Lang, 60:112–143, 2009). The experiments made use of the contrast between grammatical and natural gender in Hebrew, using complex subject noun phrases containing head nouns and prepositional phrases with local nouns. Noun phrases were manipulated to produce (a) matches and mismatches in grammatical gender between heads and local nouns; and (b) inanimate nouns and animate nouns with natural gender that served either as head or as local nouns. These noun phrases were the subjects of sentences that ended with predicates agreeing in gender with the head noun, with the local noun, or both. The ungrammatical sentences were those in which the gender of the predicate and the head noun did not match. To assess the impact of conflicts in grammatical and natural gender on the time course of reading, participants' eye movements were monitored. The results revealed clear disruptions in reading the predicate due to grammatical-gender mismatches with head and local nouns, in analogy to attraction in production. When the head nouns conveyed natural gender these effects were amplified, but variations in the natural gender of local nouns had negligible consequences. The results imply that comprehension and production are similarly sensitive to the control of grammatical agreement by grammatical and natural gender in subject noun phrases. |
Ravi K. Das; Chandni Hindocha; Tom P. Freeman; Antonio I. Lazzarino; H. Valerie Curran; Sunjeev K. Kamboj Assessing the translational feasibility of pharmacological drug memory reconsolidation blockade with memantine in quitting smokers Journal Article In: Psychopharmacology, vol. 232, no. 18, pp. 3363–3374, 2015. @article{Das2015, RATIONALE: Preclinical reconsolidation research offers the first realistic opportunity to pharmacologically weaken the maladaptive memory structures that support relapse in drug addicts. N-methyl D-aspartate receptor (NMDAR) antagonism is a highly effective means of blocking drug memory reconsolidation. However, no research using this approach exists in human addicts. OBJECTIVES: The objective of this study was to assess the potential and clinical outcomes of blocking the reconsolidation of cue-smoking memories with memantine in quitting smokers. METHODS: Fifty-nine dependent smokers who were motivated to quit were randomised to one of three groups receiving the following: (1) memantine with or (2) without reactivation of associative cue-smoking memories or (3) reactivation with placebo on their target quit day in a double-blind manner. Participants aimed to abstain from smoking for as long as possible. Levels of smoking and FTND score were assessed prior to intervention and up to a year later. Primary outcome was latency to relapse. Subjective craving measures and attentional bias to smoking cues were assessed in-lab. RESULTS: All study groups successfully reduced their smoking up to 3 months. Memantine in combination with smoking memory reactivation did not affect any measure of smoking outcome, reactivity or attention capture to smoking cues. CONCLUSIONS: Brief exposure to smoking cues with memantine did not appear to weaken these memory traces. These findings could be due to insufficient reconsolidation blockade by memantine or failure of exposure to smoking stimuli to destabilise smoking memories.
Research assessing the treatment potential of reconsolidation blockade in human addicts should focus on identification of tolerable drugs that reliably block reward memory reconsolidation and retrieval procedures that reliably destabilise strongly trained memories. |
Isabelle Dautriche; Daniel Swingley; Anne Christophe Learning novel phonological neighbors: Syntactic category matters Journal Article In: Cognition, vol. 143, pp. 77–86, 2015. @article{Dautriche2015, Novel words (like tog) that sound like well-known words (dog) are hard for toddlers to learn, even though children can hear the difference between them (Swingley & Aslin, 2002, 2007). One possibility is that phonological competition alone is the problem. Another is that a broader set of probabilistic considerations is responsible: toddlers may resist considering tog as a novel object label because its neighbor dog is also an object. In three experiments, French 18-month-olds were taught novel words whose word forms were phonologically similar to familiar nouns (noun-neighbors), to familiar verbs (verb-neighbors) or to nothing (no-neighbors). Toddlers successfully learned the no-neighbors and verb-neighbors but failed to learn the noun-neighbors, although both novel neighbors had a familiar phonological neighbor in the toddlers' lexicon. We conclude that when creating a novel lexical entry, toddlers' evaluation of similarity in the lexicon is multidimensional, incorporating both phonological and semantic or syntactic features. |
Joshua Correll; Bernd Wittenbrink; Matthew T. Crawford; Melody S. Sadler Stereotypic vision: How stereotypes disambiguate visual stimuli Journal Article In: Journal of Personality and Social Psychology, vol. 108, no. 2, pp. 219–233, 2015. @article{Correll2015, Three studies examined how participants use race to disambiguate visual stimuli. Participants performed a first-person-shooter task in which Black and White targets appeared holding either a gun or an innocuous object (e.g., a wallet). In Study 1, diffusion analysis (Ratcliff, 1978) showed that participants rapidly acquired information about a gun when it appeared in the hands of a Black target, and about an innocuous object in the hands of a White target. For counterstereotypic pairings (armed Whites, unarmed Blacks), participants acquired information more slowly. In Study 2, eye tracking showed that participants relied on more ambiguous information (measured by visual angle from fovea) when responding to stereotypic targets; for counterstereotypic targets, they achieved greater clarity before responding. In Study 3, participants were briefly exposed to targets (limiting access to visual information) but had unlimited time to respond. In spite of their slow, deliberative responses, they showed racial bias. This pattern is inconsistent with control failure and suggests that stereotypes influenced identification of the object. All 3 studies show that race affects visual processing by supplementing objective information. |
Patrick H. Cox; Maximilian Riesenhuber There is a "U" in clutter: Evidence for robust sparse codes underlying clutter tolerance in human vision Journal Article In: Journal of Neuroscience, vol. 35, no. 42, pp. 14148–14159, 2015. @article{Cox2015, The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that the responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identity of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging, but rather robust object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, showing that the averaging-like responses can be understood as the result of responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance, namely, that target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. 
This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter. SIGNIFICANCE STATEMENT: The neural mechanisms underlying object recognition in cluttered scenes (i.e., containing more than one object) remain poorly understood. Studies have suggested that neural responses to multiple objects correspond to an average of the responses to the constituent objects. Yet, it is unclear how the identities of an unknown number of objects could be disentangled from a confounded average response. Here, we use a popular computational biological vision model to show that averaging-like responses can result from responses of clutter-tolerant neurons to suboptimal stimuli. The model also provides a novel prediction, that human detection ability should show a U-shaped dependency on target-clutter similarity, which is confirmed experimentally, supporting a simple, unifying account of how the brain performs object recognition in clutter. |
Hayley Crawford; Joanna Moss; Joseph P. McCleery; Giles M. Anderson; Chris Oliver Face scanning and spontaneous emotion preference in Cornelia de Lange syndrome and Rubinstein-Taybi syndrome Journal Article In: Journal of Neurodevelopmental Disorders, vol. 7, no. 1, pp. 1–12, 2015. @article{Crawford2015a, BACKGROUND: Existing literature suggests differences in face scanning in individuals with different socio-behavioural characteristics. Cornelia de Lange syndrome (CdLS) and Rubinstein-Taybi syndrome (RTS) are two genetically defined neurodevelopmental disorders with unique profiles of social behaviour. METHODS: Here, we examine eye gaze to the eye and mouth regions of neutrally expressive faces, as well as the spontaneous visual preference for happy and disgusted facial expressions compared to neutral faces, in individuals with CdLS versus RTS. RESULTS: Results indicate that the amount of time spent looking at the eye and mouth regions of faces was similar in 15 individuals with CdLS and 17 individuals with RTS. Both participant groups also showed a similar pattern of spontaneous visual preference for emotions. CONCLUSIONS: These results provide insight into two rare, genetically defined neurodevelopmental disorders that have been reported to exhibit contrasting socio-behavioural characteristics and suggest that differences in social behaviour may not be sufficient to predict attention to the eye region of faces. These results also suggest that differences in the social behaviours of these two groups may be cognitively mediated rather than subcortically mediated. |
Eileen T. Crehan; Robert R. Althoff Measuring the stare-in-the-crowd effect: a new paradigm to study social perception Journal Article In: Behavior Research Methods, vol. 47, no. 4, pp. 994–1003, 2015. @article{Crehan2015, Social perceptual ability plays a key role in successful social functioning. Social interactions demand a number of simultaneous skills, one of which is the detection of self-directed gaze. This study demonstrates how the ability to accurately detect self-directed gaze, called the stare-in-the-crowd effect, can be studied using a new eye-tracking paradigm. A set of images was developed to test this effect using a group of healthy undergraduate students. Eye movements and pupil size were tracked while they viewed these images. Participants also completed behavioral measures about themselves. Results show that self-directed gaze results in significantly more looking by participants. Behavioral predictors of gaze behaviors were not identified, likely given the health of the sample. However, correlations with variables are reported to explore in future research. |
Nigel T. M. Chen; Patrick J. F. Clarke; Tamara L. Watson; Colin MacLeod; Adam J. Guastella Attentional bias modification facilitates attentional control mechanisms: Evidence from eye tracking Journal Article In: Biological Psychology, vol. 104, pp. 139–146, 2015. @article{Chen2015d, Social anxiety is thought to be maintained by biased attentional processing towards threatening information. Research has further shown that the experimental attenuation of this bias, through the implementation of attentional bias modification (ABM), may serve to reduce social anxiety vulnerability. However, the mechanisms underlying ABM remain unclear. The present study examined whether inhibitory attentional control was associated with ABM. A non-clinical sample of participants was randomly assigned to receive either ABM or a placebo task. To assess pre-post changes in attentional control, participants were additionally administered an emotional antisaccade task. ABM participants exhibited a subsequent shift in attentional bias away from threat as expected. ABM participants further showed a subsequent decrease in antisaccade cost, indicating a general facilitation of inhibitory attentional control. Mediational analysis revealed that the shift in attentional bias following ABM was independent of the change in attentional control. The findings suggest that the mechanisms of ABM are multifaceted. |
Po-Heng Chen; Jie-Li Tsai In: Language and Linguistics, vol. 16, no. 4, pp. 555–586, 2015. @article{Chen2015, The purpose of the present study is twofold: (1) to examine whether the syntactic category constraint can determine the semantic resolution of Chinese syntactic category ambiguous words; and (2) to investigate whether the syntactic category of alternative meanings of Chinese homographs can influence the subordinate bias effect (SBE) during lexical ambiguity resolution. In the present study, four types of Chinese biased homographs (NN, VV, VN, and NV) were embedded into syntactically and semantically subordinate-biased sentences. Each homograph was assigned a frequency-matched unambiguous word as control, which could fit into the same sentence frame. Participants' eye movements were recorded as they read each sentence. In general, the results showed that in a subordinate-biased context, (1) the SBE for the four types of homograph was significant only in the second-pass reading on the post-target words and (2) numerically, the NV homographs revealed a larger effect size of SBE than VN homographs on both target and post-target words. Our findings support the constraint-satisfaction models, suggesting that the syntactic category constraint is not the only factor influencing the semantic resolution of syntactic category ambiguous words, which is opposed to the prediction of the syntax-first models. |
Qi Chen; Daniel Mirman Interaction between phonological and semantic representations: Time matters Journal Article In: Cognitive Science, vol. 39, no. 3, pp. 538–558, 2015. @article{Chen2015a, Computational modeling and eye-tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences activation of semantically related concepts during spoken word recognition (Apfelbaum, Blumstein, & McMurray, 2011). The model made a novel prediction: Semantic input modulates the effect of phonological neighbors on target word processing, producing an approximately inverted-U-shaped pattern with a high phonological density advantage at an intermediate level of semantic input-in contrast to the typical disadvantage for high phonological density words in spoken word recognition. This prediction was confirmed with a new analysis of the Apfelbaum et al. data and in a visual world paradigm experiment with preview duration serving as a manipulation of strength of semantic input. These results are consistent with our previous claim that strongly active neighbors produce net inhibitory effects and weakly active neighbors produce net facilitative effects. |
Sheng-Chang Chen; Mi-Shan Hsiao; Hsiao-Ching She In: Computers in Human Behavior, vol. 53, pp. 169–180, 2015. @article{Chen2015e, This study examined the effectiveness of the different spatial abilities of high school students who constructed their understanding of the atomic orbital concepts and mental models after learning with multimedia learning materials presented in static and dynamic modes of 3D representation. A total of 60 high school students participated in this study and were randomly assigned into static and dynamic 3D representation groups. The dependent variables included a pre-test and post-test on atomic orbital concepts, an atomic orbital mental model construction test, and students' eye-movement behaviors. Results showed that students who learned with dynamic 3D representation allocated a significantly greater amount of attention, exhibited better performance on the mental model test, and constructed more sophisticated 3D hybridizations of the orbital mental model than the students in the static 3D group. The logistic regression result indicated that the dynamic 3D representation group students' number of saccades and number of re-readings were positive predictors, while the number of fixations was the negative predictor, for developing the students' 3D mental models of an atomic orbital. High-spatial-ability students outperformed the low-spatial-ability students on the atomic orbital conceptual test and mental model construction, while both types of students allocated similar amounts of attention to the 3D representations. Our results demonstrated that low-spatial-ability students' eye movement behaviors positively correlate with their performance on the atomic orbital concept test and the mental model construction. |
Xinxin Chen; Hongyan Yu; Fang Yu What is the optimal number of response alternatives for rating scales? From an information processing perspective Journal Article In: Journal of Marketing Analytics, vol. 3, no. 2, pp. 69–78, 2015. @article{Chen2015f, Rating scales are measuring instruments that are widely used in social science research. However, many different rating scale formats are used in the literature, differing specifically in the number of response alternatives offered. Previous studies on the optimal number of response alternatives have focused exclusively on the participants' final response results, rather than on the participants' information processing. We used an eye-tracking study to explore this issue from an information processing perspective. We analyzed the information processing in six scales with different response alternatives. We compared the reaction times, net acquiescence response styles, extreme response styles and proportional changes in the response alternatives of the six scales. Our results suggest that the optimal number of response alternatives is five. |
Joseph D. Chisholm; Alan Kingstone Action video game players' visual search advantage extends to biologically relevant stimuli Journal Article In: Acta Psychologica, vol. 159, pp. 93–99, 2015. @article{Chisholm2015, Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess whether action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provide evidence that these benefits can generalize to more complex and biologically relevant stimuli. |
Joseph D. Chisholm; Alan Kingstone Action video games and improved attentional control: Disentangling selection-and response-based processes Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 5, pp. 1430–1436, 2015. @article{Chisholm2015a, Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus–response processes that impact human performance. |
Wonil Choi; John M. Henderson Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing Journal Article In: Neuropsychologia, vol. 75, pp. 109–118, 2015. @article{Choi2015, Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising the frontal eye field (FEF), supplementary eye field (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, was also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. |
Wonil Choi; Matthew W. Lowder; Fernanda Ferreira; John M. Henderson Individual differences in the perceptual span during reading: Evidence from the moving window technique Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 7, pp. 2463–2475, 2015. @article{Choi2015a, We report the results of an eye tracking experiment that used the gaze-contingent moving window technique to examine individual differences in the size of readers' perceptual span. Participants read paragraphs while the size of the rightward window of visible text was systematically manipulated across trials. In addition, participants completed a large battery of individual-difference measures representing two cognitive constructs: language ability and oculomotor processing speed. Results showed that higher scores on language ability measures and faster oculomotor processing speed were associated with faster reading times and shorter fixation durations. More interestingly, the size of readers' perceptual span was modulated by individual differences in language ability but not by individual differences in oculomotor processing speed, suggesting that readers with greater language proficiency are more likely to have efficient mechanisms to extract linguistic information beyond the fixated word. |