All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc., or search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2016
Elizabeth R. Schotter; Mallorie Leinenger Reversed preview benefit effects: Forced fixations emphasize the importance of parafoveal vision for efficient reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 12, pp. 2039–2067, 2016. @article{Schotter2016, Current theories of eye movement control in reading posit that processing of an upcoming parafoveal preview word is used to facilitate processing of that word once it is fixated (i.e., as a foveal target word). This preview benefit is demonstrated by shorter fixation durations in the case of valid (i.e., identical or linguistically similar) compared with invalid (i.e., dissimilar) preview conditions. However, we suggest that processing of the preview can directly influence fixation behavior on the target, independent of similarity between them. In Experiment 1, unrelated high and low frequency words were used as orthogonally crossed previews and targets and we observed a reversed preview benefit for low frequency targets—shorter fixation durations with an invalid, higher frequency preview compared with a valid, low frequency preview. In Experiment 2, the target words were replaced with orthographically legal and illegal nonwords and we found a similar effect of preview frequency on fixation durations on the targets, as well as a bimodal distribution in the illegal nonword target conditions with a denser early peak for high than low frequency previews. In Experiment 3, nonwords were used as previews for high and low frequency targets, replicating standard findings that “denied” preview increases fixation durations and the influence of target properties. These effects can be explained by forced fixations, cases in which fixations on the target were shortened as a consequence of the timing of word recognition of the preview relative to the time course of saccade programming to that word from the prior one. That is, the preview word was (at least partially) recognized so that it should have been skipped, but the word could not be skipped because the saccade to that word was in a nonlabile stage. In these cases, the system preinitiates the subsequent saccade off the upcoming word to the following word and the intervening fixation is short. |
Jillian M. Schuh; Inge Marie Eigsti; Daniel Mirman Discourse comprehension in autism spectrum disorder: Effects of working memory load and common ground Journal Article In: Autism Research, vol. 9, no. 12, pp. 1340–1352, 2016. @article{Schuh2016, Pragmatic language impairments are nearly universal in autism spectrum disorders (ASD). Discourse requires that we monitor information that is shared or mutually known, called "common ground." While many studies have examined the role of Theory of Mind (ToM) in such impairments, few have examined working memory (WM). Common ground impairments in ASD could reflect limitations in both WM and ToM. This study explored common ground use in youth ages 8-17 years with high-functioning ASD (n = 13) and typical development (n = 22); groups did not differ on age, gender, IQ, or standardized language. We tracked participants' eye movements while they performed a discourse task in which some information was known only to the participant (e.g., was privileged; a manipulation of ToM). In addition, the amount of privileged information varied (a manipulation of WM). All participants were slower to fixate the target when considering privileged information, and this effect was greatest during high WM load trials. Further, the ASD group was more likely to fixate competing (non-target) shapes. Predictors of fixation patterns included ASD symptomatology, language ability, ToM, and WM. Groups did not differ in ToM. Individuals with better WM fixated the target more rapidly, suggesting an association between WM capacity and efficient discourse. In addition to ToM knowledge, WM capacity constrains common ground representation and impacts pragmatic skills in ASD. Social impairments in ASD are thus associated with WM capacity, such that deficits in domain-general, nonsocial processes such as WM exert an influence during complex social interactions. |
Tarkeshwar Singh; Christopher M. Perry; Troy M. Herter A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment Journal Article In: Journal of NeuroEngineering and Rehabilitation, vol. 13, pp. 1–17, 2016. @article{Singh2016, BACKGROUND: Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS: Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS: The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth. |
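For readers who want a concrete sense of the velocity-based classification described in this abstract, the sketch below shows a generic speed-threshold labeling of gaze samples in Python. It is not the authors' published algorithm; the function name, sampling-rate argument, and threshold values are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' algorithm): a minimal velocity-threshold
# classifier for monocular gaze samples, assuming gaze positions have already been
# converted to angular coordinates (degrees) at a fixed sampling rate.
import numpy as np

def classify_gaze_events(x_deg, y_deg, fs_hz,
                         saccade_thresh=30.0, pursuit_thresh=5.0):
    """Label each sample as 'fixation', 'pursuit', or 'saccade' from its angular velocity (deg/s)."""
    vx = np.gradient(x_deg) * fs_hz          # horizontal velocity, deg/s
    vy = np.gradient(y_deg) * fs_hz          # vertical velocity, deg/s
    speed = np.hypot(vx, vy)                 # radial eye speed

    labels = np.full(speed.shape, 'fixation', dtype=object)
    labels[speed >= pursuit_thresh] = 'pursuit'   # placeholder threshold
    labels[speed >= saccade_thresh] = 'saccade'   # placeholder threshold
    return labels

# Example with synthetic data sampled at 500 Hz: an 8-degree step mimics a saccade
t = np.arange(0, 1, 1 / 500)
x = np.where(t < 0.5, 0.0, 8.0)
y = np.zeros_like(t)
events = classify_gaze_events(x, y, fs_hz=500)
```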
Petra Sinn; Ralf Engbert Small saccades versus microsaccades: Experimental distinction and model-based unification Journal Article In: Vision Research, vol. 118, pp. 132–143, 2016. @article{Sinn2016, Natural vision is characterized by alternating sequences of rapid gaze shifts (saccades) and fixations. During fixations, microsaccades and slower drift movements occur spontaneously, so that the eye is never motionless. Theoretical models of fixational eye movements predict that microsaccades are dynamically coupled to slower drift movements generated immediately before microsaccades, which might be used as a criterion to distinguish microsaccades from small voluntary saccades. Here we investigate a sequential scanning task, where participants generate goal-directed saccades and microsaccades with overlapping amplitude distributions. We show that properties of microsaccades are correlated with precursory drift motion, while amplitudes of goal-directed saccades do not depend on previous drift epochs. We develop and test a mathematical model that integrates goal-directed and fixational eye movements, including microsaccades. Using model simulations, we reproduce the experimental finding of correlations within fixational eye movement components (i.e., between physiological drift and microsaccades) but not between goal-directed saccades and fixational drift motion. These results lend support to a functional difference between microsaccades and goal-directed saccades, while, at the same time, both types of behavior may be part of an oculomotor continuum that is quantitatively described by our mathematical model. |
Kevin J. Skoblenick; Thilo Womelsdorf; Stefan Everling Ketamine alters outcome-related local field potentials in monkey prefrontal cortex Journal Article In: Cerebral Cortex, vol. 26, no. 6, pp. 2743–2752, 2016. @article{Skoblenick2016, A subanesthetic dose of the noncompetitive N-methyl-d-aspartate receptor antagonist ketamine is known to induce a schizophrenia-like phenotype in humans and nonhuman primates alike. The transient behavioral changes mimic the positive, negative, and cognitive symptoms of the disease but the neural mechanisms behind these changes are poorly understood. A growing body of evidence indicates that the cognitive control processes associated with prefrontal cortex (PFC) regions rely on groups of neurons synchronizing at narrow-band frequencies measurable in the local field potential (LFP). Here, we recorded LFPs from the caudo-lateral PFC of 2 macaque monkeys performing an antisaccade task, which requires the suppression of an automatic saccade toward a stimulus and the initiation of a goal-directed saccade in the opposite direction. Preketamine injection activity showed significant differences in a narrow 20–30 Hz beta frequency band between correct and error trials in the postsaccade response epoch. Ketamine significantly impaired the animals' performance and was associated with a loss of the differences in outcome-specific beta-band power. Instead, we observed a large increase in high-gamma-band activity. Our results suggest that the PFC employs beta-band synchronization to prepare for top–down cognitive control of saccades and the monitoring of task outcome. |
Timothy J. Slattery; Mark Yates; Bernhard Angele Interword and interletter spacing effects during reading revisited: Interactions with word and font characteristics Journal Article In: Journal of Experimental Psychology: Applied, vol. 22, no. 4, pp. 406–422, 2016. @article{Slattery2016, Despite the large number of eye movement studies conducted over the past 30+ years, relatively few have examined the influence that font characteristics have on reading. However, there has been renewed interest in 1 particular font characteristic, letter spacing, which has both theoretical (visual word recognition) and applied (font design) importance. Recently published results that letter spacing has a bigger impact on the reading performance of dyslexic children have perhaps garnered the most attention (Zorzi et al., 2012). Unfortunately, the effects of increased interletter spacing have been mixed with some authors reporting facilitation and others reporting inhibition (van den Boer & Hakvoort, 2015). The authors present findings from 3 experiments designed to resolve the seemingly inconsistent letter-spacing effects and provide clarity to researchers and font designers. The results indicate that the direction of spacing effects depends on the size of the default spacing chosen by font developers. Experiment 3 found that interletter spacing interacts with interword spacing, as the required space between words depends on the amount of space used between letters. Interword spacing also interacted with word type as the inhibition seen with smaller interword spacing was evident with nouns and verbs but not with function words. |
B. J. Sleezer; M. D. Castagno; Benjamin Y. Hayden Rule encoding in orbitofrontal cortex and striatum guides selection Journal Article In: Journal of Neuroscience, vol. 36, no. 44, pp. 11223–11237, 2016. @article{Sleezer2016a, Active maintenance of rules, like other executive functions, is often thought to be the domain of a discrete executive system. An alternative view is that rule maintenance is a broadly distributed function relying on widespread cortical and subcortical circuits. Tentative evidence supporting this view comes from research showing some rule selectivity in the orbitofrontal cortex and dorsal striatum. We recorded in these regions and in the ventral striatum, which has not been associated previously with rule representation, as macaques performed a Wisconsin Card Sorting Task. We found robust encoding of rule category (color vs shape) and rule identity (six possible rules) in all three regions. Rule identity modulated responses to potential choice targets, suggesting that rule information guides behavior by highlighting choice targets. The effects that we observed were not explained by differences in behavioral performance across rules and thus cannot be attributed to reward expectation. Our results suggest that rule maintenance and rule-guided selection of options are distributed processes and provide new insight into orbital and striatal contributions to executive control. |
Brianna J. Sleezer; Benjamin Y. Hayden Differential contributions of ventral and dorsal striatum to early and late phases of cognitive set reconfiguration Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 12, pp. 1849–1864, 2016. @article{Sleezer2016, Flexible decision-making, a defining feature of human cognition, is typically thought of as a canonical pFC function. Recent work suggests that the striatum may participate as well; however, its role in this process is not well understood. We recorded activity of neurons in both the ventral (VS) and dorsal (DS) striatum while rhesus macaques performed a version of the Wisconsin Card Sorting Test, a classic test of flexibility. Our version of the task involved a trial-and-error phase before monkeys could identify the correct rule on each block. We observed changes in firing rate in both regions when monkeys switched rules. Specifically, VS neurons demonstrated switch-related activity early in the trial-and-error period when the rule needed to be updated, and a portion of these neurons signaled information about the switch context (i.e., whether the switch was intradimensional or extradimensional). Neurons in both VS and DS demonstrated switch-related activity at the end of the trial-and-error period, immediately before the rule was fully established and maintained, but these signals did not carry any information about switch context. We also observed associative learning signals (i.e., specific responses to options associated with rewards in the presentation period before choice) that followed the same pattern as switch signals (early in VS, later in DS). Taken together, these results endorse the idea that the striatum participates directly in cognitive set reconfiguration and suggest that single neurons in the striatum may contribute to a functional handoff from the VS to the DS during reconfiguration processes. |
Adam C. Snyder; Michael J. Morais; Matthew A. Smith Dynamics of excitatory and inhibitory networks are differentially altered by selective attention Journal Article In: Journal of Neurophysiology, vol. 116, no. 4, pp. 1807–1820, 2016. @article{Snyder2016, Inhibition and excitation form two fundamental modes of neuronal interaction, yet we understand relatively little about their distinct roles in service of perceptual and cognitive processes. We developed a multidimensional waveform analysis to identify fast-spiking (putative inhibitory) and regular-spiking (putative excitatory) neurons in vivo and used this method to analyze how attention affects these two cell classes in visual area V4 of rhesus macaques. We found that putative inhibitory neurons had both greater increases in firing rate and decreases in correlated variability with attention when compared to putative excitatory neurons. Moreover, the time course of attention effects for putative inhibitory neurons more closely tracked the temporal statistics of target probability in our task. Finally, the session-to-session variability in a behavioral measure of attention co-varied with the magnitude of this effect. Together, these results suggest that selective targeting of inhibitory neurons and networks is a critical mechanism for attentional modulation. |
Annie Tremblay; Mirjam Broersma; Caitlin E. Coughlin; Jiyoun Choi Effects of the native language on the learning of fundamental frequency in second-language speech segmentation Journal Article In: Frontiers in Psychology, vol. 7, pp. 985, 2016. @article{Tremblay2016, This study investigates whether the learning of prosodic cues to word boundaries in speech segmentation is more difficult if the native and second/foreign languages (L1 and L2) have similar (though non-identical) prosodies than if they have markedly different prosodies (Prosodic-Learning Interference Hypothesis). It does so by comparing French, Korean, and English listeners' use of fundamental-frequency (F0) rise as a cue to word-final boundaries in French. F0 rise signals phrase-final boundaries in French and Korean but word-initial boundaries in English. Korean-speaking and English-speaking L2 learners of French, who were matched in their French proficiency and French experience, and native French listeners completed a visual-world eye-tracking experiment in which they recognized words whose final boundary was or was not cued by an increase in F0. The results showed that Korean listeners had greater difficulty using F0 rise as a cue to word-final boundaries in French than French and English listeners. This suggests that L1-L2 prosodic similarity can make the learning of an L2 segmentation cue difficult, in line with the proposed Prosodic-Learning Interference Hypothesis. We consider mechanisms that may underlie this difficulty and discuss the implications of our findings for understanding listeners' phonological encoding of L2 words. |
Johanne Tromp; Peter Hagoort; Antje S. Meyer Pupillometry reveals increased pupil size during indirect request comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 6, pp. 1093–1108, 2016. @article{Tromp2016, Fluctuations in pupil size have been shown to reflect variations in processing demands during lexical and syntactic processing in language comprehension. An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In two pupillometry experiments, we investigated whether pupil diameter was sensitive to increased processing demands as a result of comprehending an indirect request versus a direct statement. Adult participants were presented with 120 picture-sentence combinations that could be interpreted either as an indirect request (a picture of a window with the sentence "it's very hot here") or as a statement (a picture of a window with the sentence "it's very nice here"). Based on the hypothesis that understanding indirect utterances requires additional inferences to be made on the part of the listener, we predicted a larger pupil diameter for indirect requests than statements. The results of both experiments are consistent with this expectation. We suggest that the increase in pupil size reflects additional processing demands for the comprehension of indirect requests as compared to statements. This research demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics. |
Wendy Troop-Gordon; Robert D. Gordon; Laura Vogel-Ciernia; Elizabeth Ewing Lee; Kari J. Visconti Visual attention to dynamic scenes of ambiguous provocation and children's aggressive behavior Journal Article In: Journal of Clinical Child and Adolescent Psychology, pp. 1–16, 2016. @article{TroopGordon2016, Research on biases in attention related to children's aggression has yielded mixed results. Some research suggests that inattention to social cues and reliance on maladaptive social schemas underlie aggression. Other research suggests that maladaptive social schemas lead aggressive individuals to attend to nonhostile cues. The primary objective of this study was to test the proposition that aggression is related to delayed attention to cues followed by selective attention to nonhostile cues after the provocation has occurred. A second objective was to test whether these biases are associated with aggression only when children hold negative social schemas. The eye fixations of 70 children (34 boys, 36 girls; mean age = 11.71 years) were monitored with an eye tracker as they watched video clips of child actors portraying scenes of ambiguous provocation. Aggression was measured using peer-, teacher-, and parent-reports, and children completed a measure of antisocial and prosocial peer beliefs. Aggressive behavior was associated with greater time until fixation on the provocateur among youth who held antisocial peer beliefs. Aggression was also associated with greater time until fixation on an actor displaying empathy for the victim among children reporting low levels of prosocial peer beliefs. After the provocation, aggression was associated with suppressed attention to an amused peer among children who held negative peer beliefs. Increasing attention to cues in a scene of ambiguous provocation, in conjunction with fostering more positive beliefs about peers, may be effective in reducing hostile responding among aggressive youth. |
Massimo Turatto; David Pascucci Short-term and long-term plasticity in the visual-attention system: Evidence from habituation of attentional capture Journal Article In: Neurobiology of Learning and Memory, vol. 130, pp. 159–169, 2016. @article{Turatto2016, Attention is known to be crucial for learning and to regulate activity-dependent brain plasticity. Here we report the opposite scenario, with plasticity affecting the onset-driven automatic deployment of spatial attention. Specifically, we showed that attentional capture is subject to habituation, a fundamental form of plasticity consisting of a response decrement to repeated stimulation. Participants performed a visual discrimination task with focused attention, while being occasionally exposed to a distractor consisting of a high-luminance peripheral onset. With practice, short-term and long-term habituation of attentional capture emerged, making the visual-attention system fully immune to distraction. Furthermore, spontaneous recovery of attentional capture was found when the distractor was temporarily removed. Capture, however, once habituated, was surprisingly resistant to spontaneous recovery, taking from several minutes to days to recover. The results suggest that the mechanisms subserving exogenous attentional orienting are subject to profound and enduring plastic changes based on previous experience, and that habituation can impact high-order cognitive functions. |
Alexandra Ţurcan; Ruth Filik An eye-tracking investigation of written sarcasm comprehension: The roles of familiarity and context Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 12, pp. 1867–1893, 2016. @article{Turcan2016, This article addresses a current theoretical debate between the standard pragmatic model, the graded salience hypothesis, and the implicit display theory, by investigating the roles of the context and of the properties of the sarcastic utterance itself in the comprehension of a sarcastic remark. Two eye-tracking experiments were conducted where we manipulated the speaker's expectation in the context and the familiarity of the sarcastic remark. The results of the first eye-tracking study showed that literal comments were read faster than unfamiliar sarcastic comments, regardless of whether an explicit expectation was present in the context. The results of the second eye-tracking study indicated an early processing difficulty for unfamiliar sarcastic comments, but not for familiar sarcastic comments. Later reading time measures indicated a general difficulty for sarcastic comments. Overall, results seem to suggest that the familiarity of the utterance does indeed affect the time course of sarcasm processing (supporting the graded salience hypothesis), although there is no evidence that making the speaker's expectation explicit in the context affects it as well (thus failing to support the implicit display theory). |
Yoshiyuki Ueda; Atsuko Tominaga; Shogo Kajimura; Michio Nomura Spontaneous eye blinks during creative task correlate with divergent processing Journal Article In: Psychological Research, vol. 80, no. 4, pp. 652–659, 2016. @article{Ueda2016, Creativity consists of divergent and convergent thinking, with both related to individual eye blinks at rest. To assess underlying mechanisms between eye blinks and traditional creativity tasks, we investigated the relationship between creativity performance and eye blinks at rest and during tasks. Participants performed an alternative uses and remote association task while eye blinks were recorded. Results showed that the relationship between eye blinks at rest and creativity performance was compatible with those of previous research. Interestingly, we found that the generation of ideas increased as a function of eye blink number during the alternative uses task. On the other hand, during the remote association task, accuracy was independent of eye blink number during the task, but response time increased with it. Moreover, eye blink changes in participants who responded quickly during the remote association task were different depending on their resting state eye blinks; that is, participants with many eye blinks during rest showed little increase in eye blinks and achieved solutions quickly. Positive correlations between eye blinks during creative tasks and yielding ideas on the alternative uses task and response time on the remote association task suggest that eye blinks during creativity tasks relate to divergent thinking processes such as conceptual reorganization. |
Heng Ru May Tan; Joachim Gross; P. J. Uhlhaas MEG sensor and source measures of visually induced gamma-band oscillations are highly reliable Journal Article In: NeuroImage, vol. 137, pp. 34–44, 2016. @article{Tan2016, High frequency brain oscillations are associated with numerous cognitive and behavioral processes. Non-invasive measurements using electro-/magnetoencephalography (EEG/MEG) have revealed that high frequency neural signals are heritable and manifest changes with age as well as in neuropsychiatric illnesses. Despite the extensive use of EEG/MEG-measured neural oscillations in basic and clinical research, studies demonstrating test-retest reliability of power and frequency measures of neural signals remain scarce. Here, we evaluated the test-retest reliability of visually induced gamma (30-100 Hz) oscillations derived from sensor and source signals acquired over two MEG sessions. The study required participants (N = 13) to detect the randomly occurring stimulus acceleration while viewing a moving concentric grating. Sensor and source MEG measures of gamma-band activity yielded comparably strong reliability (average intraclass correlation |
Hanlin Tang; Jedediah M. Singer; Matias J. Ison; Gnel Pivazyan; Melissa Romaine; Rosa Frias; Elizabeth Meller; Adrianna Boulin; James Carroll; Victoria Perron; Sarah Dowcett; Marlise Arellano; Gabriel Kreiman Predicting episodic memory formation for movie events Journal Article In: Scientific Reports, vol. 6, pp. 30175, 2016. @article{Tang2016, Episodic memories are long lasting and full of detail, yet imperfect and malleable. We quantitatively evaluated recollection of short audiovisual segments from movies as a proxy to real-life memory formation in 161 subjects at 15 minutes up to a year after encoding. Memories were reproducible within and across individuals, showed the typical decay with time elapsed between encoding and testing, were fallible yet accurate, and were insensitive to low-level stimulus manipulations but sensitive to high-level stimulus properties. Remarkably, memorability was also high for single movie frames, even one year post-encoding. To evaluate what determines the efficacy of long-term memory formation, we developed an extensive set of content annotations that included actions, emotional valence, visual cues and auditory cues. These annotations enabled us to document the content properties that showed a stronger correlation with recognition memory and to build a machine-learning computational model that accounted for episodic memory formation in single events for group averages and individual subjects with an accuracy of up to 80%. These results provide initial steps towards the development of a quantitative computational theory capable of explaining the subjective filtering steps that lead to how humans learn and consolidate memories. |
A. Caglar Tas; Steven J. Luck; Andrew Hollingworth The relationship between visual attention and visual working memory encoding: A dissociation between covert and overt orienting Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 8, pp. 1121–1138, 2016. @article{Tas2016, There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into VWM. Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1–3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. |
Jessica Taubert; Valerie Goffaux; Goedele Van Belle; Wim Vanduffel; Rufin Vogels The impact of orientation filtering on face-selective neurons in monkey inferior temporal cortex Journal Article In: Scientific Reports, vol. 6, pp. 21189, 2016. @article{Taubert2016, Faces convey complex social signals to primates. These signals are tolerant of some image transformations (e.g. changes in size) but not others (e.g. picture-plane rotation). By filtering face stimuli for orientation content, studies of human behavior and brain responses have shown that face processing is tuned to selective orientation ranges. In the present study, for the first time, we recorded the responses of face-selective neurons in monkey inferior temporal (IT) cortex to intact and scrambled faces that were filtered to selectively preserve horizontal or vertical information. Guided by functional maps, we recorded neurons in the lateral middle patch (ML), the lateral anterior patch (AL), and an additional region located outside of the functionally defined face-patches (CONTROL). We found that neurons in ML preferred horizontal-passed faces over their vertical-passed counterparts. Neurons in AL, however, had a preference for vertical-passed faces, while neurons in CONTROL had no systematic preference. Importantly, orientation filtering did not modulate the firing rate of neurons to phase-scrambled face stimuli in any recording region. Together these results suggest that face-selective neurons found in the face-selective patches are differentially tuned to orientation content, with horizontal tuning in area ML and vertical tuning in area AL. |
Jessica Nelson Taylor; Charles A. Perfetti Eye movements reveal readers' lexical quality and reading experience Journal Article In: Reading and Writing, vol. 29, no. 6, pp. 1069–1103, 2016. @article{Taylor2016, Two experiments demonstrate that individual differences among normal adult readers, including lexical quality, are expressed in silent reading at the word level. In the first of two studies we identified major dimensions of variability among college readers and among words using factor analysis. We then examined the effects of these dimensions of variability on eye movements during paragraph reading. More experienced readers (who also were higher in reading speed) read words more quickly, especially less frequent words, while readers with higher lexical knowledge showed shorter early fixations, especially for more frequent words. These results suggest that individual differences in reading may reflect differences in the quality of lexical representations and in reading experience, which is a source of lexical quality. In a second study, we controlled the lexical knowledge readers obtained from new words through a training paradigm that varied exposure to a word's orthographic, phonological, and meaning constituents. Training exposure to orthographic and phonological constituents affected first pass reading measures, and phonological and meaning training affected second pass measures. Incomplete knowledge of word components slowed first pass reading times, compared to both more complete knowledge and no knowledge. Training effects were mediated by individual differences, pointing to lexical quality and reading experience—which, combined, reflect reading expertise—as important in word reading as part of text reading. |
Yasuo Terao; Hideki Fukuda; Shinnichi Tokushuge; Yoshiko Nomura; Ritsuko Hanajima; Yoshikazu Ugawa Saccade abnormalities associated with focal cerebral lesions – How cortical and basal ganglia commands shape saccades in humans Journal Article In: Clinical Neurophysiology, vol. 127, no. 8, pp. 2953–2967, 2016. @article{Terao2016, Objective: To study saccade abnormalities associated with focal cerebral lesions, including the cerebral cortex and basal ganglia (BG). Methods: We studied the latency and amplitude of reflexive and voluntary saccades in 37 patients with focal lesions of the frontal and parietal cortices and BG (caudate and putamen), and 51 age-matched controls, along with the ability to inhibit unwanted reflexive saccades. Results: Latencies of reflexive saccades were prolonged in patients with parietal lesions involving the parietal eye field (PEF), whereas their amplitude was decreased with parietal or putaminal lesions. In contrast, latency of voluntary saccades was prolonged and their success rate reduced with frontal lesions including the frontal eye field (FEF) or its outflow tract as well as the dorsolateral/medial prefrontal cortex, and caudate lesions, whereas their amplitude was decreased with parietal lesions. Inhibitory control of reflexive saccades was impaired with frontal, caudate and, less prominently, parietal lesions. Conclusions: PEF is important in triggering reflexive saccades, also determining their amplitude. Whereas FEF and the caudate emit commands for initiating voluntary saccades, their amplitude is mainly determined by PEF. Commands not only from FEF and dorsolateral/medial prefrontal cortex but also from the caudate and PEF serve to inhibit unnecessary reflexive saccades. Significance: The findings suggested how cortical and BG commands shape reflexive and voluntary saccades in humans. |
Louis Thibault; Ronald Van Den Berg; Patrick Cavanagh; Claire Sergent Retrospective attention gates discrete conscious access to past sensory stimuli Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0148504, 2016. @article{Thibault2016, Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows for a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly determined by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously. |
Vijay Vitthal Thitme; Akanksha Varghese Image retrieval using vector of locally aggregated descriptors Journal Article In: International Journal of Advance Research in Computer Science and Management Studies, vol. 4, no. 2, pp. 97–104, 2016. @article{Thitme2016, Partial-duplicate image retrieval is a powerful and important task in real-world applications such as landmark search, copyright protection, and fake-image identification. On internet platforms such as social networking sites, users continuously upload images that may be partial duplicates of one another. A partial duplicate is a segment of a whole image, possibly transformed in scale, resolution, illumination, rotation, or viewpoint. The practical value of this task in these settings motivated the present study. Object-based image retrieval methods generally use the whole image as the query and, by analogy with text retrieval systems, represent it with a bag of visual words (BOV). Because images can contain substantial noise, and because no spatial information is exploited, such approaches scale poorly to large image datasets. State-of-the-art retrieval methods represent an image with a high-dimensional vector of visual words by quantizing local features, such as Scale Invariant Feature Transform descriptors, solely in descriptor space. Here, quantization of local features into visual words is performed first in descriptor space and then in orientation space, and a Local Self-Similarity Descriptor (LSSD) is used to capture the internal geometric layouts of locally self-similar regions near interest points. |
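The entry above refers to the vector of locally aggregated descriptors (VLAD) representation. As a rough illustration of the general VLAD idea (not the authors' implementation), the following Python sketch aggregates descriptor residuals against a set of cluster centers; the array shapes, power-normalization step, and random example data are assumptions.

```python
# Minimal VLAD encoding sketch, assuming local descriptors (e.g., SIFT) are already
# extracted and cluster centers come from k-means on a separate training set.
import numpy as np

def vlad_encode(descriptors, centers):
    """Aggregate residuals of each descriptor to its nearest center, then normalize."""
    k, d = centers.shape
    # nearest cluster center for every descriptor
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    assignments = np.argmin(dists, axis=1)

    vlad = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assignments == i]
        if len(members):
            vlad[i] = (members - centers[i]).sum(axis=0)   # residual sum per center

    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))           # power normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

# Example: 200 random 128-D "descriptors" against 16 random centers
rng = np.random.default_rng(0)
code = vlad_encode(rng.normal(size=(200, 128)), rng.normal(size=(16, 128)))
```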
Kathleen Thomaes; Iris M. Engelhard; Marit Sijbrandij; Danielle C. Cath; Odile A. Heuvel Degrading traumatic memories with eye movements: A pilot functional MRI study in PTSD Journal Article In: European Journal of Psychotraumatology, vol. 7, no. 1, pp. 1–10, 2016. @article{Thomaes2016, Background: Eye movement desensitization and reprocessing (EMDR) is an effective treatment for post-traumatic stress disorder (PTSD). During EMDR, the patient recalls traumatic memories while making eye movements (EMs). Making EMs during recall is associated with decreased vividness and emotionality of traumatic memories, but the underlying mechanism has been unclear. Recent studies support a "working-memory" (WM) theory, which states that the two tasks (recall and EMs) compete for limited capacity of WM resources. However, prior research has mainly relied on self-report measures. Methods: Using functional magnetic resonance imaging, we tested whether "recall with EMs," relative to a "recall-only" control condition, was associated with reduced activity of primary visual and emotional processing brain regions, associated with vividness and emotionality respectively, and increased activity of the dorsolateral prefrontal cortex (DLPFC), associated with working memory. We used a randomized, controlled, crossover experimental design in eight adult patients with a primary diagnosis of PTSD. A script-driven imagery (SDI) procedure was used to measure responsiveness to an audio-script depicting the participant's traumatic memory before and after conditions. Results: SDI activated mainly emotional processing-related brain regions (anterior insula, rostral anterior cingulate cortex (ACC), and dorsomedial prefrontal cortex), WM-related (DLPFC), and visual (association) brain regions before both conditions. Although the predicted pre- to post-test decrease in amygdala activation after "recall with EMs" was not significant, SDI activated less right amygdala and rostral ACC activity after "recall with EMs" compared to post-"recall-only." Furthermore, functional connectivity from the right amygdala to the rostral ACC was decreased after "recall with EMs" compared with after "recall-only." Conclusions: These preliminary results in a small sample suggest that making EMs during recall, which is part of the regular EMDR treatment protocol, might reduce activity and connectivity in emotional processing-related areas. This study warrants replication in a larger sample. |
Paul M. J. Thomas; Lily Fitz Gibbon; Jane E. Raymond Value conditioning modulates visual working memory processes Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 1, pp. 6–10, 2016. @article{Thomas2016, Learning allows the value of motivationally salient events to become associated with stimuli that predict those events. Here, we asked whether value associations could facilitate visual working memory (WM), and whether such effects would be valence dependent. Our experiment was specifically designed to isolate value-based effects on WM from value-based effects on selective attention that might be expected to bias encoding. In a simple associative learning task, participants learned to associate the color of tinted faces with gaining or losing money or neither. Tinted faces then served as memoranda in a face identity WM task for which previously learned color associations were irrelevant and no monetary outcomes were forthcoming. Memory was best for faces with gain-associated tints, poorest for faces with loss-associated tints, and average for faces with no-outcome-associated tints. Value associated with 1 item in the WM array did not modulate memory for other items in the array. Eye movements when studying faces did not depend on the valence of previously learned color associations, arguing against value-based biases being due to differential encoding. This valence-sensitive value-conditioning effect on WM appears to result from modulation of WM maintenance processes. |
Xiaoguang Tian; Masatoshi Yoshida; Ziad M. Hafed A microsaccadic account of attentional capture and inhibition of return in Posner cueing Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 23, 2016. @article{Tian2016, Microsaccades exhibit systematic oscillations in direction after spatial cueing, and these oscillations correlate with facilitatory and inhibitory changes in behavioral performance in the same tasks. However, independent of cueing, facilitatory and inhibitory changes in visual sensitivity also arise pre-microsaccadically. Given such pre-microsaccadic modulation, an imperative question to ask becomes: how much of task performance in spatial cueing may be attributable to these peri-movement changes in visual sensitivity? To investigate this question, we adopted a theoretical approach. We developed a minimalist model in which: (1) microsaccades are repetitively generated using a rise-to-threshold mechanism, and (2) pre-microsaccadic target onset is associated with direction-dependent modulation of visual sensitivity, as found experimentally. We asked whether such a model alone is sufficient to account for performance dynamics in spatial cueing. Our model not only explained fine-scale microsaccade frequency and direction modulations after spatial cueing, but it also generated classic facilitatory (i.e., attentional capture) and inhibitory [i.e., inhibition of return (IOR)] effects of the cue on behavioral performance. According to the model, cues reflexively reset the oculomotor system, which unmasks oscillatory processes underlying microsaccade generation; once these oscillatory processes are unmasked, "attentional capture" and "IOR" become direct outcomes of pre-microsaccadic enhancement or suppression, respectively. Interestingly, our model predicted that facilitatory and inhibitory effects on behavior should appear as a function of target onset relative to microsaccades even without prior cues. We experimentally validated this prediction for both saccadic and manual responses. We also established a potential causal mechanism for the microsaccadic oscillatory processes hypothesized by our model. We used retinal-image stabilization to experimentally control instantaneous foveal motor error during the presentation of peripheral cues, and we found that post-cue microsaccadic oscillations were severely disrupted. This suggests that microsaccades in spatial cueing tasks reflect active oculomotor correction of foveal motor error, rather than presumed oscillatory covert attentional processes. Taken together, our results demonstrate that peri-microsaccadic changes in vision can go a long way in accounting for some classic behavioral phenomena. |
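A toy illustration of the rise-to-threshold idea mentioned in this abstract is sketched below in Python. It is not the authors' model: the drift rate, noise level, threshold, and reset rule are placeholder assumptions chosen only to show how repetitive microsaccade-like events can emerge from an accumulate-and-reset process.

```python
# Illustrative sketch only: a toy rise-to-threshold generator of microsaccade times.
# Parameters are placeholders, not values from the paper.
import numpy as np

def simulate_microsaccade_times(duration_s=5.0, dt=0.001,
                                drift=1.2, noise_sd=0.35, threshold=1.0, seed=0):
    """Accumulate a noisy signal toward a threshold; each crossing
    triggers a 'microsaccade' and resets the accumulator."""
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt)
    activity = 0.0
    times = []
    for i in range(n):
        # noisy accumulation step (Euler-style update with Gaussian noise)
        activity += drift * dt + rng.normal(0.0, noise_sd) * np.sqrt(dt)
        if activity >= threshold:
            times.append(i * dt)      # microsaccade onset time (s)
            activity = 0.0            # reset after each movement
    return np.array(times)

# With these toy parameters the accumulator crosses threshold roughly 1-2 times per second
onsets = simulate_microsaccade_times()
inter_saccade_intervals = np.diff(onsets)
```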
Shin-ichi Tokushige; Yasuo Terao; Shun-ichi Matsuda; Satomi Inomata-Terada; Takahiro Shimizu; Nobuyuki Tanaka; Masashi Hamada; Akihiro Yugeta; Ritsuko Hanajima; Harushi Mori; Shoji Tsuji; Yoshikazu Ugawa Motor neuron disease with saccadic abnormalities similar to progressive supranuclear palsy Journal Article In: Neurology and Clinical Neuroscience, vol. 4, pp. 146–152, 2016. @article{Tokushige2016, Background: In recent years, a variety of clinical types of amyotrophic lateral sclerosis have come to be recognized. As some patients present with oculomotor abnormalities both clinically and pathologically, the progressive supranuclear palsy variant of amyotrophic lateral sclerosis has been proposed. Aim: To describe atypical cases of motor neuron disease with abnormal extraocular movements mimicking progressive supranuclear palsy. Methods: We present three motor neuron disease patients with slow saccades, who were aged 57, 63 and 62 years. Neurological examinations found vertical gaze palsy in two patients. The two patients who presented extrapyramidal signs were regarded as motor neuron disease with parkinsonism, whereas the other was diagnosed with amyotrophic lateral sclerosis. Their saccades were investigated by visually-guided saccade and memory-guided saccade tasks, and were compared with those of 14 age-matched normal participants (60.3 +/- 1.9 years). Results: In all these patients, the visually-guided saccade latencies were significantly prolonged compared with normal participants, whereas the memory-guided saccade latencies were not. The velocity and amplitude of saccades of the patients were significantly reduced in visually-guided saccade and memory-guided saccade in comparison with normal participants. Conclusion: The patterns of saccadic abnormalities in the patients were similar to those of progressive supranuclear palsy patients, suggesting that some patients with motor neuron disease show saccade abnormalities similar to those of progressive supranuclear palsy patients from the clinical and physiological perspective. Motor neuron disease with slow saccades and parkinsonism, as reported here, suggest the existence of progressive supranuclear palsy-variant amyotrophic lateral sclerosis. |
Jianliang Tong; Jun Maruta; Kristin J. Heaton; Alexis L. Maule; Umesh Rajashekar; Lisa A. Spielman; Jamshid Ghajar Degradation of binocular coordination during sleep deprivation Journal Article In: Frontiers in Neurology, vol. 7, pp. 90, 2016. @article{Tong2016, To aid a clear and unified visual perception while tracking a moving target, both eyes must be coordinated, so the image of the target falls on approximately corresponding areas of the fovea of each eye. The movements of the two eyes are decoupled during sleep, suggesting a role of arousal in regulating binocular coordination. While the absence of visual input during sleep may also contribute to binocular decoupling, sleepiness is a state of reduced arousal that still allows for visual input, providing a context within which the role of arousal in binocular coordination can be studied. We examined the effects of sleep deprivation on binocular coordination using a test paradigm that we previously showed to be sensitive to sleep deprivation. We quantified binocular coordination with the SD of the distance between left and right gaze positions on the screen. We also quantified the stability of conjugate gaze on the target, i.e., gaze-target synchronization, with the SD of the distance between the binocular average gaze and the target. Sleep deprivation degraded the stability of both binocular coordination and gaze-target synchronization, but between these two forms of gaze control the horizontal and vertical components were affected differently, suggesting that disconjugate and conjugate eye movements are under different regulation of attentional arousal. The prominent association found between sleep deprivation and degradation of binocular coordination in the horizontal direction may be used for a fit-for-duty assessment. |
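The two stability measures described in this abstract are straightforward to compute once binocular gaze samples are available. The Python sketch below is a minimal illustration, not the authors' analysis code; the function name, array shapes, and use of the sample standard deviation are assumptions.

```python
# A minimal sketch of the two stability measures described above: the SD of the
# left-right gaze separation (binocular coordination) and the SD of the distance
# between the binocular average gaze and the target (gaze-target synchronization).
import numpy as np

def gaze_stability_metrics(left_xy, right_xy, target_xy):
    """left_xy, right_xy, target_xy: arrays of shape (n_samples, 2) in screen units."""
    # disconjugacy: sample-by-sample distance between the two eyes' gaze points
    interocular = np.linalg.norm(left_xy - right_xy, axis=1)
    # conjugate error: distance from the averaged (cyclopean) gaze to the target
    mean_gaze = (left_xy + right_xy) / 2.0
    gaze_target = np.linalg.norm(mean_gaze - target_xy, axis=1)
    return {
        "binocular_coordination_sd": interocular.std(ddof=1),
        "gaze_target_sync_sd": gaze_target.std(ddof=1),
    }
```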
Mark Torrance; Roger Johansson; Victoria Johansson; Åsa Wengelin Reading during the composition of multi-sentence texts: An eye-movement study Journal Article In: Psychological Research, vol. 80, no. 5, pp. 729–743, 2016. @article{Torrance2016, Writers composing multi-sentence texts have immediate access to a visual representation of what they have written. Little is known about the detail of writers' eye movements within this text during production. We describe two experiments in which competent adult writers' eye movements were tracked while performing short expository writing tasks. These are contrasted with conditions in which participants read and evaluated researcher-provided texts. Writers spent a mean of around 13% of their time looking back into their text. Initiation of these look-back sequences was strongly predicted by linguistically important boundaries in their ongoing production (e.g., writers were much more likely to look back immediately prior to starting a new sentence). 36% of look-back sequences were associated with sustained reading and the remainder with less patterned forward and backward saccades between words ("hopping"). Fixation and gaze durations and the presence of word-length effects suggested lexical processing of fixated words in both reading and hopping sequences. Word frequency effects were not present when writers read their own text. Findings demonstrate the technical possibility and potential value of examining writers' fixations within their just-written text. We suggest that these fixations do not serve solely, or even primarily, in monitoring for error, but play an important role in planning ongoing production. |
Matteo Toscani; Sunčica Zdravković; Karl R. Gegenfurtner Lightness perception for surfaces moving through different illumination levels Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–18, 2016. @article{Toscani2016, Lightness perception has mainly been studied with static scenes so far. This study presents four experiments investigating lightness perception under dynamic illumination conditions. We asked participants for lightness matches of a virtual three-dimensional target moving through a light field while their eye movements were recorded. We found that the target appeared differently, depending on the direction of motion in the light field and its precise position in the light field. Lightness was also strongly affected by the choice of fixation positions within the spatiotemporal image sequence. Overall, lightness constancy was improved when observers could freely view the object, compared to when they were forced to fixate certain regions. Our results show that dynamic scenes and nonuniform light fields are particularly challenging for our visual system. Eye movements in such scenarios are chosen to improve lightness constancy. |
Annie Tran; James E. Hoffman Visual attention is required for multiple object tracking Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 12, pp. 2103–2114, 2016. @article{Tran2016, In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or “architectural limits” in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. |
Natale Stucchi; Lisa Scocchia; Alessandro Carlini When geometry constrains vision: Systematic misperceptions within geometrical configurations Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0151488, 2016. @article{Stucchi2016, How accurate are we in reproducing a point within a simple shape? This is the empirical question we addressed in this work. Participants were presented with a tiny disk embedded in an empty circle (Experiments 1 and 3) or in a square (Experiment 2). Shortly afterwards, the disk vanished and they had to reproduce the previously seen disk position within the empty shape by means of the mouse cursor, as accurately as possible. Several loci inside each shape were tested. We found that the space delimited by a circle and by a square is not homogeneous and the observed distortion appears to be consistent across observers and specific for the two tested shapes. However, a common pattern can be identified when reproducing geometrical loci enclosed in a shape: errors are shifted toward the periphery in the region around the centre and toward the centre in the region nearby the edges. The error absolute value declines progressively as we approach an equilibrium contour line between the centre and the outline of the shape where the error is null. These results suggest that enclosing an empty space within a shape imposes an organization on it and warps its metrics: not only are the perceived loci inside a shape not the same as the geometrical loci, but they are misperceived in a systematic way that is functional to the correct identification of the centre of the shape. Eye movement recordings (Experiment 3) are consistent with this interpretation of the data. |
Z. K. Sun; J. Y. Wang; F. Luo Experimental pain induces attentional bias that is modified by enhanced motivation: An eye tracking study Journal Article In: European Journal of Pain, vol. 20, no. 8, pp. 1266–1277, 2016. @article{Sun2016, Background: In this study, the effects of prior pain experience and motivation on attentional bias towards pain-related information were investigated within two visual-probe tasks via eye movement behaviours. It is hypothesized that pain experience would induce stronger attentional bias and such bias could be suppressed by the motivation to avoid impeding pain. Methods: All participants took part in visual-probe tasks with pictures and words as stimuli that are typically used in studies of attentional bias. They were allocated to three groups: no-pain (NP) group, performing tasks without experiencing pain; pain-experience (PE) group, performing the same tasks following painful stimuli; and pain-experience-with-motivation (PEM) group, undergoing the same procedure as PE group with additional instructions about avoiding impeding pain. Eye movements were recorded during the tasks. Results: The eye movement data showed that: (1) participants in the PE group exhibited stronger attentional bias towards painful pictures than those in the NP group; (2) the attentional bias towards painful pictures was significantly reduced in the PEM group as compared to the PE group. By contrast, the verbal task failed to find these effects using sensory pain words as stimuli. Conclusion: This study was the first that revealed the impact of acute experimental pain on attentional bias towards pain-related information in healthy individuals through eye tracking. It may provide a possible solution to reduce hypervigilance towards pain-related information by altering the motivational relevance. WHAT DOES THIS STUDY ADD?: (1) This study revealed the impact of experimental pain on attentional bias in healthy individuals; (2) This study may provide a possible approach of altering motivational relevance to control the pain-induced attentional bias towards pain-related information. |
Yao-Ting Sung; Jih-Ho Cha; Jung-Yueh Tu; Ming-Da Wu; Wei-Chun Lin Investigating the processing of relative clauses in Mandarin Chinese: Evidence from eye-movement data Journal Article In: Journal of Psycholinguistic Research, vol. 45, no. 5, pp. 1089–1113, 2016. @article{Sung2016, A number of previous studies on Chinese relative clauses (RC) have reported conflicting results on processing asymmetry. This study aims to revisit the prevalent debate on whether subject-extracted RCs (SRC) or object-extracted RCs (ORC) are easier to process by using the eye-movement technique. In the current study, the data are analyzed in terms of gaze duration and eye-movement regressions in three critical areas: the head noun, the embedded verb, and the RC-modifying noun phrase as subject. The results show an ORC preference for the processing of RC structures, which supports the word-order account and the Dependency Locality Theory, and a better cross-clausal integration for SRCs, which supports the perspective-shift account. The processing asymmetry in Chinese RCs is discussed under relevant theoretical accounts, such as structure-based, memory-based, and perspective-shift accounts. We argue that the findings are associated with the syntactic nature of Chinese (a head-initial language with pre-nominal RCs). |
Yao-Ting Sung; Jung-Yueh Tu; Jih-Ho Cha; Ming-Da Wu Processing preference toward object-extracted relative clauses in Mandarin Chinese by L1 and L2 speakers: An eye-tracking study Journal Article In: Frontiers in Psychology, vol. 7, pp. 4, 2016. @article{Sung2016a, The current study employed an eye-movement technique to explore the reading patterns for the two types of Chinese relative clauses, subject-extracted relative clauses (SRCs) and object-extracted relative clauses (ORCs), by native speakers (L1) and Japanese learners (L2) of Chinese. The data were analyzed in terms of gaze duration, regression path duration, and regression rate on the two critical regions, the head noun and the embedded verb. The results indicated that both the L1 and L2 participants spent less time on the head nouns in ORCs than in SRCs. Also, the L2 participants spent less time on the embedded verbs in ORCs than in SRCs, and their regression rate for embedded verbs was generally lower in ORCs than in SRCs. The findings showed that the participants experienced less processing difficulty in ORCs than in SRCs. These results suggest an ORC preference in L1 and L2 speakers of Chinese, which provides evidence in support of the linear distance hypothesis and implies that the syntactic nature of Chinese is at play in RC processing. |
John Sustersic; Brad Wyble; Siddharth Advani; Vijaykrishnan Narayanan Towards a unified multiresolution vision model for autonomous ground robots Journal Article In: Robotics and Autonomous Systems, vol. 75, pp. 221–232, 2016. @article{Sustersic2016, While remotely operated unmanned vehicles are increasingly a part of everyday life, truly autonomous robots capable of independent operation in dynamic environments have yet to be realized, particularly in the case of ground robots required to interact with humans and their environment. We present a unified multiresolution vision model for this application designed to provide the wide field of view required to maintain situational awareness and sufficient visual acuity to recognize elements of the environment while permitting feasible implementations in real-time vision applications. The model features a kind of color-constant processing through single-opponent color channels and contrast-invariant oriented edge detection using a novel implementation of the Combination of Receptive Fields model. The model provides color- and edge-based salience assessment, as well as a compressed color image representation suitable for subsequent object identification. We show that bottom-up visual saliency computed using this model is competitive with the current state of the art while allowing computation in a compressed domain and mimicking the human visual system, with nearly half (45%) of computational effort focused within the fovea. This method reduces the storage requirement of the image pyramid to less than 5% of the full image, and computation in this domain reduces model complexity in terms of both computational costs and memory requirements accordingly. We also quantitatively evaluate the model for its application domain by pairing it with a camera/lens system with a 185° field of view capturing 3.5-megapixel color images and using a tuned salience model to predict human fixations. |
Benjamin Swets; Christopher A. Kurby Eye movements reveal the influence of event structure on reading behavior Journal Article In: Cognitive Science, vol. 40, no. 2, pp. 466–480, 2016. @article{Swets2016, When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer for event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading. |
Martin Szinte; Donatas Jonikaitis; Martin Rolfs; Patrick Cavanagh; Heiner Deubel Presaccadic motion integration between current and future retinotopic locations of attended objects Journal Article In: Journal of Neurophysiology, vol. 116, no. 4, pp. 1592–1602, 2016. @article{Szinte2016, Object tracking across eye movements is thought to rely on pre-saccadic updating of attention between the object's current and its "remapped" location (i.e., the post-saccadic retinotopic location). Here we report evidence for a bi-focal, pre-saccadic sampling between these two positions. While preparing a saccade, participants viewed four spatially separated random dot kinematograms, one of which was cued by a colored flash. They reported the direction of a coherent motion signal at the cued location while a second signal occurred simultaneously either at the cue's remapped location or at one of several control locations. Motion integration between the signals occurred only when the two motion signals were congruent and were shown at the cue and at its remapped location. This shows that the visual system integrates features between both the current and the future retinotopic locations of an attended object, and that such pre-saccadic sampling is feature-specific. |
Jérôme Tagu; Karine Doré-Mazars; Christelle Lemoine-Lardennois; Dorine Vergilino-Perez How eye dominance strength modulates the influence of a distractor on saccade accuracy Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 2, pp. 534–543, 2016. @article{Tagu2016, PURPOSE. Neuroimaging studies have shown that the dominant eye is linked preferentially to the ipsilateral primary visual cortex. However, its role in perception is still poorly understood. We examined the influence of eye dominance and eye dominance strength on saccadic parameters, contrasting stimulations presented in the two hemifields. METHODS. Participants with contrasted eye dominance (left or right) and eye dominance strength (strong or weak) were asked to make a saccade toward a target displayed at 5° or 7° to the left or right of a fixation cross. In some trials, a distractor at 3° of eccentricity was also displayed, either in the same hemifield as the target (to induce a global effect on saccade amplitude) or in the opposite hemifield (to induce a remote distractor effect on saccade latency). RESULTS. Eye dominance did influence saccade amplitude, as participants with strong eye dominance showed more accurate saccades toward the target (weaker global effect) in the hemifield contralateral to the dominant eye than in the ipsilateral one. Such asymmetry was not found in participants with weak eye dominance or when a remote distractor was used. CONCLUSIONS. We show that eye dominance strength influences saccade target selection. We discuss several arguments supporting the view that such an advantage may be linked to the relationship between the dominant eye and the ipsilateral hemisphere. |
Tobias Talanow; Anna-Maria Kasparbauer; Maria Steffens; Inga Meyhöfer; Bernd Weber; Nikolaos Smyrnis; Ulrich Ettinger Facing competition: Neural mechanisms underlying parallel programming of antisaccades and prosaccades Journal Article In: Brain and Cognition, vol. 107, pp. 37–47, 2016. @article{Talanow2016, The antisaccade task is a prominent tool to investigate the response inhibition component of cognitive control. Recent theoretical accounts explain performance in terms of parallel programming of exogenous and endogenous saccades, linked to the horse race metaphor. Previous studies have tested the hypothesis of competing saccade signals at the behavioral level by selectively slowing the programming of endogenous or exogenous processes, e.g., by manipulating the probability of antisaccades in an experimental block. To gain a better understanding of inhibitory control processes in parallel saccade programming, we analyzed task-related eye movements and blood oxygenation level dependent (BOLD) responses obtained using functional magnetic resonance imaging (fMRI) at 3T from 16 healthy participants in a mixed antisaccade and prosaccade task. The frequency of antisaccade trials was manipulated across blocks of high (75%) and low (25%) antisaccade frequency. In blocks with high antisaccade frequency, antisaccade latencies were shorter and error rates lower, whilst prosaccade latencies were longer and error rates were higher. At the level of BOLD, activations in the task-related saccade network (left inferior parietal lobe, right inferior parietal sulcus, left precentral gyrus reaching into left middle frontal gyrus and inferior frontal junction) and deactivations in components of the default mode network (bilateral temporal cortex, ventromedial prefrontal cortex) compensated for increased cognitive control demands. These findings illustrate context-dependent mechanisms underlying the coordination of competing decision signals in volitional gaze control. |
Oleg Solopchuk; Andrea Alamia; Etienne Olivier; Alexandre Zénon Chunking improves symbolic sequence processing and relies on working memory gating mechanisms Journal Article In: Learning and Memory, vol. 23, no. 3, pp. 108–112, 2016. @article{Solopchuk2016, Chunking, namely the grouping of sequence elements in clusters, is ubiquitous during sequence processing, but its impact on performance remains debated. Here, we found that participants who adopted a consistent chunking strategy during symbolic sequence learning showed a greater improvement of their performance and a larger decrease in cognitive workload over time. Stronger reliance on chunking was also associated with higher scores in a WM updating task, suggesting the contribution of WM gating mechanisms to sequence chunking. Altogether, these results indicate that chunking is a cost-saving strategy that enhances effectiveness of symbolic sequence learning. |
Stephen Soncin; Donald C. Brien; Brian C. Coe; Alina Marin; Douglas P. Munoz Contrasting emotion processing and executive functioning in attention-deficit/hyperactivity disorder and bipolar disorder Journal Article In: Behavioral Neuroscience, vol. 130, no. 5, pp. 531–543, 2016. @article{Soncin2016, Attention-deficit/hyperactivity disorder (ADHD) and bipolar disorder (BD) are highly comorbid and share executive function and emotion processing deficits, complicating diagnoses despite distinct clinical features. We compared performance on an oculomotor task that assessed these processes to capture subtle differences between ADHD and BD. The interaction between emotion processing and executive functioning may be informative because, although these processes overlap anatomically, certain regions that are compromised in each network are different in ADHD and BD. Adults, aged 18-62, with ADHD (n = 22), BD (n = 20), and healthy controls (n = 21) performed an interleaved pro- and antisaccade task (looking toward vs. looking away from a visual target, respectively). Task-irrelevant emotional faces (fear, happy, sad, neutral) were presented on a subset of trials either before or with the target. The ADHD group made more direction errors (looked in the wrong direction) than controls. Presentation of negatively valenced (fear, sad) and ambiguous (neutral) emotional faces increased saccadic reaction time in BD only compared to controls, whereas longer presentation of sad faces modestly increased group differences. The antisaccade task differentiated ADHD from controls. Emotional processing further impaired processing speed in BD. We propose that the dorsolateral prefrontal cortex is critical in both processing systems, but the inhibitory signal this region generates is impacted by dysfunction in the emotion processing network, possibly at the orbitofrontal cortex, in BD. These results suggest there are differences in how emotion processing and executive functioning interact, which could be utilized to improve diagnostic specificity. |
David Souto; Karl R. Gegenfurtner; Alexander C. Schütz Saccade adaptation and visual uncertainty Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 227, 2016. @article{Souto2016, Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. |
Eelke Spaak; Yvonne Fonken; Ole Jensen; Floris P. Lange The neural mechanisms of prediction in visual search Journal Article In: Cerebral Cortex, vol. 26, no. 11, pp. 4327–4336, 2016. @article{Spaak2016, The speed of visual search depends on bottom-up stimulus features (e.g., we quickly locate a red item among blue distractors), but it is also facilitated by the presence of top-down perceptual predictions about the item. Here, we identify the nature, source, and neuronal substrate of the predictions that speed up resumed visual search. Human subjects were presented with a visual search array that was repeated up to 4 times, while brain activity was recorded using magnetoencephalography (MEG). Behaviorally, we observed a bimodal reaction time distribution for resumed visual search, indicating that subjects were extraordinarily rapid on a proportion of trials. MEG data demonstrated that these rapid-response trials were associated with a prediction of (1) target location, as reflected by alpha-band (8-12 Hz) lateralization; and (2) target identity, as reflected by beta-band (15-30 Hz) lateralization. Moreover, we show that these predictions are likely generated in a network consisting of medial superior frontal cortex and right temporo-parietal junction. These findings underscore the importance and nature of perceptual hypotheses for efficient visual search. |
Anja Sperlich; Johannes Meixner; Jochen Laubrock Development of the perceptual span in reading: A longitudinal study Journal Article In: Journal of Experimental Child Psychology, vol. 146, pp. 181–201, 2016. @article{Sperlich2016, The perceptual span is a standard measure of parafoveal processing, which is considered highly important for efficient reading. Is the perceptual span a stable indicator of reading performance? What drives its development? Do initially slower and faster readers converge or diverge over development? Here we present the first longitudinal data on the development of the perceptual span in elementary school children. Using the moving window technique, eye movements of 127 German children in three age groups (Grades 1, 2, and 3 in Year 1) were recorded at two time points (T1 and T2) 1 year apart. Introducing a new measure of the perceptual span, nonlinear mixed-effects modeling was used to separate window size effects from asymptotic reading performance. Cross-sectional differences were well replicated longitudinally. Asymptotic reading rate increased monotonically with grade, but in a decelerating fashion. A significant change in the perceptual span was observed only between Grades 2 and 3. Together with results from a cross-lagged panel model, this suggests that the perceptual span increases as a consequence of relatively well-established word reading. Stabilities of observed and predicted reading rates were high after Grade 1, whereas the perceptual span was only moderately stable for all grades. Comparing faster and slower readers as assessed at T1, in general, a pattern of stable between-group differences emerged rather than a compensatory pattern; second and third graders even showed a Matthew effect in reading rate and the perceptual span, respectively. |
Sara Spotorno; Guillaume S. Masson; Anna Montagnini Fixational saccades during grating detection and discrimination Journal Article In: Vision Research, vol. 118, pp. 105–118, 2016. @article{Spotorno2016, We investigated the patterns of fixational saccades in human observers performing two classical perceptual tasks: grating detection and discrimination. First, participants were asked to detect a vertical or tilted grating with one of three spatial frequencies and one of four luminance contrast levels. In the second experiment, participants had to discriminate the spatial frequency of two supra-threshold gratings. The gratings were always embedded in additive, high- or low-contrast pink noise. We observed that the patterns of fixational saccades were highly idiosyncratic among participants. Moreover, during the grating detection task, the amplitude and the number of saccades were inversely correlated with stimulus visibility. We did not find a systematic relationship between saccade parameters and grating frequency, apart from a slight decrease of saccade amplitude during grating discrimination with higher spatial frequencies. No consistent changes in the number and amplitude of fixational saccades with performance accuracy were reported. Surprisingly, during grating detection, saccade number and amplitude were similar in grating-with-noise and noise-only displays. Grating orientation did not substantially affect saccade direction in either task. The results challenge the idea that, when analyzing low-level spatial properties of visual stimuli, fixational saccades can be adapted in order to extract task-relevant information optimally. Rather, saccadic patterns seem to be overall modulated by task context, stimulus visibility and individual variability. |
William W. Sprague; Emily A. Cooper; Sylvain Reissier; Baladitya Yellapragada; Martin S. Banks The natural statistics of blur Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–27, 2016. @article{Sprague2016, Blur from defocus can be both useful and detrimental for visual perception: It can be useful as a source of depth information and detrimental because it degrades image quality. We examined these aspects of blur by measuring the natural statistics of defocus blur across the visual field. Participants wore an eye-and-scene tracker that measured gaze direction, pupil diameter, and scene distances as they performed everyday tasks. We found that blur magnitude increases with increasing eccentricity. There is a vertical gradient in the distances that generate defocus blur: Blur below the fovea is generally due to scene points nearer than fixation; blur above the fovea is mostly due to points farther than fixation. There is no systematic horizontal gradient. Large blurs are generally caused by points farther rather than nearer than fixation. Consistent with the statistics, participants in a perceptual experiment perceived vertical blur gradients as slanted top-back, whereas horizontal gradients were perceived equally as left-back and right-back. The tendency for people to see sharp as near and blurred as far is also consistent with the observed statistics. We calculated how many observations would be perceived as unsharp and found that perceptible blur is rare. Finally, we found that eye shape in ground-dwelling animals conforms to that required to put likely distances in best focus. |
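As background for how defocus blur is typically quantified in this literature, a standard thin-lens approximation (given here for orientation only; the paper may use a more detailed optical model) relates blur to pupil size and the mismatch between the fixation distance and the distance of a scene point:

```latex
% Defocus (in diopters) between the fixation distance z_0 and a scene point at z_1,
% and the resulting angular blur-circle diameter for a pupil of diameter A (in meters):
\Delta D = \left|\frac{1}{z_1} - \frac{1}{z_0}\right|, \qquad \beta \approx A\,\Delta D
```

Under this approximation, blur grows with pupil diameter and with the dioptric difference between a point and the fixation plane, which is consistent with the vertical blur gradients and the sharp-near/blurred-far statistics reported above.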
Balaji Sriram; Philip M. Meier; Pamela Reinagel Temporal and spatial tuning of dorsal lateral geniculate nucleus neurons in unanesthetized rats Journal Article In: Journal of Neurophysiology, vol. 115, pp. 2658–2671, 2016. @article{Sriram2016, Visual response properties of neurons in the dorsolateral geniculate nucleus (dLGN) have been well described in several species, but not in rats. Analysis of responses from the unanesthetized rat dLGN will be needed to develop quantitative models that account for visual behavior of rats. We recorded visual responses from 130 single units in the dLGN of 7 unanesthetized rats. We report the response amplitudes, temporal frequency, and spatial frequency sensitivities in this population of cells. In response to 2-Hz visual stimulation, dLGN cells fired 15.9 ± 11.4 spikes/s (mean ± SD) modulated by 10.7 ± 8.4 spikes/s about the mean. The optimal temporal frequency for full-field stimulation ranged from 5.8 to 19.6 Hz across cells. The temporal high-frequency cutoff ranged from 11.7 to 33.6 Hz. Some cells responded best to low temporal frequency stimulation (low pass), and others were strictly bandpass; most cells fell between these extremes. At 2- to 4-Hz temporal modulation, the spatial frequency of drifting grating that drove cells best ranged from 0.008 to 0.18 cycles per degree (cpd) across cells. The high-frequency cutoff ranged from 0.01 to 1.07 cpd across cells. The majority of cells were driven best by the lowest spatial frequency tested, but many were partially or strictly bandpass. We conclude that single units in the rat dLGN can respond vigorously to temporal modulation up to at least 30 Hz and spatial detail up to 1 cpd. Tuning properties were heterogeneous, but each fell along a continuum; we found no obvious clustering into discrete cell types along these dimensions. |
Mathew Stange; Amanda Barry; Jolene Smyth; Kristen Olson Effects of smiley face scales on visual processing of satisfaction questions in web surveys Journal Article In: Social Science Computer Review, vol. 36, no. 6, pp. 756–766, 2016. @article{Stange2016, Web surveys permit researchers to use graphic or symbolic elements alongside the text of response options to help respondents process the categories. Smiley faces are one example used to communicate positive and negative domains. How respondents visually process these smiley faces, including whether they detract from the question's text, is understudied. We report the results of two eye-tracking experiments in which satisfaction questions were asked with and without smiley faces. Respondents to the questions with smiley faces spent less time reading the question stem and response option text than respondents to the questions without smiley faces, but the response distributions did not differ by version. We also find support that lower literacy respondents rely more on the smiley faces than higher literacy respondents. |
Maria Steffens; B. Becker; C. Neumann; Anna-Maria Kasparbauer; Inga Meyhöfer; Bernd Weber; Mitul A. Mehta; R. Hurlemann; Ulrich Ettinger Effects of ketamine on brain function during smooth pursuit eye movements Journal Article In: Human Brain Mapping, vol. 37, no. 11, pp. 4047–4060, 2016. @article{Steffens2016, The uncompetitive NMDA receptor antagonist ketamine has been proposed to model symptoms of psychosis. Smooth pursuit eye movements (SPEM) are an established biomarker of schizophrenia. SPEM performance has been shown to be impaired in the schizophrenia spectrum and during ketamine administration in healthy volunteers. However, the neural mechanisms mediating SPEM impairments during ketamine administration are unknown. In a counter-balanced, placebo-controlled, double-blind, within-subjects design, 27 healthy participants received intravenous racemic ketamine (100 ng/mL target plasma concentration) on one of two assessment days and placebo (intravenous saline) on the other. Participants performed a block-design SPEM task during functional magnetic resonance imaging (fMRI) at 3 Tesla field strength. Self-ratings of psychosis-like experiences were obtained using the Psychotomimetic States Inventory (PSI). Ketamine administration induced psychosis-like symptoms: during ketamine infusion, participants showed increased ratings on the PSI dimensions cognitive disorganization, delusional thinking, perceptual distortion, and mania. Ketamine led to robust deficits in SPEM performance, which were accompanied by reduced blood oxygen level dependent (BOLD) signal in the SPEM network, including primary visual cortex, area V5 and the right frontal eye field (FEF), compared to placebo. A measure of connectivity with V5 and FEF as seed regions, however, was not significantly affected by ketamine. These results are similar to the deviations found in schizophrenia patients. Our findings support the role of glutamate dysfunction in impaired smooth pursuit performance and the use of ketamine as a pharmacological model of psychosis, especially when combined with oculomotor biomarkers. |
Neil Stewart; Simon Gächter; Takao Noguchi; Timothy L. Mullett Eye movements in strategic choice Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 137–156, 2016. @article{Stewart2016, In risky and other multiattribute choices, the process of choosing is well described by random walk or drift diffusion models in which evidence is accumulated over time to threshold. In strategic choices, level-k and cognitive hierarchy models have been offered as accounts of the choice process, in which people simulate the choice processes of their opponents or partners. We recorded the eye movements in 2 × 2 symmetric games including dominance-solvable games like prisoner's dilemma and asymmetric coordination games like stag hunt and hawk–dove. The evidence was most consistent with the accumulation of payoff differences over time: we found longer duration choices with more fixations when payoff differences were more finely balanced, an emerging bias to gaze more at the payoffs for the action ultimately chosen, and that a simple count of transitions between payoffs—whether or not the comparison is strategically informative—was strongly associated with the final choice. The accumulator models do account for these strategic choice process measures, but the level-k and cognitive hierarchy models do not. |
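For readers unfamiliar with the accumulator account referenced in the entry above, the sketch below illustrates evidence accumulation to a threshold in its simplest random-walk/drift-diffusion form. All parameter values are illustrative assumptions, not estimates from the published study; the sketch only shows why finely balanced payoff differences (a small drift) produce longer decisions with more evidence samples.

```python
# Minimal sketch of an evidence-accumulation (drift-diffusion) choice process.
# Parameter values are illustrative assumptions, not fits to the study's data.
import random

def simulate_choice(drift=0.1, noise=1.0, threshold=10.0, dt=1.0, max_steps=10_000):
    """Accumulate noisy evidence for option A over option B until a
    decision threshold is crossed; return the choice and decision time."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        if evidence >= threshold:
            return "A", step * dt
        if evidence <= -threshold:
            return "B", step * dt
    return "undecided", max_steps * dt

if __name__ == "__main__":
    # A smaller drift (more finely balanced payoffs) yields slower, noisier decisions.
    choices = [simulate_choice(drift=0.05) for _ in range(1000)]
    p_a = sum(1 for choice, _ in choices if choice == "A") / len(choices)
    mean_rt = sum(rt for _, rt in choices) / len(choices)
    print(f"P(choose A) = {p_a:.2f}, mean decision time = {mean_rt:.1f}")
```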
Neil Stewart; Frouke Hermens; William J. Matthews Eye movements in risky choice Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 116–136, 2016. @article{Stewart2016a, We asked participants to make simple risky choices while we recorded their eye movements. We built a complete statistical model of the eye movements and found very little systematic variation in eye movements over the time course of a choice or across the different choices. The only exceptions were finding more (of the same) eye movements when choice options were similar, and an emerging gaze bias in which people looked more at the gamble they ultimately chose. These findings are inconsistent with prospect theory, the priority heuristic, or decision field theory. However, the eye movements made during a choice have a large relationship with the final choice, and this is mostly independent from the contribution of the actual attribute values in the choice options. That is, eye movements tell us not just about the processing of attribute values but also are independently associated with choice. The pattern is simple—people choose the gamble they look at more often, independently of the actual numbers they see—and this pattern is simpler than predicted by decision field theory, decision by sampling, and the parallel constraint satisfaction model. |
Mallory C. Stites; Kara D. Federmeier; Kiel Christianson Do morphemes matter when reading compound words with transposed letters? Evidence from eye-tracking and event-related potentials Journal Article In: Language, Cognition and Neuroscience, pp. 1–23, 2016. @article{Stites2016, The current study investigates the online processing consequences of encountering compound words with transposed letters (TLs), to determine if cross-morpheme TLs are more disruptive to reading than those within a single morpheme, as would be predicted by accounts of obligatory morpho-orthographic decomposition. Two measures of online processing, eye movements and event-related potentials (ERPs), were collected in separate experiments. Participants read sentences containing correctly spelled compound words (cupcake), or compounds with TLs occurring either across morphemes (cucpake) or within one morpheme (cupacke). Results showed that between- and within-morpheme transpositions produced equal processing costs in both measures, in the form of longer reading times (Experiment 1) and a late posterior positivity (Experiment 2) that did not differ between conditions. Findings converge to suggest that within- and between-morpheme TLs are equally disruptive to recognition, providing evidence against obligatory morpho-orthographic processing and in favour of whole-word access of English compound words during sentence reading. |
Viola S. Störmer; George A. Alvarez Attention alters perceived attractiveness Journal Article In: Psychological Science, vol. 27, no. 4, pp. 563–571, 2016. @article{Stoermer2016, Can attention alter the impression of a face? Previous studies showed that attention modulates the appearance of lower-level visual features. For instance, attention can make a simple stimulus appear to have higher contrast than it actually does. We tested whether attention can also alter the perception of a higher-order property—namely, facial attractiveness. We asked participants to judge the relative attractiveness of two faces after summoning their attention to one of the faces using a briefly presented visual cue. Across trials, participants judged the attended face to be more attractive than the same face when it was unattended. This effect was not due to decision or response biases, but rather was due to changes in perceptual processing of the faces. These results show that attention alters perceived facial attractiveness, and broadly demonstrate that attention can influence higher-level perception and may affect people's initial impressions of one another. |
Caleb E. Strait; Brianna J. Sleezer; Tommy C. Blanchard; Habiba Azab; Meghan D. Castagno; Benjamin Y. Hayden Neuronal selectivity for spatial positions of offers and choices in five reward regions Journal Article In: Journal of Neurophysiology, vol. 115, no. 3, pp. 1098–1111, 2016. @article{Strait2016, When we evaluate an option, how is the neural representation of its value linked to information that identifies it, such as its position in space? We hypothesized that value information and identity cues are not bound together at a particular point but are represented together at the single unit level throughout the entirety of the choice process. We examined neuronal responses in two-option gambling tasks with lateralized and asynchronous presentation of offers in five reward regions: orbitofrontal cortex (OFC, area 13), ventromedial prefrontal cortex (vmPFC, area 14), ventral striatum (VS), dorsal anterior cingulate cortex (dACC), and subgenual anterior cingulate cortex (sgACC, area 25). Neuronal responses in all areas are sensitive to the positions of both offers and of choices. This selectivity is strongest in reward-sensitive neurons, indicating that it is not a property of a specialized subpopulation of cells. We did not find consistent contralateral or any other organization to these responses, indicating that they may be difficult to detect with aggregate measures like neuro-imaging or studies of lesion effects. These results suggest that value coding is wed to factors that identify the object throughout the reward system and suggest a possible solution to the binding problem raised by abstract value encoding schemes. |
Gregory P. Strauss; Kathryn L. Ossenfort; Kayla M. Whearty In: PLoS ONE, vol. 11, no. 11, pp. e0162290, 2016. @article{Strauss2016, Multiple emotion regulation strategies have been identified and found to differ in their effectiveness at decreasing negative emotions. One reason for this might be that individual strategies are associated with differing levels of cognitive demand and require distinct patterns of visual attention to achieve their effects. In the current study, we tested this hypothesis in a sample of psychiatrically healthy participants (n = 25) who attempted to down-regulate negative emotion to photographs from the International Affective Picture System using cognitive reappraisal or distraction. Eye movements, pupil dilation, and subjective reports of negative emotionality were obtained for reappraisal, distraction, unpleasant passive viewing, and neutral passive viewing conditions. Behavioral results indicated that reappraisal and distraction successfully decreased self-reported negative affect relative to unpleasant passive viewing. Successful down regulation of negative affect was associated with different patterns of visual attention across regulation strategies. During reappraisal, there was an initial increase in dwell time to arousing scene regions and a subsequent shift away from these regions during later portions of the trial, whereas distraction was associated with reduced total dwell time to arousing interest areas throughout the entire stimulus presentation. Pupil dilation was greater for reappraisal than distraction or unpleasant passive viewing, suggesting that reappraisal may recruit more effortful cognitive control processes. Furthermore, greater decreases in self-reported negative emotion were associated with a lower proportion of dwell time within arousing areas of interest. These findings suggest that different emotion regulation strategies necessitate different patterns of visual attention to be effective and that individual differences in visual attention predict the extent to which individuals can successfully decrease negative emotion using reappraisal and distraction. |
Inga Meyhöfer; Katja Bertsch; Moritz Esser; Ulrich Ettinger Variance in saccadic eye movements reflects stable traits Journal Article In: Psychophysiology, vol. 53, no. 4, pp. 566–578, 2016. @article{Meyhoefer2016, Saccadic tasks are widely used to study cognitive processes, effects of pharmacological treatments, and mechanisms underlying psychiatric disorders. In genetic studies, it is assumed that saccadic endophenotypes are traits. While internal consistency and temporal stability of saccadic performance is high for most of the measures, the magnitude of underlying trait components has not been estimated, and influences of situational aspects and person by situation interactions have not been investigated. To do so, 68 healthy participants performed prosaccades, antisaccades, and memory-guided saccades on three occasions at weekly intervals at the same time of day. Latent state-trait modeling was applied to estimate the proportions of variance reflecting stable trait components, situational influences, and Person × Situation interaction effects. Mean variables for all saccadic tasks showed high to excellent reliabilities. Intraindividual standard deviations were found to be slightly less reliable. Importantly, an average of 60% of variance of a single measurement was explained by trans-situationally stable person effects, while situation aspects and interactions between person and situation were found to play a negligible role. We conclude that saccadic variables, in standard laboratory settings, represent highly reliable measures that are largely unaffected by situational influences. Extending previous reliability studies, these findings clearly demonstrate the trait-like nature of these measures and support their role as endophenotypes. |
Audrey L. Michal; David Uttal; Priti Shah; Steven L. Franconeri Visual routines for extracting magnitude relations Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1802–1809, 2016. @article{Michal2016, Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure how college students and young children (6- and 8-year-olds) extract such relations from graphs. Participants compared relational statements ("Are there more blueberries than oranges?") with simple graphs, and two systematic patterns emerged: eye movements that followed the verbal order of the question (inspecting the "blueberry" value first) versus those that followed a left-first bias (regardless of the left value's identity). Question-order patterns led to substantially faster responses and increased in prevalence with age, whereas the left-first pattern led to far slower responses and was the dominant strategy for younger children. We argue that the optimal way to verify a verbally expressed relation's consistency with a visualization is for the eyes to mimic the verbal ordering, but that this strategy requires executive control and coordination with language. |
Thomas Miconi; Laura Groomes; Gabriel Kreiman There's Waldo! A normalization model of visual search predicts single-trial human fixations in an object search task Journal Article In: Cerebral Cortex, vol. 26, no. 7, pp. 3064–3082, 2016. @article{Miconi2016, When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. |
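The priority-map computation summarized in this abstract can be illustrated schematically. The sketch below is not the published model; it is a minimal illustration, under assumed array shapes and constants, of the two ingredients the abstract names: target-specific modulation of feature responses and divisive normalization by locally pooled activity, followed by selection of the map maximum as the next locus of attention.

```python
# Hedged sketch of a target-modulated, divisively normalized priority map.
# Shapes, pooling size, and constants are illustrative assumptions only.
import numpy as np

def priority_map(feature_maps, target_weights, pool_size=5, sigma=1e-3):
    """feature_maps: (n_features, H, W) bottom-up responses.
    target_weights: (n_features,) top-down gains for the searched target."""
    # Top-down modulation: weight each feature channel by its match to the target.
    modulated = target_weights[:, None, None] * feature_maps
    drive = modulated.sum(axis=0)                      # (H, W)

    # Divisive normalization by locally pooled bottom-up activity.
    pooled = np.zeros_like(drive)
    h, w = drive.shape
    r = pool_size // 2
    for y in range(h):
        for x in range(w):
            pooled[y, x] = feature_maps[:, max(0, y - r):y + r + 1,
                                           max(0, x - r):x + r + 1].sum()
    return drive / (sigma + pooled)

def next_fixation(priority):
    """Select the location with maximal priority as the locus of attention."""
    return np.unravel_index(np.argmax(priority), priority.shape)

rng = np.random.default_rng(0)
maps = rng.random((8, 32, 32))           # 8 hypothetical feature channels
weights = rng.random(8)                  # hypothetical target template
print(next_fixation(priority_map(maps, weights)))
```

The normalization step plays the role described in the abstract: without the division by pooled activity, a location with uniformly strong bottom-up responses would dominate the map regardless of its match to the target.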
Evelyn Milburn; Tessa Warren; Michael Walsh Dickey World knowledge affects prediction as quickly as selectional restrictions: Evidence from the visual world paradigm Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 4, pp. 536–548, 2016. @article{Milburn2016, There has been considerable debate regarding the question of whether linguistic knowledge and world knowledge are separable and used differently during processing or not (Hagoort et al., 2004; Matsuki et al., 2011; Paczynski et al., 2012; Warren et al., 2007). Previous investigations into this question have provided mixed evidence as to whether violations of selectional restrictions are detected earlier than violations of world knowledge. We report a visual world eye-tracking study comparing the timing of facilitation contributed by selectional restrictions vs. world knowledge. College-aged adults (n = 36) viewed photographs of natural scenes while listening to sentences. Participants anticipated upcoming direct objects similarly regardless of whether facilitation was provided by only world knowledge or a combination of selectional restrictions and world knowledge. These results suggest that selectional restrictions are not available earlier in comprehension than world knowledge. |
Ravi D. Mill; Akira R. O'Connor; Ian G. Dobbins Pupil dilation during recognition memory: Isolating unexpected recognition from judgment uncertainty Journal Article In: Cognition, vol. 154, pp. 81–94, 2016. @article{Mill2016, Optimally discriminating familiar from novel stimuli demands a decision-making process informed by prior expectations. Here we demonstrate that pupillary dilation (PD) responses during recognition memory decisions are modulated by expectations, and more specifically, that pupil dilation increases for unexpected compared to expected recognition. Furthermore, multi-level modeling demonstrated that the time course of the dilation during each individual trial contains separable early and late dilation components, with the early amplitude capturing unexpected recognition, and the later trailing slope reflecting general judgment uncertainty or effort. This is the first demonstration that the early dilation response during recognition is dependent upon observer expectations and that separate recognition expectation and judgment uncertainty components are present in the dilation time course of every trial. The findings provide novel insights into adaptive memory-linked orienting mechanisms as well as the general cognitive underpinnings of the pupillary index of autonomic nervous system activity. |
Mark Mills; Olivia Wieda; Scott F. Stoltenberg; Michael D. Dodd Emotion moderates the association between HTR2A (rs6313) genotype and antisaccade latency Journal Article In: Experimental Brain Research, vol. 234, no. 9, pp. 2653–2665, 2016. @article{Mills2016, The serotonin system is heavily involved in cognitive and emotional control processes. Previous work has typically investigated this system's role in control processes separately for cognitive and emotional domains, yet it has become clear the two are linked. The present study, therefore, examined whether variation in a serotonin receptor gene (HTR2A, rs6313) moderated effects of emotion on inhibitory control. An emotional antisaccade task was used in which participants looked toward (prosaccade) or away (antisaccade) from a target presented to the left or right of a happy, angry, or neutral face. Overall, antisaccade latencies were slower for rs6313 C allele homozygotes than T allele carriers, with no effect of genotype on prosaccade latencies. Thus, C allele homozygotes showed relatively weak inhibitory control but intact reflexive control. Importantly, the emotional stimulus was either present during target presentation (overlap trials) or absent (gap trials). The gap effect (slowed latency in overlap versus gap trials) in antisaccade trials was larger with angry versus neutral faces in C allele homozygotes. This impairing effect of negative valence on inhibitory control was larger in C allele homozygotes than T allele carriers, suggesting that angry faces disrupted/competed with the control processes needed to generate an antisaccade to a greater degree in these individuals. The genotype difference in the negative valence effect on antisaccade latency was attenuated when trial N-1 was an antisaccade, indicating top-down regulation of emotional influence. This effect was reduced in C/C versus T/_ individuals, suggesting a weaker capacity to downregulate emotional processing of task-irrelevant stimuli. |
Wendy Ming; Dimitrios J. Palidis; Miriam Spering; Martin J. McKeown Visual contrast sensitivity in early-stage Parkinson's disease Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 13, pp. 5696–5704, 2016. @article{Ming2016, Purpose: Visual impairments are frequent in Parkinson's disease (PD) and impact normal functioning in daily activities. Visual contrast sensitivity is a powerful nonmotor sign for discriminating PD patients from controls. However, it is usually assessed with static visual stimuli. Here we examined the interaction between perception and eye movements in static and dynamic contrast sensitivity tasks in a cohort of mildly impaired, early-stage PD patients. Methods: Patients (n = 13) and healthy age-matched controls (n = 12) viewed stimuli of various spatial frequencies (0-8 cyc/deg) and speeds (0°/s, 10°/s, 30°/s) on a computer monitor. Detection thresholds were determined by asking participants to adjust luminance contrast until they could just barely see the stimulus. Eye position was recorded with a video-based eye tracker. Results: Patients' static contrast sensitivity was impaired in the intermediate spatial-frequency range, and this impairment correlated with fixational instability. However, dynamic contrast sensitivity and patients' smooth pursuit were relatively normal. An independent component analysis revealed contrast sensitivity profiles differentiating patients and controls. Conclusions: Our study simultaneously assesses perceptual contrast sensitivity and eye movements in PD, revealing a possible link between fixational instability and perceptual deficits. Spatiotemporal contrast sensitivity profiles may represent an easily measurable metric as a component of a broader combined biometric for nonmotor features observed in PD. |
Meghan B. Mitchell; Steven D. Shirk; Donald G. McLaren; Jessica S. Dodd; Ali Ezzati; Brandon A. Ally; Alireza Atri Recognition of faces and names: Multimodal physiological correlates of memory and executive function Journal Article In: Brain Imaging and Behavior, vol. 10, no. 2, pp. 408–423, 2016. @article{Mitchell2016, We sought to characterize electrophysiological, eye-tracking and behavioral correlates of face-name recognition memory in healthy younger adults using high-density electroencephalography (EEG), infrared eye-tracking (ET), and neuropsychological measures. Twenty-one participants first studied 40 face-name (FN) pairs; 20 were presented four times (4R) and 20 were shown once (1R). Recognition memory was assessed by asking participants to make old/new judgments for 80 FN pairs, of which half were previously studied items and half were novel FN pairs (N). Simultaneous EEG and ET recording were collected during recognition trials. Comparisons of event-related potentials (ERPs) for correctly identified FN pairs were compared across the three item types revealing classic ERP old/new effects including 1) relative positivity (1R > N) bi-frontally from 300 to 500 ms, reflecting enhanced familiarity, 2) relative positivity (4R > 1R and 4R > N) in parietal areas from 500 to 800 ms, reflecting enhanced recollection, and 3) late frontal effects (1R > N) from 1000 to 1800 ms in right frontal areas, reflecting post-retrieval monitoring. ET analysis also revealed significant differences in eye movements across conditions. Exploration of cross-modality relationships suggested associations between memory and executive function measures and the three ERP effects. Executive function measures were associated with several indicators of saccadic eye movements and fixations, which were also associated with all three ERP effects. This novel characterization of face-name recognition memory performance using simultaneous EEG and ET reproduced classic ERP and ET effects, supports the construct validity of the multimodal FN paradigm, and holds promise as an integrative tool to probe brain networks supporting memory and executive functioning. |
Aleksandra Mitrovic; Pablo P. L. Tinio; Helmut Leder In: Frontiers in Human Neuroscience, vol. 10, pp. 122, 2016. @article{Mitrovic2016, One of the key behavioral effects of attractiveness is increased visual attention to attractive people. This effect is often explained in terms of evolutionary adaptations, such as attractiveness being an indicator of good health. Other factors could influence this effect. In the present study, we explored the modulating role of sexual orientation on the effects of attractiveness on exploratory visual behavior. Heterosexual and homosexual men and women viewed natural-looking scenes that depicted either two women or two men who varied systematically in levels of attractiveness (based on a pre-study). Participants' eye movements and attractiveness ratings toward the faces of the depicted people were recorded. The results showed that although attractiveness had the largest influence on participants' behaviors, participants' sexual orientations strongly modulated the effects. With the exception of homosexual women, all participant groups looked longer and more often at attractive faces that corresponded with their sexual orientations. Interestingly, heterosexual and homosexual men and homosexual women looked longer and more often at the less attractive face of their non-preferred sex than the less attractive face of their preferred sex, evidence that less attractive faces of the preferred sex might have an aversive character. These findings provide evidence for the important role that sexual orientation plays in guiding visual exploratory behavior and evaluations of the attractiveness of others. |
Jeff Moher; Joo-Hyun Song Target selection biases from recent experience transfer across effectors Journal Article In: Attention, Perception, & Psychophysics, vol. 78, no. 2, pp. 415–426, 2016. @article{Moher2016, Target selection is often biased by an observer's recent experiences. However, not much is known about whether these selection biases influence behavior across different effectors. For example, does looking at a red object make it easier to subsequently reach towards another red object? In the current study, we asked observers to find the uniquely colored target object on each trial. Randomly intermixed pre-trial cues indicated the mode of action: either an eye movement or a visually guided reach movement to the target. In Experiment 1, we found that priming of popout, reflected in faster responses following repetition of the target color on consecutive trials, occurred regardless of whether the effector was repeated from the previous trial or not. In Experiment 2, we examined whether an inhibitory selection bias away from a feature could transfer across effectors. While priming of popout reflects both enhancement of the repeated target features and suppression of the repeated distractor features, the distractor previewing effect isolates a purely inhibitory component of target selection in which a previewed color is presented in a homogenous display and subsequently inhibited. Much like priming of popout, intertrial suppression biases in the distractor previewing effect transferred across effectors. Together, these results suggest that biases for target selection driven by recent trial history transfer across effectors. This indicates that representations in memory that bias attention towards or away from specific features are largely independent from their associated actions. |
Robert M. Mok; Nicholas E. Myers; George Wallis; Anna C. Nobre Behavioral and neural markers of flexible attention over working memory in aging Journal Article In: Cerebral Cortex, vol. 26, no. 4, pp. 1831–1842, 2016. @article{Mok2016, Working memory (WM) declines as we age and, because of its fundamental role in higher order cognition, this can have highly deleterious effects in daily life. We investigated whether older individuals benefit from flexible orienting of attention within WM to mitigate cognitive decline. We measured magnetoencephalography (MEG) in older adults performing a WM precision task with cues during the maintenance period that retroactively predicted the location of the relevant items for performance (retro-cues). WM performance of older adults significantly benefitted from retro-cues. Whereas WM maintenance declined with age, retro-cues conferred strong attentional benefits. A model-based analysis revealed an increase in the probability of recalling the target, a lowered probability of retrieving incorrect items or guessing, and an improvement in memory precision. MEG recordings showed that retro-cues induced a transient lateralization of alpha (8-14 Hz) and beta (15-30 Hz) oscillatory power. Interestingly, shorter durations of alpha/beta lateralization following retro-cues predicted larger cueing benefits, reinforcing recent ideas about the dynamic nature of access to WM representations. Our results suggest that older adults retain flexible control over WM, but individual differences in control correspond to differences in neural dynamics, possibly reflecting the degree of preservation of control in healthy aging. |
Charlotte B. Montgomery; Carrie Allison; Meng Chuan Lai; Sarah Cassidy; Peter E. Langdon; Simon Baron-Cohen Do adults with high functioning Autism or Asperger syndrome differ in empathy and emotion recognition? Journal Article In: Journal of Autism and Developmental Disorders, vol. 46, no. 6, pp. 1931–1940, 2016. @article{Montgomery2016, The present study examined whether adults with high functioning autism (HFA) showed greater difficulties in (1) their self-reported ability to empathise with others and/or (2) their ability to read mental states in others' eyes than adults with Asperger syndrome (AS). The Empathy Quotient (EQ) and ‘Reading the Mind in the Eyes' Test (Eyes Test) were compared in 43 adults with AS and 43 adults with HFA. No significant difference was observed on EQ score between groups, while adults with AS performed significantly better on the Eyes Test than those with HFA. This suggests that adults with HFA may need more support, particularly in mentalizing and complex emotion recognition, and raises questions about the existence of subgroups within autism spectrum conditions. |
Luis Morales; Daniela Paolieri; Paola E. Dussias; Jorge R. Valdés Kroff; Chip Gerfen; María Teresa Bajo The gender congruency effect during bilingual spoken-word recognition Journal Article In: Bilingualism: Language and Cognition, vol. 19, no. 2, pp. 294–310, 2016. @article{Morales2016, We investigate the 'gender-congruency' effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian-Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / 'find the scarf') and clicked on the object named in the instruction. Grammatical gender of the objects' name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. |
Michael Morgan; Kai Schreiber; J. A. Solomon Low-level mediation of directionally specific motion aftereffects: Motion perception is not necessary Journal Article In: Attention, Perception, & Psychophysics, vol. 78, no. 8, pp. 2621–2632, 2016. @article{Morgan2016, Previous psychophysical experiments with normal human observers have shown that adaptation to a moving dot stream causes directionally specific repulsion in the perceived angle of a subsequently viewed moving probe. In this study, we used a two-alternative forced choice task with roving pedestals to determine the conditions that are necessary and sufficient for producing directionally specific repulsion with compound adaptors, each of which contains two oppositely moving, differently colored component streams. Experiment 1 provided a demonstration of repulsion between single-component adaptors and probes moving at approximately 90° or 270°. In Experiment 2, oppositely moving dots in the adaptor were paired to preclude the appearance of motion. Nonetheless, repulsion remained strong when the angle between each probe stream and one component was approximately 30°. In Experiment 3, adapting dot pairs were kept stationary during their limited lifetimes. Their orientation content alone proved insufficient for producing repulsion. In Experiments 4–6, the angle between the probe and both adapting components was approximately 90° or 270°. Directional repulsion was found when observers were asked to visually track one of the adapting components (Exp. 6), but not when they were asked to attentionally track it (Exp. 5), nor while they passively viewed the adaptor (Exp. 4). Our results are consistent with a low-level mechanism for motion adaptation. This mechanism is not selective for stimulus color and is not susceptible to attentional modulation. The most likely cortical locus of adaptation is area V1. |
Sebastiaan Mathôt; Jean-Baptiste Melmi; Lotje Linden; Stefan Van Der Stigchel The mind-writing pupil: A human-computer interface based on decoding of covert attention through pupillometry Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0148805, 2016. @article{Mathot2016, We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright, compared to a dark, stimulus. In our method, participants covertly attend to one of several letters with oscillating brightness. Pupil size reflects the brightness of the selected letter, which allows us, with high accuracy and in real time, to determine which letter the participant intends to select. The performance of our method is comparable to the best covert-attention brain-computer interfaces to date, and has several advantages: no movement other than pupil-size change is required; no physical contact is required (i.e. no electrodes); it is easy to use; and it is reliable. Potential applications include: communication with totally locked-in patients, training of sustained attention, and ultra-secure password input. |
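The decoding idea described in this abstract can be illustrated with a short sketch: since covert attention to a bright letter constricts the pupil, the attended letter should be the one whose brightness signal best (negatively) predicts pupil size. The code below is a minimal illustration of that logic, not the authors' implementation; the response lag and simulated signals are assumptions.

```python
# Minimal sketch (not the authors' code): decode which of several flickering
# letters is covertly attended by correlating pupil size with each letter's
# brightness signal. The pupil constricts when the attended letter is bright,
# so the attended letter should show the strongest negative correlation.
import numpy as np

def decode_attended_letter(pupil_trace, brightness_signals, lag_samples=250):
    """pupil_trace: 1-D array of pupil size over time.
    brightness_signals: dict mapping letter -> 1-D brightness array (same length).
    lag_samples: assumed pupillary response latency, in samples (illustrative value).
    """
    scores = {}
    for letter, brightness in brightness_signals.items():
        # Shift brightness forward in time to account for the sluggish pupil response.
        b = brightness[:-lag_samples]
        p = pupil_trace[lag_samples:]
        r = np.corrcoef(b, p)[0, 1]
        scores[letter] = -r          # constriction to brightness -> negative r
    return max(scores, key=scores.get), scores

# Toy usage with simulated data
rng = np.random.default_rng(0)
t = np.arange(10_000)
signals = {"A": 0.5 + 0.5 * np.sin(2 * np.pi * t / 1000),
           "B": 0.5 + 0.5 * np.sin(2 * np.pi * t / 1400)}
pupil = -np.roll(signals["A"], 250) + 0.5 * rng.standard_normal(t.size)
print(decode_attended_letter(pupil, signals)[0])  # expected: "A"
```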
Nadine Matton; Pierre-Vincent Paubel; Julien Cegarra; Éric Raufaste Differences in multitask resource reallocation after change in task values Journal Article In: Human Factors, vol. 58, no. 8, pp. 1128–1142, 2016. @article{Matton2016, Objective: The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values. Background: When solving a resource conflict in multitasking, Salvucci and Taatgen predict a globally rational strategy will be followed that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, leading to detrimental consequences on global performance. Moreover, the question remains open whether expertise would have an impact on the choice of the strategy. Method: We adopted a multitask environment used for pilot selection with a change in emphasis on two out of four subtasks while all subtasks had to be maintained over a minimum performance. A laboratory eye-tracking study contrasted 20 recently selected pilot students considered as experienced with this task and 15 university students considered as novices. Results: When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks falling below minimum performance. On the contrary, experienced people delayed the processing of one low-value subtask but managed to optimize global performance. Conclusion: In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy. Application: During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment. |
Maria Matziridi; Eli Brenner; Jeroen B. J. Smeets Moving your head reduces perisaccadic compression Journal Article In: Journal of Vision, vol. 16, no. 13, pp. 1–8, 2016. @article{Matziridi2016, Flashes presented around the time of a saccade appear to be closer to the saccade endpoint than they really are. The resulting compression of perceived positions has been found to increase with the amplitude of the saccade. In most studies on perisaccadic compression the head is static, so the eye-in-head movement is equal to the change in gaze. What if moving the head causes part of the change in gaze? Does decreasing the eye-in-head rotation by moving the head decrease the compression of perceived positions? To find out, we asked participants to shift their gaze between two positions, either without moving their head or with the head contributing to the change in gaze. Around the time of the saccades we flashed bars that participants had to localize. When the head contributed to the change in gaze, the duration of the saccade was shorter and compression was reduced. We interpret this reduction in compression as being caused by a reduction in uncertainty about gaze position at the time of the flash. We conclude that moving one's head can reduce the systematic mislocalization of flashes presented around the time of saccades. |
Daniel R. McCloy; Eric D. Larson; Bonnie K. Lau; Adrian K. C. Lee Temporal alignment of pupillary response with stimulus events via deconvolution Journal Article In: The Journal of the Acoustical Society of America, vol. 139, no. 3, pp. EL57–EL62, 2016. @article{McCloy2016, Analysis of pupil dilation has been used as an index of attentional effort in the auditory domain. Previous work has modeled the pupillary response to attentional effort as a linear time-invariant system with a characteristic impulse response, and used deconvolution to estimate the attentional effort that gives rise to changes in pupil size. Here it is argued that one parameter of the impulse response (the latency of response maximum, t_max) has been mis-estimated in the literature; a different estimate is presented, and it is shown how deconvolution with this value of t_max yields more intuitively plausible and informative results. |
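For orientation, the sketch below shows how deconvolution of a pupil trace against a parametric impulse response can be set up. It assumes the Hoeks and Levelt style kernel commonly used in this literature; the kernel parameters, sampling rate, and ridge penalty are illustrative choices, not the estimates argued for in the paper.

```python
# Sketch of pupil-response deconvolution, assuming an impulse response of the
# Hoeks & Levelt form h(t) = (t / t_max)**n * exp(n * (1 - t / t_max)).
# The t_max, n, and ridge values below are illustrative, not the paper's estimates.
import numpy as np

def pupil_kernel(fs, t_max=0.93, n=10.1, duration=4.0):
    t = np.arange(0, duration, 1.0 / fs)
    h = (t / t_max) ** n * np.exp(n * (1 - t / t_max))
    return h / h.max()

def deconvolve(pupil, fs, ridge=1e-3):
    """Recover the latent 'effort' input u such that pupil ~= conv(u, h)."""
    h = pupil_kernel(fs)
    m = len(pupil)
    # Build the convolution (Toeplitz-like) design matrix column by column.
    H = np.zeros((m, m))
    for j in range(m):
        seg = h[: m - j]
        H[j : j + len(seg), j] = seg
    # Ridge-regularized least squares keeps the inversion stable.
    return np.linalg.solve(H.T @ H + ridge * np.eye(m), H.T @ pupil)

fs = 50
u_true = np.zeros(300); u_true[50] = 1.0              # single "effort" event at 1 s
pupil = np.convolve(u_true, pupil_kernel(fs))[:300]
u_hat = deconvolve(pupil, fs)
print(int(np.argmax(u_hat)))                          # should be near sample 50
```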
Brónagh McCoy; Jan Theeuwes Effects of reward on oculomotor control Journal Article In: Journal of Neurophysiology, vol. 116, no. 5, pp. 2453–2466, 2016. @article{McCoy2016, The present study examines the extent to which distractors that signal the availability of monetary reward on a given trial affect eye movements. We used a novel eye movement task in which observers had to follow a target around the screen while ignoring distractors presented at varying locations. We examined the effects of reward magnitude and distractor location on a host of oculomotor properties, including saccade latency, amplitude, landing position, curvature, and erroneous saccades toward the distractor. We found consistent effects of reward magnitude on classic oculomotor phenomena such as the remote distractor effect, the global effect, and oculomotor capture by the distractor. We also show that a distractor in the visual hemifield opposite to the target had a larger effect on oculomotor control than an equidistant distractor in the same hemifield as the target. Bayesian hierarchical drift diffusion modeling revealed large differences in drift rate depending on the reward value, location, and visual hemifield of the distractor stimulus. Our findings suggest that high reward distractors not only capture the eyes but also affect a multitude of oculomotor properties associated with oculomotor inhibition and control. |
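As a rough illustration of how a hierarchical drift diffusion model with condition-dependent drift rate is typically specified in Python, a sketch using the HDDM package follows. The file name, column names, and sampler settings are placeholders, not the authors' setup.

```python
# Illustrative sketch only: fitting a hierarchical drift diffusion model in
# which drift rate (v) depends on distractor reward and location, roughly in
# the spirit of the analysis described above. Column names and sampler
# settings are assumptions, not the authors' configuration.
import hddm
import pandas as pd

# Expected long-format data: one row per trial with columns
# 'subj_idx', 'rt' (seconds), 'response' (0/1), 'reward', 'location'.
data = pd.read_csv("saccade_trials.csv")   # hypothetical file name

model = hddm.HDDM(data, depends_on={"v": ["reward", "location"]})
model.sample(5000, burn=1000)              # MCMC sampling
print(model.gen_stats())                   # posterior summaries per condition
```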
Vincent B. McGinty; Antonio Rangel; William T. Newsome Orbitofrontal cortex value signals depend on fixation location during free viewing Journal Article In: Neuron, vol. 90, no. 6, pp. 1299–1311, 2016. @article{McGinty2016, In the natural world, monkeys and humans judge the economic value of numerous competing stimuli by moving their gaze from one object to another, in a rapid series of eye movements. This suggests that the primate brain processes value serially, and that value-coding neurons may be modulated by changes in gaze. To test this hypothesis, we presented monkeys with value-associated visual cues and took the unusual step of allowing unrestricted free viewing while we recorded neurons in the orbitofrontal cortex (OFC). By leveraging natural gaze patterns, we found that a large proportion of OFC cells encode gaze location and that, in some cells, value coding is amplified when subjects fixate near the cue. These findings provide the first cellular-level mechanism for previously documented behavioral effects of gaze on valuation and suggest a major role for gaze in neural mechanisms of valuation and decision-making under ecologically realistic conditions. |
Mel McKendrick; Stephen H. Butler; Madeleine A. Grealy The effect of self-referential expectation on emotional face processing Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0155576, 2016. @article{McKendrick2016, The role of self-relevance has been somewhat neglected in static face processing paradigms but may be important in understanding how emotional faces impact on attention, cognition and affect. The aim of the current study was to investigate the effect of self-relevant primes on processing emotional composite faces. Sentence primes created an expectation of the emotion of the face before sad, happy, neutral or composite face photos were viewed. Eye movements were recorded and subsequent responses measured the cognitive and affective impact of the emotion expressed. Results indicated that primes did not guide attention, but impacted on judgments of valence intensity and self-esteem ratings. Negative self-relevant primes led to the most negative self-esteem ratings, although the effect of the prime was qualified by salient facial features. Self-relevant expectations about the emotion of a face and subsequent attention to a face that is congruent with these expectations strengthened the affective impact of viewing the face. |
Catherine M. McMahon; Isabelle Boisvert; Peter Lissa; Louise Granger; Ronny Ibrahim; Chi Yhun Lo; Kelly Miles; Petra L. Graham Monitoring alpha oscillations and pupil dilation across a performance-intensity function Journal Article In: Frontiers in Psychology, vol. 7, pp. 745, 2016. @article{McMahon2016, Listening to degraded speech can be challenging and requires a continuous investment of cognitive resources, which is more challenging for those with hearing loss. However, while alpha power (8-12 Hz) and pupil dilation have been suggested as objective correlates of listening effort, it is not clear whether they assess the same cognitive processes involved, or other sensory and/or neurophysiological mechanisms that are associated with the task. Therefore, the aim of this study is to compare alpha power and pupil dilation during a sentence recognition task in 15 randomized levels of noise (-7dB to +7dB SNR) using highly intelligible (16 channel vocoded) and moderately intelligible (6 channel vocoded) speech. Twenty young normal hearing adults participated in the study; however, due to extraneous noise, data from 16 (10 females, 6 males; aged 19-28 years) was used in the EEG analysis and 10 in the pupil analysis. Behavioral testing of perceived effort and speech performance was assessed at 3 fixed SNRs per participant and was comparable to sentence recognition performance assessed in the physiological test session for both 16- and 6-channel vocoded sentences. Results showed a significant interaction between channel vocoding for both the alpha power and the pupil size changes. While both measures significantly decreased with more positive SNRs for the 16-channel vocoding, this was not observed with the 6-channel vocoding. The results of this study suggest that these measures may encode different processes involved in speech perception, which show similar trends for highly intelligible speech, but diverge for more spectrally degraded speech. The results to date suggest that these objective correlates of listening effort, and the cognitive processes involved in listening effort, are not yet sufficiently well understood to be used within a clinical setting. |
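As a generic point of reference for the alpha-power measure mentioned above, the snippet below estimates 8-12 Hz band power for a single EEG channel with Welch's method. It is not the authors' pipeline; the sampling rate and toy signal are assumptions.

```python
# Generic illustration (not the authors' pipeline): estimate alpha-band
# (8-12 Hz) power for one EEG epoch via Welch's method, as is commonly done
# when relating alpha power to listening effort.
import numpy as np
from scipy.signal import welch

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """epoch: 1-D EEG time series (single channel); fs: sampling rate in Hz."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])   # integrated band power

rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)  # 10 Hz + noise
print(alpha_power(eeg, fs))
```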
Andrey R. Nikolaev; Radha Nila Meghanathan; Cees Leeuwen Combining EEG and eye movement recording in free viewing: Pitfalls and possibilities Journal Article In: Brain and Cognition, vol. 107, pp. 55–83, 2016. @article{Nikolaev2016, Co-registration of EEG and eye movement has promise for investigating perceptual processes in free viewing conditions, provided certain methodological challenges can be addressed. Most of these arise from the self-paced character of eye movements in free viewing conditions. Successive eye movements occur within short time intervals. Their evoked activity is likely to distort the EEG signal during fixation. Due to the non-uniform distribution of fixation durations, these distortions are systematic, survive across-trials averaging, and can become a source of confounding. We illustrate this problem with effects of sequential eye movements on the evoked potentials and time-frequency components of EEG and propose a solution based on matching of eye movement characteristics between experimental conditions. The proposal leads to a discussion of which eye movement characteristics are to be matched, depending on the EEG activity of interest. We also compare segmentation of EEG into saccade-related epochs relative to saccade and fixation onsets and discuss the problem of baseline selection and its solution. Further recommendations are given for implementing EEG-eye movement co-registration in free viewing conditions. By resolving some of the methodological problems involved, we aim to facilitate the transition from the traditional stimulus-response paradigm to the study of visual perception in more naturalistic conditions. |
Jessie S. Nixon; Jacolien Rij; Peggy Mok; R. Harald Baayen; Yiya Chen The temporal dynamics of perceptual uncertainty: Eye movement evidence from Cantonese segment and tone perception Journal Article In: Journal of Memory and Language, vol. 90, pp. 103–125, 2016. @article{Nixon2016, Two visual world eyetracking experiments investigated how acoustic cue value and statistical variance affect perceptual uncertainty during Cantonese consonant (Experiment 1) and tone perception (Experiment 2). Participants heard low- or high-variance acoustic stimuli. Euclidean distance of fixations from target and competitor pictures over time was analysed using Generalised Additive Mixed Modelling. Distance of fixations from target and competitor pictures varied as a function of acoustic cue, providing evidence for gradient, nonlinear sensitivity to cue values. Moreover, cue value effects significantly interacted with statistical variance, indicating that the cue distribution directly affects perceptual uncertainty. Interestingly, the time course of effects differed between target distance and competitor distance models. The pattern of effects over time suggests a global strategy in response to the level of uncertainty: as uncertainty increases, verification looks increase accordingly. Low variance generally creates less uncertainty, but can lead to greater uncertainty in the face of unexpected speech tokens. |
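The dependent measure used here, Euclidean distance of gaze from the target and competitor pictures over time, can be computed with a few lines of code before any GAMM is fitted. The sketch below is a generic illustration; the binning scheme, column names, and ROI coordinates are assumptions.

```python
# Minimal sketch of the dependent measure described above: mean Euclidean
# distance of gaze from each picture's centre in successive time bins.
# The statistical modelling (GAMMs) would then be run on the resulting table.
import numpy as np
import pandas as pd

def distance_by_bin(gaze_x, gaze_y, times, roi_xy, bin_ms=50):
    """gaze_x/gaze_y/times: per-sample gaze coordinates and timestamps (ms).
    roi_xy: dict mapping ROI name -> (x, y) centre in pixels."""
    df = pd.DataFrame({"x": gaze_x, "y": gaze_y,
                       "bin": (np.asarray(times) // bin_ms) * bin_ms})
    cols = []
    for name, (cx, cy) in roi_xy.items():
        d = np.hypot(df["x"] - cx, df["y"] - cy)
        cols.append(df.assign(dist=d).groupby("bin")["dist"].mean().rename(name))
    return pd.concat(cols, axis=1)           # one column per ROI, one row per bin

# Toy usage
t = np.arange(0, 1000, 2)                    # 500 Hz samples over 1 s
out = distance_by_bin(np.full(t.size, 300.0), np.full(t.size, 400.0), t,
                      {"target": (320, 420), "competitor": (900, 400)})
print(out.head())
```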
Anna Nowakowska; Alasdair D. F. Clarke; Arash Sahraie; Amelia R. Hunt Inefficient search strategies in simulated hemianopia Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 11, pp. 1858–1872, 2016. @article{Nowakowska2016, We investigated whether healthy participants can spontaneously adopt effective eye movement strategies to compensate for information loss similar to that experienced by patients with damage to visual cortex (hemianopia). Visual information in 1 hemifield was removed or degraded while participants searched for an emotional face among neutral faces or a line tilted 45° to the right among lines of varying degree of tilt. A bias to direct saccades toward the sighted field was observed across all 4 experiments. The proportion of saccades directed toward the "blind" field increased with the amount of information available in that field, suggesting fixations are driven toward salient visual stimuli rather than toward locations that maximize information gain. In Experiments 1 and 2, the sighted-field bias had a minimal impact on search efficiency, because the target was difficult to find. However, the sighted-field bias persisted even when the target was visually distinct from the distractors and could easily be detected in the periphery (Experiments 3 and 4). This surprisingly inefficient search behavior suggests that eye movements are biased to salient visual stimuli even when it comes at a clear cost to search efficiency, and efficient strategies to compensate for visual deficits are not spontaneously adopted by healthy participants. |
Nazbanou Nozari; Daniel Mirman; Sharon L. Thompson-Schill The ventrolateral prefrontal cortex facilitates processing of sentential context to locate referents Journal Article In: Brain and Language, vol. 157-158, pp. 1–13, 2016. @article{Nozari2016, Left ventrolateral prefrontal cortex (VLPFC) has been implicated in both integration and conflict resolution in sentence comprehension. Most evidence in favor of the integration account comes from processing ambiguous or anomalous sentences, which also poses a demand for conflict resolution. In two eye-tracking experiments we studied the role of VLPFC in integration when demands for conflict resolution were minimal. Two closely-matched groups of individuals with chronic post-stroke aphasia were tested: the Anterior group had damage to left VLPFC, whereas the Posterior group had left temporo-parietal damage. In Experiment 1 a semantic cue (e.g., "She will eat the apple") uniquely marked the target (apple) among three distractors that were incompatible with the verb. In Experiment 2 phonological cues (e.g., "She will see an eagle."/"She will see a bear.") uniquely marked the target among three distractors whose onsets were incompatible with the cue (e.g., all consonants when the target started with a vowel). In both experiments, control conditions had a similar format, but contained no semantic or phonological contextual information useful for target integration (e.g., the verb "see", and the determiner "the"). All individuals in the Anterior group were slower in using both types of contextual information to locate the target than were individuals in the Posterior group. These results suggest a role for VLPFC in integration beyond conflict resolution. We discuss a framework that accommodates both integration and conflict resolution. |
Nazbanou Nozari; John C. Trueswell; Sharon L. Thompson-Schill In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1942–1953, 2016. @article{Nozari2016a, During sentence comprehension, real-time identification of a referent is driven both by local, context-independent lexical information and by more global sentential information related to the meaning of the utterance as a whole. This paper investigates the cognitive factors that limit the consideration of referents that are supported by local lexical information but not supported by more global sentential information. In an eye-tracking paradigm, participants heard sentences like "She will eat the red pear" while viewing four black-and-white (colorless) line-drawings. In the experimental condition, the display contained a "local attractor" (e.g., a heart), which was locally compatible with the adjective but incompatible with the context ("eat"). In the control condition, the local attractor was replaced by a picture which was incompatible with the adjective (e.g., "igloo"). A second factor manipulated contextual constraint, by using either a constraining verb (e.g., "eat"), or a non-constraining one (e.g., "see"). Results showed consideration of the local attractor, the magnitude of which was modulated by verb constraint, but also by each subject's cognitive control abilities, as measured in a separate Flanker task run on the same subjects. The findings are compatible with a processing model in which the interplay between local attraction, context, and domain-general control mechanisms determines the consideration of possible referents. |
Antje Nuthmann; George L. Malcolm Eye guidance during real-world scene search: The role color plays in central and peripheral vision Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–16, 2016. @article{Nuthmann2016, How does the availability of color across the visual field facilitate gaze during real-world search? To answer this question, the presence of color in central or peripheral vision was manipulated using a 5° gaze-contingent window that followed participants' gaze. Accordingly, scenes were presented in full color (C), grey in central vision and colored in peripheral vision (G-C), colored in central vision and grey in peripheral vision (C-G), and in grey (G). The color conditions were crossed with a manipulation of the search cue: the search object was cued either with a word label or a picture of the target. Across color conditions, search was faster during target template guided search. Search time costs were observed in the C-G and G conditions, highlighting the importance of color in peripheral vision. In addition, a gaze-data-based decomposition of search time revealed color-mediated effects on specific sub-processes of search. When color was not available in peripheral vision, it took longer to initiate search, and to locate the search object in the scene. When color was not available in central vision, however, the process of verifying the identity of the target was prolonged. In conclusion, color information in peripheral vision facilitates saccade target selection. |
Antje Nuthmann; Françoise Vitu; Ralf Engbert; Reinhold Kliegl No evidence for a saccadic range effect for visually guided and memory-guided saccades in simple saccade-targeting tasks Journal Article In: PLoS ONE, vol. 11, no. 9, pp. e0162449, 2016. @article{Nuthmann2016a, Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. |
Marcus Nyström; Dan Witzner Hansen; Richard Andersson; Ignace T. C. Hooge Why have microsaccades become larger? Investigating eye deformations and detection algorithms Journal Article In: Vision Research, vol. 118, pp. 17–24, 2016. @article{Nystroem2016, The reported size of microsaccades is considerably larger today compared to the initial era of microsaccade studies during the 1950s and 1960s. We investigate whether this increase in size is related to the fact that the eye-trackers of today measure different ocular structures than the older techniques, and that the movements of these structures may differ during a microsaccade. In addition, we explore the impact such differences have on subsequent analyses of the eye-tracker signals. In Experiment I, the movement of the pupil as well as the first and fourth Purkinje reflections were extracted from series of eye images recorded during a fixation task. Results show that the different ocular structures produce different microsaccade signatures. In Experiment II, we found that microsaccade amplitudes computed with a common detection algorithm were larger compared to those reported by two human experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason why the reported size of microsaccades has increased is the larger overshoots produced by the modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades. |
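The "common detection algorithm" discussed above is typically a velocity-threshold procedure in the style of Engbert and Kliegl (2003). The simplified sketch below shows the core of such an algorithm; the threshold multiplier and minimum duration are illustrative values, not those evaluated in the paper.

```python
# Simplified sketch of velocity-threshold microsaccade detection in the style
# of Engbert & Kliegl (2003). Parameter choices (lam, min_samples) are
# illustrative, not those of the study above.
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """x, y: gaze position in degrees; fs: sampling rate (Hz).
    Returns a list of (onset_index, offset_index) pairs."""
    vx = np.gradient(x) * fs                      # velocity in deg/s
    vy = np.gradient(y) * fs
    # Median-based (robust) velocity threshold per axis.
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events
```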
E. Oberwelland; Leonhard Schilbach; I. Barisic; Sarah C. Krall; K. Vogeley; Gereon R. Fink; B. Herpertz-Dahlmann; Kerstin Konrad; Martin Schulte-Rüther Look into my eyes: Investigating joint attention using interactive eye-tracking and fMRI in a developmental sample Journal Article In: NeuroImage, vol. 130, pp. 248–260, 2016. @article{Oberwelland2016, Joint attention, the shared attentional focus of at least two people on a third significant object, is one of the earliest steps in social development and an essential aspect of reciprocal interaction. However, the neural basis of joint attention (JA) in the course of development is completely unknown. The present study made use of an interactive eye-tracking paradigm in order to examine the developmental trajectories of JA and the influence of a familiar interaction partner during the social encounter. Our results show that across children and adolescents JA elicits a similar network of "social brain" areas as well as attention and motor control associated areas as in adults. While other-initiated JA particularly recruited visual, attention and social processing areas, self-initiated JA specifically activated areas related to social cognition, decision-making, emotions and motivational/reward processes highlighting the rewarding character of self-initiated JA. Activation was further enhanced during self-initiated JA with a familiar interaction partner. With respect to developmental effects, activation of the precuneus declined from childhood to adolescence and additionally shifted from a general involvement in JA towards a more specific involvement for self-initiated JA. Similarly, the temporoparietal junction (TPJ) was broadly involved in JA in children and more specialized for self-initiated JA in adolescents. Taken together, this study provides first-time data on the developmental trajectories of JA and the effect of a familiar interaction partner incorporating the interactive character of JA, its reciprocity and motivational aspects. |
Emily R. Oby; Sagi Perel; Patrick T. Sadtler; Douglas A. Ruff; Jessica L. Mischel; David F. Montez; Marlene R. Cohen; Aaron P. Batista; Steven M. Chase Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters Journal Article In: Journal of Neural Engineering, vol. 13, no. 3, pp. 1–15, 2016. @article{Oby2016, OBJECTIVE: A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). APPROACH: We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. MAIN RESULTS: The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. SIGNIFICANCE: How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue. |
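The threshold-sweep logic can be sketched generically: convert each trial's voltage trace into a threshold-crossing count at a candidate threshold, then score how informative those counts are about the stimulus. The example below uses a simple binned mutual-information score as a stand-in for the paper's decoding analyses; all names and parameters are illustrative.

```python
# Generic illustration of the threshold-sweep idea: for each candidate voltage
# threshold, convert each trial's voltage trace into a threshold-crossing count
# and score how informative those counts are about the stimulus condition.
import numpy as np
from sklearn.metrics import mutual_info_score

def crossing_counts(trials, threshold):
    """trials: array (n_trials, n_samples) of voltage; count downward crossings."""
    below = trials < threshold
    return np.sum(~below[:, :-1] & below[:, 1:], axis=1)

def sweep_thresholds(trials, labels, thresholds, n_bins=8):
    scores = []
    for th in thresholds:
        counts = crossing_counts(trials, th)
        binned = np.digitize(counts, np.quantile(counts, np.linspace(0, 1, n_bins)))
        scores.append(mutual_info_score(labels, binned))
    return np.array(scores)

# Usage sketch: pick the threshold (e.g. in multiples of the noise SD) that
# maximizes information about the stimulus, rather than the one that best
# isolates single units.
# thresholds = -np.arange(2.0, 6.5, 0.5) * noise_sd
```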
Alexandra S. Mueller; Esther G. González; Chris McNorgan; Martin J. Steinbach; Brian Timney Effects of vertical direction and aperture size on the perception of visual acceleration Journal Article In: Perception, vol. 45, no. 6, pp. 670–683, 2016. @article{Mueller2016a, It is not well understood whether the distance over which moving stimuli are visible affects our sensitivity to the presence of acceleration or our ability to track such stimuli. It is also uncertain whether our experience with gravity creates anisotropies in how we detect vertical acceleration and deceleration. To address these questions, we varied the vertical extent of the aperture through which we presented vertically accelerating and decelerating random dot arrays. We hypothesized that observers would better detect and pursue accelerating and decelerating stimuli that extend over larger than smaller distances. In Experiment 1, we tested the effects of vertical direction and aperture size on acceleration and deceleration detection accuracy. Results indicated that detection is better for downward motion and for large apertures, but there is no difference between vertical acceleration and deceleration detection. A control experiment revealed that our manipulation of vertical aperture size affects the ability to track vertical motion. Smooth pursuit is better (i.e., with higher peak velocities) for large apertures than for small apertures. Our findings suggest that the ability to detect vertical acceleration and deceleration varies as a function of the direction of motion and the vertical extent over which an observer can track the moving stimulus. |
Stefanie Mueller; Katja Fiehler Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement Journal Article In: Neuropsychologia, vol. 87, pp. 63–73, 2016. @article{Mueller2016, Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed 2 conditions in which the target hand remained either stationary at the target location (stationary condition) or was actively moved to the target location, received a touch and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition while body- and gaze-centered coding contributed equally strongly in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets. |
Kinan Muhammed; Sanjay G. Manohar; Michael Ben Yehuda; Trevor T. J. Chong; George Tofaris; Graham Lennox; Marko Bogdanovic; Michele Hu; Masud Husain Reward sensitivity deficits modulated by dopamine are associated with apathy in Parkinson's disease Journal Article In: Brain, vol. 139, no. 10, pp. 2706–2721, 2016. @article{Muhammed2016, Apathy is a debilitating and under-recognized condition that has a significant impact in many neurodegenerative disorders. In Parkinson's disease, it is now known to contribute to worse outcomes and a reduced quality of life for patients and carers, adding to health costs and extending disease burden. However, despite its clinical importance, there remains limited understanding of mechanisms underlying apathy. Here we investigated if insensitivity to reward might be a contributory factor and examined how this relates to severity of clinical symptoms. To do this we created novel ocular measures that indexed motivation level using pupillary and saccadic response to monetary incentives, allowing reward sensitivity to be evaluated objectively. This approach was tested in 40 patients with Parkinson's disease, 31 elderly age-matched control participants and 20 young healthy volunteers. Thirty patients were examined ON and OFF their dopaminergic medication in two counterbalanced sessions, so that the effect of dopamine on reward sensitivity could be assessed. Pupillary dilation to increasing levels of monetary reward on offer provided quantifiable metrics of motivation in healthy subjects as well as patients. Moreover, pupillary reward sensitivity declined with age. In Parkinson's disease, reduced pupillary modulation by incentives was predictive of apathy severity, and independent of motor impairment and autonomic dysfunction as assessed using overnight heart rate variability measures. Reward sensitivity was further modulated by dopaminergic state, with blunted sensitivity when patients were OFF dopaminergic drugs, both in pupillary response and saccadic peak velocity response to reward. These findings suggest that reward insensitivity may be a contributory mechanism to apathy and provide potential new clinical measures for improved diagnosis and monitoring of apathy. |
Manon Mulckhuyse; Edwin S. Dalmaijer Distracted by danger: Temporal and spatial dynamics of visual selection in the presence of threat Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 16, no. 2, pp. 315–324, 2016. @article{Mulckhuyse2016, Threatening stimuli are known to influence attentional and visual processes in order to prioritize selection. For example, previous research showed faster detection of threatening relative to nonthreatening stimuli. This has led to the proposal that threatening stimuli are prioritized automatically via a rapid subcortical route. However, in most studies, the threatening stimulus is always to some extent task relevant. Therefore, it is still unclear if threatening stimuli are automatically prioritized by the visual system. We used the additional singleton paradigm with task-irrelevant fear-conditioned distractors (CS+ and CS-) and indexed the time course of eye movement behavior. The results demonstrate automatic prioritization of threat. First, mean latency of saccades directed to the neutral target was increased in the presence of a threatening (CS+) relative to a nonthreatening distractor (CS-), indicating exogenous attentional capture and delayed disengagement of covert attention. Second, more error saccades were directed to the threatening than to the nonthreatening distractor, indicating a modulation of automatically driven saccades. Nevertheless, cumulative distributions of the saccade latencies showed no modulation of threat for the fastest goal-driven saccades, and threat did not affect the latency of the error saccades to the distractors. Together these results suggest that threatening stimuli are automatically prioritized in attentional and visual selection but not via faster processing. Rather, we suggest that prioritization results from an enhanced representation of the threatening stimulus in the oculomotor system, which drives attentional and visual selection. The current findings are interpreted in terms of a neurobiological model of saccade programming. |
Iris Mulders; Kriszta Szendroi Early association of prosodic focus with alleen 'only': Evidence from eye movements in the visual-world paradigm Journal Article In: Frontiers in Psychology, vol. 7, pp. 150, 2016. @article{Mulders2016, In three visual-world eye tracking studies, we investigated the processing of sentences containing the focus-sensitive operator alleen 'only' and different pitch accents, such as the Dutch Ik heb alleen SELDERIJ aan de brandweerman gegeven 'I only gave CELERY to the fireman' versus Ik heb alleen selderij aan de BRANDWEERMAN gegeven 'I only gave celery to the FIREMAN'. Dutch, like English, allows accent shift to express different focus possibilities. Participants judged whether these utterances match different pictures: in Experiment 1 the Early Stress utterance matched the picture, in Experiment 2 both the Early and Late Stress utterance did, and in Experiment 3 neither did. We found that eye-gaze patterns start to diverge across the conditions already as the indirect object is being heard. Our data also indicate that participants perform anticipatory eye-movements based on the presence of prosodic focus during auditory sentence processing. Our investigation is the first to report the effect of varied prosodic accent placement on different arguments in sentences with a semantic operator, alleen 'only', on the time course of looks in the visual world paradigm. Using an operator in the visual world paradigm allowed us to confirm that prosodic focus information immediately gets integrated into the semantic parse of the proposition. Our study thus provides further evidence for fast, incremental prosodic focus processing in natural language. |
Jana Annina Müller; Dorothea Wendt; Birger Kollmeier; Thomas Brand Comparing eye tracking with electrooculography for measuring individual sentence comprehension duration Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0164627, 2016. @article{Mueller2016b, The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different collective of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. |
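A generic version of a bootstrap over fixation time courses, resampling trials and recording when the mean target-fixation curve first exceeds a criterion, is sketched below. It is offered only as orientation; the criterion, resampling unit, and variable names are assumptions and do not reproduce the procedure of Wendt et al. (2015).

```python
# Hedged sketch of a bootstrap over fixation curves: resample trials with
# replacement, average the binary target-fixation time courses, and record
# the first time point at which the mean curve exceeds a criterion.
import numpy as np

def bootstrap_decision_time(fix_target, times, criterion=0.5, n_boot=2000, seed=0):
    """fix_target: array (n_trials, n_timepoints) of 0/1 target fixation.
    times: time axis in ms. Returns the bootstrap distribution of the
    criterion-crossing time."""
    rng = np.random.default_rng(seed)
    n_trials = fix_target.shape[0]
    estimates = []
    for _ in range(n_boot):
        sample = fix_target[rng.integers(0, n_trials, n_trials)]
        curve = sample.mean(axis=0)
        above = np.nonzero(curve >= criterion)[0]
        if above.size:
            estimates.append(times[above[0]])
    return np.asarray(estimates)

# Usage sketch (fixation_matrix and time_axis_ms are hypothetical inputs):
# est = bootstrap_decision_time(fixation_matrix, time_axis_ms)
# print(np.percentile(est, [2.5, 50, 97.5]))   # median and 95% CI in ms
```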
Aidan P. Murphy; David A. Leopold; Glyn W. Humphreys; Andrew E. Welchman Lesions to right posterior parietal cortex impair visual depth perception from disparity but not motion cues Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 371, pp. 1–17, 2016. @article{Murphy2016, The posterior parietal cortex (PPC) is understood to be active when observers perceive three-dimensional (3D) structure. However, it is not clear how central this activity is in the construction of 3D spatial representations. Here, we examine whether PPC is essential for two aspects of visual depth perception by testing patients with lesions affecting this region. First, we measured subjects' ability to discriminate depth structure in various 3D surfaces and objects using binocular disparity. Patients with lesions to right PPC (N = 3) exhibited marked perceptual deficits on these tasks, whereas those with left hemisphere lesions (N = 2) were able to reliably discriminate depth as accurately as control subjects. Second, we presented an ambiguous 3D stimulus defined by structure from motion to determine whether PPC lesions influence the rate of bistable perceptual alternations. Patients' percept durations for the 3D stimulus were generally within a normal range, although the two patients with bilateral PPC lesions showed the fastest perceptual alternation rates in our sample. Intermittent stimulus presentation reduced the reversal rate similarly across subjects. Together, the results suggest that PPC plays a causal role in both inferring and maintaining the perception of 3D structure with stereopsis supported primarily by the right hemisphere, but do not lend support to the view that PPC is a critical contributor to bistable perceptual alternations. |
Peter R. Murphy; Evert Boonstra; Sander Nieuwenhuis Global gain modulation generates time-dependent urgency during perceptual choice in humans Journal Article In: Nature Communications, vol. 7, pp. 13526, 2016. @article{Murphy2016a, Decision-makers must often balance the desire to accumulate information with the costs of protracted deliberation. Optimal, reward-maximizing decision-making can require dynamic adjustment of this speed/accuracy trade-off over the course of a single decision. However, it is unclear whether humans are capable of such time-dependent adjustments. Here, we identify several signatures of time-dependency in human perceptual decision-making and highlight their possible neural source. Behavioural and model-based analyses reveal that subjects respond to deadline-induced speed pressure by lowering their criterion on accumulated perceptual evidence as the deadline approaches. In the brain, this effect is reflected in evidence-independent urgency that pushes decision-related motor preparation signals closer to a fixed threshold. Moreover, we show that global modulation of neural gain, as indexed by task-related fluctuations in pupil diameter, is a plausible biophysical mechanism for the generation of this urgency. These findings establish context-sensitive time-dependency as a critical feature of human decision-making. |
Andriy Myachykov; Rob Ellis; Angelo Cangelosi; Martin H. Fischer Ocular drift along the mental number line Journal Article In: Psychological Research, vol. 80, no. 3, pp. 379–388, 2016. @article{Myachykov2016, We examined the spontaneous association between numbers and space by documenting attention deployment and the time course of associated spatial-numerical mapping with and without overt oculomotor responses. In Experiment 1, participants maintained central fixation while listening to number names. In Experiment 2, they made horizontal target-directed saccades following auditory number presentation. In both experiments, we continuously measured spontaneous ocular drift in horizontal space during and after number presentation. Experiment 2 also measured visual-probe-directed saccades following number presentation. Reliable ocular drift congruent with a horizontal mental number line emerged during and after number presentation in both experiments. Our results provide new evidence for the implicit and automatic nature of the oculomotor resonance effect associated with the horizontal spatial-numerical mapping mechanism. |
Malik M. Naeem Mannan; Shinjung Kim; Myung Yung Jeong; M. Ahmad Kamran Hybrid EEG—Eye tracker: Automatic identification and removal of eye movement and blink artifacts from electroencephalographic signal Journal Article In: Sensors, vol. 16, pp. 241, 2016. @article{NaeemMannan2016, Contamination of eye movement and blink artifacts in Electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and could result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal activity related EEG signals in the non-artifactual zone. The comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data. |
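For comparison with the hybrid approach proposed here, the snippet below shows the generic ICA-based ocular artifact removal workflow in MNE-Python against which such methods are often benchmarked. It is not the authors' algorithm; the file and channel names are placeholders.

```python
# Generic ICA-based ocular artifact removal with MNE-Python, shown for
# orientation only; it is not the hybrid EEG/eye-tracker algorithm proposed
# in the paper above. File name and EOG channel name are placeholders.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)   # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)        # ICA works best on high-passed data

ica = ICA(n_components=20, random_state=42)
ica.fit(raw)

# Identify components correlated with the EOG channel and remove them.
eog_inds, scores = ica.find_bads_eog(raw, ch_name="EOG061")    # placeholder channel
ica.exclude = eog_inds
clean = ica.apply(raw.copy())
```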
Chie Nakamura; Manabu Arai Persistence of initial misanalysis with no referential ambiguity Journal Article In: Cognitive Science, vol. 40, no. 4, pp. 909–940, 2016. @article{Nakamura2016, Previous research reported that in processing structurally ambiguous sentences comprehenders often preserve an initial incorrect analysis even after adopting a correct analysis following structural disambiguation. One criticism is that the sentences tested in previous studies involved referential ambiguity and allowed comprehenders to make inferences about the initial interpretation using pragmatic information, suggesting the possibility that the initial analysis persisted due to comprehenders' pragmatic inference but not to their failure to perform complete reanalysis of the initial misanalysis. Our study investigated this by testing locally ambiguous relative clause sentences in Japanese, in which the initial misinterpretation contradicts the correct interpretation. Our study using a self-paced reading technique demonstrated evidence for the persistence of the initial analysis with this structure. The results from an eye-tracking study further suggested that the phenomenon directly reflected the amount of support given to the initial incorrect analysis prior to disambiguating information: The more supported the incorrect main clause analysis was, the more likely comprehenders were to preserve the analysis even after the analysis was falsified. Our results thus demonstrated that the preservation of the initial analysis occurs not due to referential ambiguities but to comprehenders' difficulty to fully revise the highly supported initial interpretation. |
Hamidreza Namazi; Vladimir V. Kulish; Amin Akrami The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal Journal Article In: Scientific Reports, vol. 6, pp. 26639, 2016. @article{Namazi2016, One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the 'complex' visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the Electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders. |
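Fractality of a fixational eye movement or EEG time series is often quantified with detrended fluctuation analysis (DFA). The sketch below implements a basic DFA exponent as a generic illustration; it is not necessarily the fractal measure used in this study, and the scale range is an arbitrary choice.

```python
# Generic detrended fluctuation analysis (DFA) sketch for quantifying the
# fractal scaling of a fixational eye movement or EEG time series.
import numpy as np

def dfa_exponent(signal, scales=None):
    x = np.cumsum(signal - np.mean(signal))           # integrated profile
    n = len(x)
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(n // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n_seg = n // s
        segs = x[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)               # local linear detrend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Scaling (fractal) exponent: slope of log fluctuation vs. log scale.
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(2)
print(round(dfa_exponent(rng.standard_normal(5000)), 2))   # white noise -> ~0.5
```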