All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2015 |
Mithun Diwakar; Deborah L. Harrington; Jun Maruta; Jamshid Ghajar; Fady El-Gabalawy; Laura Muzzatti; Maurizio Corbetta; Ming-Xiong X. Huang; Roland R. Lee Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury Journal Article In: NeuroImage: Clinical, vol. 8, pp. 210–223, 2015. @article{Diwakar2015, A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. 
Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontotemporal areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries. |
Helen F. Dodd; Jennifer L. Hudson; Tracey A. Williams; Talia Morris; Rebecca S. Lazarus; Yulisha Byrow Anxiety and attentional bias in preschool-aged children: An eyetracking study Journal Article In: Journal of Abnormal Child Psychology, vol. 43, no. 6, pp. 1055–1065, 2015. @article{Dodd2015, Extensive research has examined attentional bias for threat in anxious adults and school-aged children but it is unclear when this anxiety-related bias is first established. This study uses eyetracking technology to assess attentional bias in a sample of 83 children aged 3 or 4 years. Of these, 37 (19 female) met criteria for an anxiety disorder and 46 (30 female) did not. Gaze was recorded during a free-viewing task with angry-neutral face pairs presented for 1250 ms. There was no indication of between-group differences in threat bias, with both anxious and non-anxious groups showing vigilance for angry faces as well as longer dwell times to angry over neutral faces. Importantly, however, the anxious participants spent significantly less time looking at the faces overall, when compared to the non-anxious group. The results suggest that both anxious and non-anxious preschool-aged children preferentially attend to threat but that anxious children may be more avoidant of faces than non-anxious children. |
Peter H. Donaldson; Caroline T. Gurvich; Joanne Fielding; Peter G. Enticott Exploring associations between gaze patterns and putative human mirror neuron system activity Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 523, 2015. @article{Donaldson2015, The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18–40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern. |
Tom Foulsham; Maria Lock How the eyes tell lies: Social gaze during a preference task Journal Article In: Cognitive Science, vol. 39, no. 7, pp. 1704–1726, 2015. @article{Foulsham2015, Social attention is thought to require detecting the eyes of others and following their gaze. To be effective, observers must also be able to infer the person's thoughts and feelings about what he or she is looking at, but this has only rarely been investigated in laboratory studies. In this study, participants' eye movements were recorded while they chose which of four patterns they preferred. New observers were subsequently able to reliably guess the preference response by watching a replay of the fixations. Moreover, when asked to mislead the person guessing, participants changed their looking behavior and guessing success was reduced. In a second experiment, naïve participants could also guess the preference of the original observers but were unable to identify trials which were lies. These results confirm that people can spontaneously use the gaze of others to infer their judgments, but also that these inferences are open to deception. |
John J. Foxe; Sophie Molholm; Victor A. Del Bene; Hans Peter Frey; Natalie N. Russo; Daniella Blanco; Dave Saint-Amour; Lars A. Ross In: Cerebral Cortex, vol. 25, no. 2, pp. 298–312, 2015. @article{Foxe2015, Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally, while environmental noise levels were systematically manipulated, comparing them with age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which were increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5–12 year olds), but were fully ameliorated in ASD children entering adolescence (13–15 year olds). The severity of multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children. |
Alessio Fracasso; Lisandro N. Kaunitz; David Melcher Saccade kinematics modulate perisaccadic perception Journal Article In: Journal of Vision, vol. 15, no. 3, pp. 1–12, 2015. @article{Fracasso2015, Around the time of execution of an eye movement, participants systematically misperceive the spatial location of briefly flashed visual stimuli. This phenomenon, known as perisaccadic mislocalization, is thought to involve an active process that takes into account the motor plan (efference copy) of the upcoming saccade. While it has been proposed that the motor system anticipates and informs the visual system about the upcoming eye movements, at present the type and detail of information carried by this motor signal remains unclear. Some authors have argued that the efference copy conveys only coarse information about the direction of the eye movement, while a second theoretical view proposes that it provides specific details about the direction, amplitude, and velocity of the saccade to come. To test between these alternatives, we investigated the influence of saccade parameters on a perisaccadic unmasking task in which performance in discriminating the identity of a target (face or house) followed by a trailing mask is dramatically improved around the time of saccade onset. We found that the amplitude and peak velocity of the upcoming saccade modulated target perception, even for stimuli presented well before saccadic onset. We developed a predictive model for the generation of the efference copy that incorporates both saccade amplitude and saccade velocity planning prior to saccade execution. Overall, these results suggest that the efference copy stores specific information about the parameters of upcoming eye movement and that these parameters influence perception even prior to saccade onset. |
Michael Frazier; Lauren Ackerman; Peter Baumann; David Potter; Masaya Yoshida Wh-filler-gap dependency formation guides reflexive antecedent search Journal Article In: Frontiers in Psychology, vol. 6, pp. 1504, 2015. @article{Frazier2015, Prior studies on online sentence processing have shown that the parser can resolve non-local dependencies rapidly and accurately. This study investigates the interaction between the processing of two such non-local dependencies: wh-filler-gap dependencies (WhFGD) and reflexive-antecedent dependencies. We show that reflexive-antecedent dependency resolution is sensitive to the presence of a WhFGD, and argue that the filler-gap dependency established by WhFGD resolution is selected online as the antecedent of a reflexive dependency. We investigate the processing of constructions like (1), where two NPs might be possible antecedents for the reflexive, namely which cowgirl and Mary. Even though Mary is linearly closer to the reflexive, the only grammatically licit antecedent for the reflexive is the more distant wh-NP, which cowgirl. (1). Which cowgirl did Mary expect to have injured herself due to negligence? Four eye-tracking text-reading experiments were conducted on examples like (1), differing in whether the embedded clause was non-finite (1 and 3) or finite (2 and 4), and in whether the tail of the wh-dependency intervened between the reflexive and its closest overt antecedent (1 and 2) or the wh-dependency was associated with a position earlier in the sentence (3 and 4). The results of Experiments 1 and 2 indicate the parser accesses the result of WhFGD formation during reflexive antecedent search. The resolution of a wh-dependency alters the representation that reflexive antecedent search operates over, allowing the grammatical but linearly distant antecedent to be accessed rapidly. 
In the absence of a long-distance WhFGD (Experiments 3 and 4), wh-NPs were not found to impact reading times of the reflexive, indicating that the parser's ability to select distant wh-NPs as reflexive antecedents crucially involves syntactic structure. |
Tom P. Freeman; Ravi K. Das; Sunjeev K. Kamboj; H. Valerie Curran Dopamine, urges to smoke and the relative salience of drug versus non-drug reward Journal Article In: Social Cognitive and Affective Neuroscience, vol. 10, no. 1, pp. 85–92, 2015. @article{Freeman2015, When addicted individuals are exposed to drug-related stimuli, dopamine release is thought to mediate incentive salience attribution, increasing attentional bias, craving and drug seeking. It is unclear whether dopamine acts specifically on drug cues versus other rewards, and if these effects correspond with craving and other forms of cognitive bias. Here, we administered the dopamine D2/D3 agonist pramipexole (0.5 mg) to 16 tobacco smokers in a double-blind placebo-controlled crossover design. Visual fixations on smoking and money images were recorded alongside smoking urges and fluency tasks. Pramipexole attenuated a marked bias in initial orienting towards smoking relative to money but did not alter a maintained attentional bias towards smoking. Pramipexole decreased urges to smoke retrospectively after the task but not on a state scale. Fewer smoking words were generated after pramipexole but phonological and semantic fluency were preserved. Although these treatment effects did not correlate with each other, changes in initial orienting towards smoking and money were inversely related to baseline scores. In conclusion, pramipexole can reduce the salience of an addictive drug compared with other rewards and elicit corresponding changes in smoking urges and cognitive bias. These reward-specific and baseline-dependent effects support an 'inverted-U' shaped profile of dopamine in addiction. |
Steven Frisson About bound and scary books: The processing of book polysemies Journal Article In: Lingua, vol. 157, pp. 17–35, 2015. @article{Frisson2015, There are competing views on the on-line processing of polysemous words such as book, which have distinct but semantically related senses (as in bound book vs. scary book). According to a Sense-Enumeration Lexicon (SEL) view, different senses are represented separately, just as the different meanings of a homonym (e.g. bank). According to an underspecification view, initial processing does not distinguish between the different senses. According to a Relevance Theory (RT)-inspired view, the context will immediately guide interpretation to a specific sense. In Experiment 1, participants indicated whether an adjective-noun construction made sense or not. Switching from one sense to another was costly, but there was no effect of sense frequency (contra SEL). In Experiment 2, eye movements were recorded when participants read sentences in which a polyseme was disambiguated to a specific sense following a neutral context, a sense was repeated, or a sense was switched. The results showed no effect of sense dominance in the neutral condition, no advantage when a sense was repeated, and a cost when switched, especially when switching from a concrete to an abstract interpretation. These data cannot be fitted in an SEL or RT-inspired account, questioning the validity of both as a processing account. |
Adam Frost; Matthias Niemeier Suppression and reversal of motion perception around the time of the saccade Journal Article In: Frontiers in Systems Neuroscience, vol. 9, pp. 143, 2015. @article{Frost2015, We make fast, "saccadic" eye movements to capture finely resolved foveal snapshots of the world but these saccades cause motion artefacts. The artefacts go unnoticed, perhaps because the brain suppresses them through subcortical oculomotor signals feeding back into visual cortex. Opposing views, however, claim that passive mechanisms suffice: saccadic shearing forces might render the retina insensitive to the artefacts or post-saccadic snapshots might mask them before they enter consciousness. Crucially, only active suppression could explain perceptual changes that precede saccades, but existing evidence for presaccadic misperception is ill-suited for addressing this issue: Previous studies have found misperceptions of space for objects briefly flashed before saccades, but perhaps only because observers confused the timing of flashes and saccades before they could be tested ("postdiction"), and presaccadic motion perception might have appeared to decline because motion stimuli persisted past eye movement onset. Here we addressed these concerns using briefly flashed two-frame animations (50 ms) to probe people's motion sensitivity during and around saccades. We found that sensitivity declined before saccade onset, even when the probe appeared entirely outside the saccade, and this sensitivity decline was present for motion in every direction relative to the saccade, ruling out problems with postdiction. Intriguingly, brief periods during the saccade produced negative sensitivity as if motion was reversed, arguably due to postsaccadic enhancement. 
These data suggest that motion perception is minimized during saccades through active suppression, complementing neurophysiological findings of colliculo-pulvinar projections that suppress the cortical middle temporal area around the time of the saccade. |
Michele Furlan; Andrew T. Smith; Robin Walker Activity in the human superior colliculus relating to endogenous saccade preparation and execution Journal Article In: Journal of Neurophysiology, vol. 114, no. 2, pp. 1048–1058, 2015. @article{Furlan2015, In recent years a small number of studies have applied functional imaging techniques to investigate visual responses in the human superior colliculus (SC), but few have investigated its oculomotor functions. Here, in two experiments, we examined activity associated with endogenous saccade preparation. We used 3-T fMRI to record the hemodynamic activity in the SC while participants were either preparing or executing saccadic eye movements. Our results showed that not only executing a saccade (as previously shown) but also preparing a saccade produced an increase in the SC hemodynamic activity. The saccade-related activity was observed in the contralateral and to a lesser extent the ipsilateral SC. A second experiment further examined the contralateral mapping of saccade-related activity with a larger range of saccade amplitudes. Increased activity was again observed in both the contralateral and ipsilateral SC that was evident for large as well as small saccades. This suggests that the ipsilateral component of the increase in BOLD is not due simply to small-amplitude saccades producing bilateral activity in the foveal fixation zone. These studies provide the first evidence of presaccadic preparatory activity in the human SC and reveal that fMRI can detect activity consistent with that of buildup neurons found in the deeper layers of the SC in studies of nonhuman primates. |
Benjamin Gagl; Stefan Hawelka; Heinz Wimmer On sources of the word length effect in young readers Journal Article In: Scientific Studies of Reading, vol. 19, no. 4, pp. 289–306, 2015. @article{Gagl2015, We investigated how letter length, phoneme length, and consonant clusters contribute to the word length effect in 2nd- and 4th-grade children. They read words from three different conditions: In one condition, letter length increased but phoneme length did not due to multiletter graphemes (Haus-Bauch-Schach). In the remaining conditions, phoneme length increased in correspondence with letter length. One presented monosyllabic words with consonant clusters (Herbst); the other presented disyllabic words without consonant clusters (Kö.nig). Phoneme and letter length contributed to the length effect in naming latencies. Words with consonant clusters elicited the largest length effect. We interpreted this finding as reflecting difficulties of young readers with accessing the output phonology of the tightly coarticulated consonant clusters from the separate phonemes delivered from serial grapheme-to-phoneme conversions. Moreover, eye-movement data indicated that increased reading speed, accompanied with decreased word length effects, is due to more efficient grapheme-to-phoneme conversions rather than the emergence of whole-word recognition. |
Julián Espinosa; Ana Belén Roig; Jorge Pérez; David Mas In: BioMedical Engineering Online, vol. 14, no. 1, pp. 1–12, 2015. @article{Espinosa2015, BACKGROUND: The pupillary light reflex characterizes the direct and consensual response of the eye to the perceived brightness of a stimulus. It has been used as indicator of both neurological and optic nerve pathologies. As with other eye reflexes, this reflex constitutes an almost instantaneous movement and is linked to activation of the same midbrain area. The latency of the pupillary light reflex is around 200 ms, although the literature also indicates that the fastest eye reflexes last 20 ms. Therefore, a system with sufficiently high spatial and temporal resolutions is required for accurate assessment. In this study, we analyzed the pupillary light reflex to determine whether any small discrepancy exists between the direct and consensual responses, and to ascertain whether any other eye reflex occurs before the pupillary light reflex. METHODS: We constructed a binocular video-oculography system with two high-speed cameras that simultaneously focused on both eyes. This was then employed to assess the direct and consensual responses of each eye using our own algorithm based on the Circular Hough Transform to detect and track the pupil. Time parameters describing the pupillary light reflex were obtained from the radius time-variation. Eight healthy subjects (4 women, 4 men, aged 24–45) participated in this experiment. RESULTS: Our system, which has a resolution of 15 microns and 4 ms, obtained time parameters describing the pupillary light reflex that were similar to those reported in previous studies, with no significant differences between direct and consensual reflexes. Moreover, it revealed an incomplete reflex blink and an upward eye movement at around 100 ms that may correspond to Bell's phenomenon. CONCLUSIONS: Direct and consensual pupillary responses do not show any significant temporal differences. 
The system and method described here could prove useful for further assessment of pupillary and blink reflexes. The resolution obtained revealed the early incomplete blink and upward eye movement reported here. |
Jonas Everaert; Ernst H. W. Koster Interactions among emotional attention, encoding, and retrieval of ambiguous information: An eye-tracking study Journal Article In: Emotion, vol. 15, no. 5, pp. 539–543, 2015. @article{Everaert2015, Emotional biases in attention modulate encoding of emotional material into long-term memory, but little is known about the role of such attentional biases during emotional memory retrieval. The present study investigated how emotional biases in memory are related to attentional allocation during retrieval. Forty-nine individuals encoded emotionally positive and negative meanings derived from ambiguous information and then searched their memory for encoded meanings in response to a set of retrieval cues. The remember/know/new procedure was used to classify memories as recollection-based or familiarity-based, and gaze behavior was monitored throughout the task to measure attentional allocation. We found that a bias in sustained attention during recollection-based, but not familiarity-based, retrieval predicted subsequent memory bias toward positive versus negative material following encoding. Thus, during emotional memory retrieval, attention affects controlled forms of retrieval (i.e., recollection) but does not modulate relatively automatic, familiarity-based retrieval. These findings enhance understanding of how distinct components of attention regulate the emotional content of memories. Implications for theoretical models and emotion regulation are discussed. |
Michel Failing; Tom Nissens; Daniel Pearson; Mike Le Pelley; Jan Theeuwes Oculomotor capture by stimuli that signal the availability of reward Journal Article In: Journal of Neurophysiology, vol. 114, no. 4, pp. 2316–2327, 2015. @article{Failing2015, It is well known that eye movement patterns are influenced by both goal- and salience-driven factors. Recent studies, however, have demonstrated that objects that are nonsalient and task irrelevant can still capture our eyes if moving our eyes to those objects has previously produced reward. Here we demonstrate that training such an association between eye movements to an object and delivery of reward is not needed. Instead, an object that merely signals the availability of reward captures the eyes even when it is physically nonsalient and never relevant for the task. Furthermore, we show that oculomotor capture by reward is more reliably observed in saccades with short latencies. We conclude that a stimulus signaling high reward has the ability to capture the eyes independently of bottom-up physical salience or top-down task relevance and that reward affects early selection processes. |
Kaitlin Falkauskas; Victor Kuperman When experience meets language statistics: Individual variability in processing english compound words Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 6, pp. 1607–1627, 2015. @article{Falkauskas2015, Statistical patterns of language use demonstrably affect language comprehension and language production. This study set out to determine whether the variable amount of exposure to such patterns leads to individual differences in reading behavior as measured via eye-movements. Previous studies have demonstrated that more proficient readers are less influenced by distributional biases in language (e.g., frequency, predictability, transitional probability) than poor readers. We hypothesized that a probabilistic bias that is characteristic of written but not spoken language would preferentially affect readers with greater exposure to printed materials in general and to the specific pattern engendering the bias. Readers of varying reading experience were presented with sentences including English compound words that can occur in 2 spelling formats with differing probabilities: concatenated (windowsill, used 40% of the time) or spaced (window sill, 60%). Linear mixed effects multiple regression models fitted to the eye-movement measures showed that the probabilistic bias toward the presented spelling had a stronger facilitatory effect on compounds that occurred more frequently (in any spelling) or belonged to larger morphological families, and on readers with higher scores on a test of exposure-to-print. Thus, the amount of support toward the compound's spelling is effectively exploited when reading, but only when the spelling patterns are entrenched in an individual's mental lexicon via overall exposure to print and to compounds with alternating spelling. 
We argue that research on the interplay of language use and structure is incomplete without proper characterization of how particular individuals, with varying levels of experience and skill, learn these language structures. |
Thomas A. Farmer; Shaorong Yan; Klinton Bicknell; Michael K. Tanenhaus Form-to-expectation matching effects on first-pass eye movement measures during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 4, pp. 958–976, 2015. @article{Farmer2015, Recent Electroencephalography/Magnetoencephalography (EEG/MEG) studies suggest that when contextual information is highly predictive of some property of a linguistic signal, expectations generated from context can be translated into surprisingly low-level estimates of the physical form-based properties likely to occur in subsequent portions of the unfolding signal. Whether form-based expectations are generated and assessed during natural reading, however, remains unclear. We monitored eye movements while participants read phonologically typical and atypical nouns in noun-predictive contexts (Experiment 1), demonstrating that when a noun is strongly expected, fixation durations on first-pass eye movement measures, including first fixation duration, gaze durations, and go-past times, are shorter for nouns with category typical form-based features. In Experiments 2 and 3, typical and atypical nouns were placed in sentential contexts normed to create expectations of variable strength for a noun. Context and typicality interacted significantly at gaze duration. These results suggest that during reading, form-based expectations that are translated from higher-level category-based expectancies can facilitate the processing of a word in context, and that their effect on lexical processing is graded based on the strength of category expectancy. |
Heather J. Ferguson; Ian Apperly; Jumana Ahmad; Markus Bindemann; James Cane Task constraints distinguish perspective inferences from perspective use during discourse interpretation in a false belief task Journal Article In: Cognition, vol. 139, pp. 50–70, 2015. @article{Ferguson2015, Interpreting other peoples' actions relies on an understanding of their current mental states (e.g. beliefs, desires and intentions). In this paper, we distinguish between listeners' ability to infer others' perspectives and their explicit use of this knowledge to predict subsequent actions. In a visual-world study, two groups of participants (passive observers vs. active participants) watched short videos, depicting transfer events, where one character ('Jane') either held a true or false belief about an object's location. We tracked participants' eye-movements around the final visual scene, time-locked to related auditory descriptions (e.g. "Jane will look for the chocolates in the container on the left."). Results showed that active participants had already inferred the character's belief in the 1-s preview period prior to auditory onset, before it was possible to use this information to predict an outcome. Moreover, they used this inference to correctly anticipate reference to the object's initial location on false belief trials at the earliest possible point (i.e. from "Jane" onwards). In contrast, passive observers only showed evidence of a belief inference from the onset of "Jane", and did not show reliable use of this inference to predict Jane's behaviour on false belief trials until much later, when the location ("left/right") was auditorily available. These results show that active engagement in a task activates earlier inferences about others' perspectives, and drives immediate use of this information to anticipate others' actions, compared to passive observers, who are susceptible to influences from egocentric or reality biases. 
Finally, we review evidence that using other peoples' perspectives to predict their behaviour is more cognitively effortful than simply using one's own. |
Gerardo Fernández; Liliana R. Castro; Marcela Schumacher; Osvaldo E. Agamennoni Diagnosis of mild Alzheimer disease through the analysis of eye movements during reading Journal Article In: Journal of Integrative Neuroscience, vol. 14, no. 1, pp. 1–13, 2015. @article{Fernandez2015, Reading requires the integration of several central cognitive subsystems, ranging from attention and oculomotor control to word identification and language comprehension. Reading saccades and fixations contain information that can be correlated with word properties. When reading a sentence, the brain must decide where to direct the next saccade according to what has been read up to the actual fixation. In this process, the retrieval memory brings information about the current word features and attributes into working memory. According to this information, the prefrontal cortex predicts and triggers the next saccade. The frequency and cloze predictability of the fixated word, the preceding words and the upcoming ones affect when and where the eyes will move next. In this paper we present a diagnostic technique for early stage cognitive impairment detection by analyzing eye movements during reading proverbs. We performed a case-control study involving 20 patients with probable Alzheimer's disease and 40 age-matched, healthy control subjects. The measurements were analyzed using linear mixed-effects models, revealing that eye movement behavior while reading can provide valuable information about whether a person is cognitively impaired. To the best of our knowledge, this is the first study using word-based properties, proverbs and linear mixed-effect models for identifying cognitive abnormalities. |
Gerardo Fernández; Marcela Schumacher; Liliana Castro; David Orozco; Osvaldo Agamennoni Patients with mild Alzheimer's disease produced shorter outgoing saccades when reading sentences Journal Article In: Psychiatry Research, vol. 229, no. 1-2, pp. 470–478, 2015. @article{Fernandez2015a, In the present work we analyzed forward saccades of thirty-five elderly subjects (Controls) and of thirty-five patients with mild Alzheimer's disease (AD) during reading of regular and high-predictable sentences. While they read, their eye movements were recorded. Forward saccade amplitudes as a function of word predictability were clearly longer in Controls. Our results suggest that Controls might use stored information about words to enhance their reading performance. Further, cloze predictability increased outgoing saccade amplitudes, and this increase was stronger in high-predictable sentences. In contrast, patients with mild AD evidenced reduced forward saccades even at early stages of the disease. This reduction might reveal impairments in brain areas such as those corresponding to working memory, memory retrieval, and semantic memory functions that are already present at early stages of AD. Our findings might be relevant for expanding the options for the early detection and monitoring of AD in its early stages. Furthermore, eye movements during reading could provide a new tool for measuring a drug's impact on patients' behavior. |
Phillip D. Fletcher; Jennifer M. Nicholas; Timothy J. Shakespeare; Laura E. Downey; Hannah L. Golden; Jennifer L. Agustus; Camilla N. Clark; Catherine J. Mummery; Jonathan M. Schott; Sebastian J. Crutch; Jason D. Warren Dementias show differential physiological responses to salient sounds Journal Article In: Frontiers in Behavioral Neuroscience, vol. 9, pp. 73, 2015. @article{Fletcher2015, Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ("looming") or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia |
Rebecca M. Foerster; Werner X. Schneider Expectation violations in sensorimotor sequences: Shifting from LTM-based attentional selection to visual search Journal Article In: Annals of the New York Academy of Sciences, vol. 1339, no. 1, pp. 45–59, 2015. @article{Foerster2015, Long-term memory (LTM) delivers important control signals for attentional selection. LTM expectations have an important role in guiding the task-driven sequence of covert attention and gaze shifts, especially in well-practiced multistep sensorimotor actions. What happens when LTM expectations are disconfirmed? Does a sensory-based visual-search mode of attentional selection replace the LTM-based mode? What happens when prior LTM expectations become valid again? We investigated these questions in a computerized version of the number-connection test. Participants clicked on spatially distributed numbered shapes in ascending order while gaze was recorded. Sixty trials were performed with a constant spatial arrangement. In 20 consecutive trials, either numbers, shapes, both, or no features switched position. In 20 reversion trials, participants worked on the original arrangement. Only the sequence-affecting number switches elicited slower clicking, visual search-like scanning, and lower eye-hand synchrony. The effects were neither limited to the exchanged numbers nor to the corresponding actions. Thus, expectation violations in a well-learned sensorimotor sequence cause a regression from LTM-based attentional selection to visual search beyond deviant-related actions and locations. Effects lasted for several trials and reappeared during reversion. |
Rebecca M. Foerster; Werner X. Schneider Anticipatory eye movements in sensorimotor actions: On the role of guiding fixations during learning Journal Article In: Cognitive Processing, vol. 16, no. 1, pp. 227–231, 2015. @article{Foerster2015a, During object-based sensorimotor tasks, humans look at target locations for subsequent hand actions. These anticipatory eye movements or guiding fixations seem to be necessary for a successful performance. By practicing such a sensorimotor task, humans become faster and perform fewer guiding fixations (Foerster and Schneider, In Prep; Foerster et al. in J Vis 11(7):9:1–16, 2011). We aimed at clarifying whether this decrease in guiding fixations is the cause or effect of faster task completion time. Participants may learn to use less visual input (fewer fixations) allowing shorter completion times. Alternatively, participants may speed up their hand movements (e.g., more efficient motor control) leaving less time for visual intake. The latter would imply that the number of fixations is directly connected to task speed. We investigated the relationship between the number of fixations and task speed in a computerized version of the number connection task (Foerster and Schneider in Ann N Y Acad Sci 2015. doi:10.1111/nyas.12729). Eye movements were recorded while participants clicked in ascending order on nine numbered circles. In 90 learning trials, they clicked the sequence with a constant spatial configuration as fast as possible. In the subsequent experimental phase, they performed 30 trials again under high-speed instruction and 30 trials under slow-speed instruction. During slow-speed instruction, fixation rates were lower, fixation durations were longer, and more fixations were performed than during high-speed instruction. The results suggest that the number of fixations depends on both the need for visual intake and task completion time. It seems that the decrease in anticipatory eye movements through sensorimotor learning is at the same time a result and a cause of faster task performance. |
Francesca Foppolo; Marco Marelli; Luisa Meroni; Andrea Gualmini Hey little sister, who's the only one? Modulating informativeness in the resolution of privative ambiguity Journal Article In: Cognitive Science, vol. 39, no. 7, pp. 1646–1674, 2015. @article{Foppolo2015, We present two eye-tracking experiments on the interpretation of sentences like "The tall girl is (not) the only one that …," which are ambiguous between the anaphoric (the only girl that …) and the exophoric interpretation (the only individual that …). These interpretations differ in informativeness: in a positive context, the exophoric (strong) reading entails the anaphoric (weak), while in a negative context the entailment pattern is reversed and the anaphoric reading is the strongest one. We tested whether adults rely on considerations about informativeness in solving the ambiguity. The results show that participants interpreted one exophorically in both positive and negative contexts. Given these findings, we cast doubt on the idea that informativeness plays a role in ambiguity resolution and propose a Principle of Maximal Exploitation: When a context is provided, adults extend their domain of evaluation to include the whole scenario, independently from truth-conditional considerations about informativity and strength. |
Michele Fornaciai; Paola Binda Effect of saccade automaticity on perisaccadic space compression Journal Article In: Frontiers in Systems Neuroscience, vol. 9, pp. 127, 2015. @article{Fornaciai2015, Briefly presented stimuli occurring just before or during a saccadic eye movement are mislocalized, leading to a compression of visual space toward the target of the saccade. In most cases this has been measured in subjects over-trained to perform a stereotyped and unnatural task where saccades are repeatedly driven to the same location, marked by a highly salient abrupt onset. Here, we asked to what extent the pattern of perisaccadic mislocalization depends on this specific context. We addressed this question by studying perisaccadic localization in a set of participants with no prior experience in eye-movement research, measuring localization performance as they practiced the saccade task. Localization was marginally affected by practice over the course of the experiment and it was indistinguishable from the performance of expert observers. The mislocalization also remained similar when the expert observers were tested in a condition leading to less stereotypical saccadic behavior, with no abrupt onset marking the saccade target location. These results indicate that perisaccadic compression is a robust behavior, insensitive to the specific paradigm used to drive saccades and to the level of practice with the saccade task. |
Anouk J. Brouwer; Jeroen B. J. Smeets; Tjerk P. Gutteling; I. Toni; W. Pieter Medendorp The Müller-Lyer illusion affects visuomotor updating in the dorsal visual stream Journal Article In: Neuropsychologia, vol. 77, pp. 119–127, 2015. @article{Brouwer2015, To guide our actions, the brain has developed mechanisms to code target locations in egocentric coordinates (i.e., with respect to the observer), and to update these when the observer moves. The latter mechanism, called visuomotor updating, is implemented in the dorsal visual stream of the brain. In contrast, the ventral visual stream is assumed to transform target locations into an allocentric reference frame that is highly sensitive to visual contextual illusions. Here, we tested the effect of the Müller-Lyer illusion on visuomotor updating in a double-step saccade task. Using the same paradigm in a 3T fMRI scanner, we investigated the effect of the illusion on the neural correlate of the updating process. Participants briefly viewed the Brentano version of the Müller-Lyer illusion with a target at its middle vertex, while fixating at one of the two endpoints of the illusion. Shortly after the disappearance of the stimulus, the eyes' fixation point moved to a position outside the illusion. After a delay, participants made a saccade to the remembered position of the target. The landing position of this saccade was systematically displaced in a manner congruent with the perceptual illusion, showing that visuomotor updating is affected by the illusion. fMRI results showed that the BOLD response in the occipito-parietal cortex (area V7) and the intraparietal sulcus related to planning of the saccade to the updated target was also modulated by the configuration of the illusion. This suggests that the dorsal visual stream represents perceived rather than physical locations of remembered saccade targets. |
Tom A. Graaf; Felix Duecker; Martin H. P. Fernholz; Alexander T. Sack Spatially specific vs. unspecific disruption of visual orientation perception using chronometric pre-stimulus TMS Journal Article In: Frontiers in Behavioral Neuroscience, vol. 9, pp. 5, 2015. @article{Graaf2015, Transcranial magnetic stimulation (TMS) over occipital cortex can impair visual processing. Such ‘TMS masking' has repeatedly been shown at several stimulus onset asynchronies (SOAs), with TMS pulses generally applied after the onset of a visual stimulus. Following increased interest in the neuronal state-dependency of visual processing, we recently explored the efficacy of TMS at ‘negative SOAs', when no visual processing can yet occur. We could reveal pre-stimulus TMS disruption, with results moreover hinting at two separate mechanisms in occipital cortex biasing subsequent orientation perception. Here we extended this work, including a chronometric design to map the temporal dynamics of spatially specific and unspecific mechanisms of state-dependent visual processing, while moreover controlling for TMS-induced pupil covering. TMS pulses applied 60-40 ms prior to a visual stimulus decreased orientation processing independent of stimulus location, while a local suppressive effect was found for TMS applied 30-10 ms pre-stimulus. These results contribute to our understanding of spatiotemporal mechanisms in occipital cortex underlying the state-dependency of visual processing, providing a basis for future work to link pre-stimulus TMS suppression effects to other known visual biasing mechanisms. |
Victor Lafuente; Mehrdad Jazayeri; Michael N. Shadlen Representation of accumulating evidence for a decision in two parietal areas Journal Article In: Journal of Neuroscience, vol. 35, no. 10, pp. 4306–4318, 2015. @article{Lafuente2015, Decisions are often made by accumulating evidence for and against the alternatives. The momentary evidence represented by sensory neurons is accumulated by downstream structures to form a decision variable, linking the evolving decision to the formation of a motor plan. When decisions are communicated by eye movements, neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence bearing on the potential targets for saccades. We now show that reach-related neurons from the medial intraparietal area (MIP) exhibit a gradual modulation of their firing rates consistent with the representation of an evolving decision variable. When decisions were communicated by saccades instead of reaches, decision-related activity was attenuated in MIP, whereas LIP neurons were active while monkeys communicated decisions by saccades or reaches. Thus, for decisions communicated by a hand movement, a parallel flow of sensory information is directed to parietal areas MIP and LIP during decision formation. |
Stefania Vito; Antimo Buonocore; Jean François Bonnefon; Sergio Della Sala Eye movements disrupt episodic future thinking Journal Article In: Memory, vol. 23, no. 6, pp. 796–805, 2015. @article{Vito2015, Remembering the past and imagining the future both rely on complex mental imagery. We considered the possibility that constructing a future scene might tap a component of mental imagery that is not as critical for remembering past scenes. Whereas visual imagery plays an important role in remembering the past, we predicted that spatial imagery plays a crucial role in imagining the future. For the purpose of teasing apart the different components underpinning scene construction in the two experiences of recalling episodic memories and shaping novel future events, we used a paradigm that might selectively affect one of these components (i.e., the spatial). Participants performed concurrent eye movements while remembering the past and imagining the future. These concurrent eye movements selectively interfere with spatial imagery, while sparing visual imagery. Eye movements prevented participants from imagining complex and detailed future scenes, but had no comparable effect on the recollection of past scenes. Similarities between remembering the past and imagining the future are coupled with some differences. The present findings uncover another fundamental divergence between the two processes. |
Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions Journal Article In: Experimental Brain Research, vol. 233, no. 2, pp. 359–374, 2015. @article{DelleMonache2015, Manual interceptions are known to depend critically on integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0g) or hypergravity (2g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of the amount of time they gazed at different visual targets and of the overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled oculomotor and interceptive behavior differently depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. 
Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response. |
Denton J. DeLoss; Takeo Watanabe; George J. Andersen Improving vision among older adults: Behavioral training to improve sight Journal Article In: Psychological Science, vol. 26, no. 4, pp. 456–466, 2015. @article{DeLoss2015, A major problem for the rapidly growing population of older adults (age 65 and over) is age-related declines in vision, which have been associated with increased risk of falls and vehicle crashes. Research suggests that this increased risk is associated with declines in contrast sensitivity and visual acuity. We examined whether a perceptual-learning task could be used to improve age-related declines in contrast sensitivity. Older and younger adults were trained over 7 days using a forced-choice orientation-discrimination task with stimuli that varied in contrast with multiple levels of additive noise. Older adults performed as well after training as did college-age younger adults prior to training. Improvements transferred to performance for an untrained stimulus orientation and were not associated with changes in retinal illuminance. Improvements in far acuity in younger adults and in near acuity in older adults were also found. These findings indicate that behavioral interventions can greatly improve visual performance for older adults. |
Loni Desanghere; Jonathan J. Marotta The influence of object shape and center of mass on grasp and gaze Journal Article In: Frontiers in Psychology, vol. 6, pp. 1537, 2015. @article{Desanghere2015, Recent experiments examining where participants look when grasping an object found that fixations favour the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations such as the visual analysis of object contours, surface properties, knowledge of an object's function and center of mass (COM) location, these investigations have generally used simple symmetrical objects – where COM and horizontal midline overlap. Less research has been aimed at looking at how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the object's horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the object's midline through alterations of the object's shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also found to be more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, which would allow both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects when compared to their influence on grasp locations, with fixation locations more sensitive to these manipulations. 
Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction. |
Pierce Edmiston; Gary Lupyan What makes words special? Words as unmotivated cues Journal Article In: Cognition, vol. 143, pp. 93–100, 2015. @article{Edmiston2015, Verbal labels, such as the words "dog" and "guitar," activate conceptual knowledge more effectively than corresponding environmental sounds, such as a dog bark or a guitar strum, even though both are unambiguous cues to the categories of dogs and guitars (Lupyan & Thompson-Schill, 2012). We hypothesize that this advantage of labels emerges because word-forms, unlike other cues, do not vary in a motivated way with their referent. The sound of a guitar cannot help but inform a listener to the type of guitar making it (electric, acoustic, etc.). The word "guitar" on the other hand, can leave the type of guitar unspecified. We argue that as a result, labels gain the ability to cue a more abstract mental representation, promoting efficient processing of category members. In contrast, environmental sounds activate representations that are more tightly linked to the specific cause of the sound. Our results show that upon hearing environmental sounds such as a dog bark or guitar strum, people cannot help but activate a particular instance of a category, in a particular state, at a particular time, as measured by patterns of response times on cue-picture matching tasks (Exps. 1-2) and eye-movements in a task where the cues are task-irrelevant (Exp. 3). In comparison, labels activate concepts in a more abstract, decontextualized way-a difference that we argue can be explained by labels acting as "unmotivated cues". |
S. Gareth Edwards; Lisa J. Stephenson; Mario Dalmaso; Andrew P. Bayliss Social orienting in gaze leading: A mechanism for shared attention Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 282, no. 1812, pp. 1–8, 2015. @article{Edwards2015, Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to 'gaze following', attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that 'follows' the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish 'shared attention' and maintain the ongoing interaction. |
Caroline Ego; Jean-Jacques Orban de Xivry; Marie-Cécile Nassogne; Demet Yüksel; Philippe Lefèvre Spontaneous improvement in oculomotor function of children with cerebral palsy Journal Article In: Research in Developmental Disabilities, vol. 36, pp. 630–644, 2015. @article{Ego2015, Eye movements are essential to get a clear vision of moving objects. In the present study, we assessed quantitatively the oculomotor deficits of children with cerebral palsy (CP). We recorded eye movements of 51 children with cerebral palsy (aged 5-16 years) with relatively mild motor impairment and compared their performance with age-matched control and premature children. Overall, eye movements of children with CP are unexpectedly close to those of controls even though some oculomotor parameters are biased by the side of hemiplegia. Importantly, the difference in performance between children with CP and controls decreases with age, demonstrating that the oculomotor function of children with CP develops as fast as or even faster than controls for some visual tracking parameters. That is, oculomotor function spontaneously improves over the course of childhood. This evolution highlights the ability of the lesioned brain of children with CP to compensate for impaired motor function beyond what would be achieved by normal development on its own. |
Benedikt V. Ehinger; Peter König; José P. Ossandón Predictions of visual content across eye movements and their modulation by inferred information Journal Article In: Journal of Neuroscience, vol. 35, no. 19, pp. 7403–7413, 2015. @article{Ehinger2015, The brain is proposed to operate through probabilistic inference, testing and refining predictions about the world. Here, we search for neural activity compatible with the violation of active predictions, learned from the contingencies between actions and the consequent changes in sensory input. We focused on vision, where eye movements produce stimuli shifts that could, in principle, be predicted. We compared, in humans, error signals to saccade-contingent changes of veridical and inferred inputs by contrasting the electroencephalographic activity after saccades to a stimulus presented inside or outside the blind spot. We observed early (<250 ms) and late (>250 ms) error signals after stimulus change, indicating the violation of sensory and associative predictions, respectively. Remarkably, the late response was diminished for blind-spot trials. These results indicate that predictive signals occur across multiple levels of the visual hierarchy, based on generative models that differentiate between signals that originate from the outside world and those that are inferred. |
Abdurahman S. Elkhetali; Ryan J. Vaden; Sean M. Pool; Kristina M. Visscher Early visual cortex reflects initiation and maintenance of task set Journal Article In: NeuroImage, vol. 107, pp. 277–288, 2015. @article{Elkhetali2015, The human brain is able to process information flexibly, depending on a person's task. The mechanisms underlying this ability to initiate and maintain a task set are not well understood, but they are important for understanding the flexibility of human behavior and developing therapies for disorders involving attention. Here we investigate the differential roles of early visual cortical areas in initiating and maintaining a task set. Using functional Magnetic Resonance Imaging (fMRI), we characterized three different components of task set-related, but trial-independent activity in retinotopically mapped areas of early visual cortex, while human participants performed attention-demanding visual or auditory tasks. These trial-independent effects reflected: (1) maintenance of attention over a long duration, (2) orienting to a cue, and (3) initiation of a task set. Participants performed tasks that differed in the modality of stimulus to be attended (auditory or visual) and in whether there was a simultaneous distractor (auditory only, visual only, or simultaneous auditory and visual). We found that patterns of trial-independent activity in early visual areas (V1, V2, V3, hV4) depend on attended modality, but not on stimuli. Further, different early visual areas play distinct roles in the initiation of a task set. In addition, activity associated with maintaining a task set tracks with a participant's behavior. These results show that trial-independent activity in early visual cortex reflects initiation and maintenance of a person's task set. |
Erica M. Ellis; Arielle Borovsky; Jeffrey L. Elman; Julia L. Evans Novel word learning: An eye-tracking study. Are 18-month-old late talkers really different from their typical peers? Journal Article In: Journal of Communication Disorders, vol. 58, pp. 43–157, 2015. @article{Ellis2015, Infants, 18-24 months old who have difficulty learning words compared to their peers are often referred to as "late talkers" (LTs). These children are at risk for continued language delays as they grow older. One critical question is how to best identify which LTs will have language disorders, such as Specific Language Impairment (SLI) at school age, in order to maximize the opportunity for early and appropriate intervention and support. Recent research suggests that LTs are not only slower to learn and speak words than their peers, but are also slower to recognize and interpret known words in real time. This investigation examined online moment-by-moment processing of novel word learning in 18-month-olds. A low vocabulary, late talking group (LT |
Ralf Engbert; Hans A. Trukenbrod; Simon Barthelmé; Felix A. Wichmann Spatial statistics and attentional dynamics in scene viewing Journal Article In: Journal of Vision, vol. 15, no. 1, pp. 1–17, 2015. @article{Engbert2015, In humans and in foveated animals, visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically-inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths nonetheless contain important statistical structure, such as spatial clustering, on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies, first, on activation dynamics via spatially-limited (foveated) access to saliency information, and, second, on a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data. |
Ian Donovan; Sarit F. A. Szpiro; Marisa Carrasco Exogenous attention facilitates location transfer of perceptual learning Journal Article In: Journal of Vision, vol. 15, no. 10, pp. 1–16, 2015. @article{Donovan2015, Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be mostly specific to the trained retinal location, which is considered as evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that transfer of learning to untrained locations can occur under some specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments, and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that is able to overcome the major limitation of location specificity. |
Jakub Dotlačil; Adrian Brasoveanu The manner and time course of updating quantifier scope representations in discourse Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 3, pp. 305–323, 2015. @article{Dotlacil2015, We present the results of two experiments, an eye-tracking study and a follow-up self-paced reading study, investigating the interpretation of quantifier scope in sentences with three quantifiers: two indefinites in subject and object positions and a universal distributive quantifier in adjunct position. In addition to the fact that such three-way scope interactions have not been experimentally investigated before, they enable us to distinguish between different theories of quantifier scope interpretation in ways that are not possible when only simpler, two-way interactions are considered. The experiments show that contrary to underspecification theories of scope, a totally ordered scope-hierarchy representation is maintained and modified across sentences and this scope representation cannot be reduced to the truth-conditional/mental model representation of sentential meaning. The experiments also show that the processor uses scope-disambiguating information as early as possible to (re)analyze scope representation. |
Caroline B. Drucker; Monica L. Carlson; Koji Toda; Nicholas K. DeWind; Michael L. Platt Non-invasive primate head restraint using thermoplastic masks. Journal Article In: Journal of Neuroscience Methods, vol. 253, pp. 90–100, 2015. @article{Drucker2015, Background: The success of many neuroscientific studies depends upon adequate head fixation of awake, behaving animals. Typically, this is achieved by surgically affixing a head-restraint prosthesis to the skull. New Method: Here we report the use of thermoplastic masks to non-invasively restrain monkeys' heads. Mesh thermoplastic sheets become pliable when heated and can then be molded to an individual monkey's head. After cooling, the custom mask retains this shape indefinitely for day-to-day use. Results: We successfully trained rhesus macaques (Macaca mulatta) to perform cognitive tasks while wearing thermoplastic masks. Using these masks, we achieved a level of head stability sufficient for high-resolution eye-tracking and intracranial electrophysiology. Comparison with Existing Method: Compared with traditional head-posts, we find that thermoplastic masks perform at least as well during infrared eye-tracking and single-neuron recordings, allow for clearer magnetic resonance image acquisition, enable freer placement of a transcranial magnetic stimulation coil, and impose lower financial and time costs on the lab. Conclusions: We conclude that thermoplastic masks are a viable non-invasive form of primate head restraint that enables a wide range of neuroscientific experiments. |
Jon Andoni Duñabeitia; Albert Costa Lying in a native and foreign language Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 4, pp. 1124–1129, 2015. @article{Dunabeitia2015, This study explores the interaction between deceptive language and second language processing. One hundred participants were asked to produce veridical and false statements in either their first or second language. Pupil size, speech latencies, and utterance durations were analyzed. Results showed additive effects of statement veracity and the language in which these statements were produced. That is, false statements elicited larger pupil dilations and longer naming latencies compared with veridical statements, and statements in the foreign language elicited larger pupil dilations and longer speech durations compared with those in the first language. Importantly, these two effects did not interact, suggesting that the processing cost associated with deception is similar in a native and foreign language. The theoretical implications of these observations are discussed. |
Matt J. Dunn; Tom H. Margrain; J. Margaret Woodhouse; Jonathan T. Erichsen Visual processing in infantile nystagmus is not slow Journal Article In: Investigative Ophthalmology & Visual Science, vol. 56, no. 9, pp. 5094–5101, 2015. @article{Dunn2015, PURPOSE: Treatments for infantile nystagmus (IN) sometimes elicit subjective reports of improved visual function, yet quantifiable improvements in visual acuity, if any, are often negligible. One possibility is that these subjective "improvements" may relate to temporal, rather than spatial, visual function. This study aimed to ascertain the extent to which "time to see" might be increased in nystagmats, as compared to normally sighted controls. By assessing both eye movement and response time data, it was possible to determine whether delays in "time to see" were due solely to the eye movements, or to an underlying deficit in visual processing. METHODS: The time taken to respond to the orientation of centrally and peripherally presented gratings was measured in subjects with IN and normally sighted controls (both groups: n = 11). For each vertically displaced grating, the time until the target-acquiring saccade was determined, as was the time from the saccade until the subject's response. RESULTS: Nystagmats took approximately 60 ms longer than controls to execute target-acquiring saccades to vertically displaced targets (P = 0.010). However, the time from the end of the saccade until subjects responded was not significantly different between groups (P = 0.37). Despite this, nystagmats took longer to respond to gratings presented at fixation. CONCLUSIONS: Individuals with IN took longer to direct their gaze toward objects of interest. However, once a target was foveated, the time taken to process visual information and respond did not appear to differ from that of control subjects. Therefore, conscious visual processing in IN is not slow. |
Muriel Dysli; Fabian Keller; Mathias Abegg Acute onset incomitant image disparity modifies saccadic and vergence eye movements Journal Article In: Journal of Vision, vol. 15, no. 3, pp. 1–15, 2015. @article{Dysli2015, New-onset impairment of ocular motility will cause incomitant strabismus, i.e., a gaze-dependent ocular misalignment. This ocular misalignment will cause retinal disparity, that is, a deviation of the spatial position of an image on the retina of both eyes, which is a trigger for a vergence eye movement that results in ocular realignment. If the vergence movement fails, the eyes remain misaligned, resulting in double vision. Adaptive processes to such incomitant vergence stimuli are poorly understood. In this study, we have investigated the physiological oculomotor response of saccadic and vergence eye movements in healthy individuals after shifting gaze from a viewing position without image disparity into a field of view with increased image disparity, thus in conditions mimicking incomitance. Repetitive saccadic eye movements into a visual field with increased stimulus disparity lead to a rapid modification of the oculomotor response: (a) Saccades showed immediate disconjugacy (p < 0.001) resulting in decreased retinal image disparity at the end of a saccade. (b) Vergence kinetics improved over time (p < 0.001). This modified oculomotor response enables a more prompt restoration of ocular alignment in new-onset incomitance. |
R. Becket Ebitz; Michael L. Platt Neuronal activity in primate dorsal anterior cingulate cortex signals task conflict and predicts adjustments in pupil-linked arousal Journal Article In: Neuron, vol. 85, no. 3, pp. 628–640, 2015. @article{Ebitz2015, Whether driving a car, shopping for food, or paying attention in a classroom of boisterous teenagers, it is often hard to maintain focus on goals in the face of distraction. Brain imaging studies in humans implicate the dorsal anterior cingulate cortex (dACC) in regulating the conflict between goals and distractors. Here we show that single dACC neurons signal conflict between task goals and distractors in the rhesus macaque, particularly for biologically relevant social stimuli. For some neurons, task conflict signals predicted subsequent changes in pupil size, a peripheral index of arousal linked to noradrenergic tone, associated with reduced distractor interference. dACC neurons also responded to errors, and these signals predicted adjustments in pupil size. These findings provide the first neurophysiological endorsement of the hypothesis that dACC regulates conflict, in part, via modulation of pupil-linked processes such as arousal. |
Miguel P. Eckstein; Wade Schoonveld; Sheng Zhang; Stephen C. Mack; Emre Akbas Optimal and human eye movements to clustered low value cues to increase decision rewards during search Journal Article In: Vision Research, vol. 113, pp. 137–154, 2015. @article{Eckstein2015, Rewards have important influences on the motor planning of primates and the firing of neurons coding visual information and action. When eye movements to a target are differentially rewarded across locations, primates execute saccades towards the possible target location with the highest expected value, a product of sensory evidence and potentially earned reward (saccade to maximum expected value model, sMEV). Yet, in the natural world eye movements are not directly rewarded. Their role is to gather information to support subsequent rewarded search decisions and actions. Less is known about the effects of decision rewards on saccades. We show that when varying the decision rewards across cued locations following visual search, humans can plan their eye movements to increase decision rewards. Critically, we report a scenario for which five of seven tested humans do not preferentially deploy saccades to the possible target location with the highest reward, a strategy which is optimal when rewarding eye movements. Instead, these humans make saccades towards lower value but clustered locations when this strategy optimizes decision rewards consistent with the preferences of an ideal Bayesian reward searcher that takes into account the visibility of the target across eccentricities. The ideal reward searcher can be approximated with a sMEV model with pooling of rewards from spatially clustered locations. We also find observers with systematic departures from the optimal strategy and inter-observer variability of eye movement plans. 
These deviations often reflect multiplicity of fixation strategies that lead to near optimal decision rewards but, for some observers, it relates to suboptimal choices in eye movement planning. |
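The saccade-to-maximum-expected-value (sMEV) rule and its reward-pooling variant described in the abstract above can be sketched as follows. This is an illustrative toy, not the paper's model code: the visibility values, rewards, and cluster structure are made-up numbers, and the pooling scheme is a simplified stand-in for the ideal Bayesian reward searcher.

```python
import numpy as np

def smev_fixation(visibility, rewards):
    """Pick the location whose expected value (visibility x reward) is highest."""
    expected_value = np.asarray(visibility) * np.asarray(rewards)
    return int(np.argmax(expected_value))

def smev_pooled(visibility, rewards, clusters):
    """Variant in which each location also pools reward from clustered neighbors,
    approximating a searcher that exploits spatially clustered low-value cues."""
    pooled = [rewards[i] + sum(rewards[j] for j in clusters[i])
              for i in range(len(rewards))]
    return smev_fixation(visibility, pooled)

# Hypothetical example: location 0 is most visible, locations 1-3 are clustered.
visibility = [0.9, 0.6, 0.4, 0.3]          # d'-like detectability per location
rewards = [1.0, 1.0, 1.0, 1.0]             # decision reward per location
clusters = {0: [], 1: [2, 3], 2: [1, 3], 3: [1, 2]}

plain_choice = smev_fixation(visibility, rewards)   # highest-visibility location
pooled_choice = smev_pooled(visibility, rewards, clusters)  # clustered location
```

With pooled rewards, the model fixates a lower-visibility but clustered location, mirroring the paper's finding that observers saccade toward clustered low-value cues when that maximizes decision reward.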
2014 |
Douglas A. Ruff; Marlene R. Cohen Attention can either increase or decrease spike count correlations in visual cortex Journal Article In: Nature Neuroscience, vol. 17, no. 11, pp. 1591–1597, 2014. @article{Ruff2014a, Visual attention enhances the responses of visual neurons that encode the attended location. Several recent studies have shown that attention also decreases correlations between fluctuations in the responses of pairs of neurons (termed spike count correlation or r(SC)). These results are consistent with two hypotheses. First, attention-related changes in rate and r(SC) might be linked (perhaps through a common mechanism), with attention always decreasing r(SC). Second, attention might either increase or decrease r(SC), possibly depending on the role of the neurons in the behavioral task. We recorded simultaneously from dozens of neurons in area V4 while monkeys performed a discrimination task. We found strong evidence in favor of the second hypothesis, showing that attention can flexibly increase or decrease correlations depending on whether the neurons provide evidence for the same or opposite choices. These results place important constraints on models of the neuronal mechanisms underlying cognitive factors. |
Shery Thomas; Mervyn G. Thomas; Caroline Andrews; Wai-Man Chan; Frank A. Proudlock; Rebecca J. McLean; Archana Pradeep; Elizabeth C. Engle; Irene Gottlob Autosomal-dominant nystagmus, foveal hypoplasia and presenile cataract associated with a novel PAX6 mutation Journal Article In: European Journal of Human Genetics, vol. 22, no. 3, pp. 344–349, 2014. @article{Thomas2014a, Autosomal-dominant idiopathic infantile nystagmus has been linked to 6p12 (OMIM 164100), 7p11.2 (OMIM 608345) and 13q31-q33 (OMIM 193003). PAX6 (11p13, OMIM 607108) mutations can also cause autosomal-dominant nystagmus, typically in association with aniridia or iris hypoplasia. We studied a large multigenerational white British family with autosomal-dominant nystagmus, normal irides and presenile cataracts. An SNP-based genome-wide analysis revealed a linkage to a 13.4-MB region on chromosome 11p13 with a maximum lod score of 2.93. A mutation analysis of the entire coding region and splice junctions of the PAX6 gene revealed a novel heterozygous missense mutation (c.227C>G) that segregated with the phenotype and is predicted to result in the amino-acid substitution of proline by arginine at codon 76 p.(P76R). The amino-acid variation p.(P76R) within the paired box domain is likely to destabilise the protein due to steric hindrance as a result of the introduction of a polar and larger amino acid. Eye movement recordings showed a significant intrafamilial variability of horizontal, vertical and torsional nystagmus. High-resolution in vivo imaging of the retina using optical coherence tomography (OCT) revealed features of foveal hypoplasia, including rudimentary foveal pit, incursion of inner retinal layers, short photoreceptor outer segments and optic nerve hypoplasia. Thus, this study presents a family that segregates a PAX6 mutation with nystagmus and foveal hypoplasia in the absence of iris abnormalities. 
Moreover, it is the first study showing detailed characteristics using eye movement recordings of autosomal-dominant nystagmus in a multigenerational family with a novel PAX6 mutation. |
Rasmus Aamand; Yi-Ching Lynn Ho; Thomas Dalsgaard; Andreas Roepstorff; Torben E. Lund Dietary nitrate facilitates an acetazolamide-induced increase in cerebral blood flow during visual stimulation Journal Article In: Journal of Applied Physiology, vol. 116, no. 3, pp. 267–273, 2014. @article{Aamand2014, The carbonic anhydrase (CA) inhibitor acetazolamide (AZ) is used routinely to estimate cerebrovascular reserve capacity in patients, as it reliably increases cerebral blood flow (CBF). However, the mechanism by which AZ accomplishes this CBF increase is not entirely understood. We recently discovered that CA can produce nitric oxide (NO) from nitrite, and that AZ enhances this NO production in vitro. In fact, this interaction between AZ and CA accounted for a large part of AZ's vasodilatory action, which fits well with the known vasodilatory potency of NO. The present study aimed to assess whether AZ acts similarly in vivo in the human cerebrovascular system. Hence, we increased or minimized the dietary intake of nitrate in 20 healthy male participants, showed them a full-field flickering dartboard, and measured their CBF response to this visual stimulus with arterial spin labeling. Doing so, we found a significant positive interaction between the dietary intake of nitrate and the CBF modulation afforded by AZ during visual stimulation. In addition, but contrary to studies conducted in elderly participants, we report no effect of nitrate intake on resting CBF in healthy human participants. The present study provides in vivo support for an enhancing effect of AZ on the NO production from nitrite catalyzed by CA in the cerebrovascular system. Furthermore, our results, in combination with the results of other groups, indicate that nitrate may have significant importance to vascular function when the cerebrovascular system is challenged by age or disease. |
Jennifer Olejarczyk; Steven G. Luke; John M. Henderson Incidental memory for parts of scenes from eye movements Journal Article In: Visual Cognition, vol. 22, no. 7, pp. 975–995, 2014. @article{Olejarczyk2014, Incidental memory for parts of scenes was examined in two search experiments and one memory control experiment. Eye movements were recorded during the search experiments and used to select gaze-contingent sections from search scenes for a surprise memory recognition task. Results from the recognition task showed incidental memory was better for sections viewed longer and with multiple fixations. Sections not fixated during search were still recognized above chance as well. Differences in sections did not affect memory performance in a control experiment when viewing time was held constant. These results show that memory for parts of scenes can occur incidentally during search and encoding of tested sections is better with longer viewing time and with multiple fixations. |
Rosanna K. Olsen; Mark Chiew; Bradley R. Buchsbaum; Jennifer D. Ryan The relationship between delay period eye movements and visuospatial memory Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–11, 2014. @article{Olsen2014, We investigated whether overt shifts of attention were associated with visuospatial memory performance. Participants were required to study the locations of a set of visual objects and subsequently detect changes to the spatial location of one of the objects following a brief delay period. Relational information regarding the locations among all of the objects could be used to support performance on the task (Experiment 1) or relational information was removed during test and location manipulation judgments had to be made for a singly presented target item (Experiment 2). We computed the similarity of the fixation patterns in space during the study phase to the fixations made during the delay period. Greater fixation pattern similarity across participants was associated with higher accuracy when relational information was available at test (Experiment 1); however, this association was not observed when the target item was presented in isolation during the test display (Experiment 2). Similarly, increased fixation pattern similarity on a given trial (within participants) was associated with successful task performance when the relations among studied items could be used for comparison (Experiment 1), but not when memory for absolute spatial location was assessed (Experiment 2). This pattern of behavior and performance on the two tasks suggested that eye movements facilitated memory for the relationships among objects. Shifts of attention through eye movements may provide a mechanism for the maintenance of relational visuospatial memory. |
Selim Onat; Alper Açik; Frank Schumann; Peter König The contributions of image content and behavioral relevancy to overt attention Journal Article In: PLoS ONE, vol. 9, no. 4, pp. e93254, 2014. @article{Onat2014, During free-viewing of natural scenes, eye movements are guided by bottom-up factors inherent to the stimulus, as well as top-down factors inherent to the observer. The question of how these two different sources of information interact and contribute to fixation behavior has recently received a lot of attention. Here, a battery of 15 visual stimulus features was used to quantify the contribution of stimulus properties during free-viewing of 4 different categories of images (Natural, Urban, Fractal and Pink Noise). Behaviorally relevant information was estimated in the form of topographical interestingness maps by asking an independent set of subjects to click at image regions that they subjectively found most interesting. Using a Bayesian scheme, we computed saliency functions that described the probability of a given feature to be fixated. In the case of stimulus features, the precise shape of the saliency functions was strongly dependent upon image category and overall the saliency associated with these features was generally weak. When testing multiple features jointly, a linear additive integration model of individual saliencies performed satisfactorily. We found that the saliency associated with interesting locations was much higher than any low-level image feature and any pair-wise combination thereof. Furthermore, the low-level image features were found to be maximally salient at those locations that had already high interestingness ratings. Temporal analysis showed that regions with high interestingness ratings were fixated as early as the third fixation following stimulus onset. Paralleling these findings, fixation durations were found to be dependent mainly on interestingness ratings and to a lesser extent on the low-level image features. 
Our results suggest that both low- and high-level sources of information play a significant role during exploration of complex scenes with behaviorally relevant information being more effective compared to stimulus features. |
K. Ooms; Philippe De Maeyer; V. Fack Study of the attentive behavior of novice and expert map users using eye tracking Journal Article In: Cartography and Geographic Information Science, vol. 41, no. 1, pp. 37–54, 2014. @article{Ooms2014, The aim of this paper is to gain better understanding of the way map users read and interpret the visual stimuli presented to them and how this can be influenced. In particular, the difference between expert and novice map users is considered. In a user study, the participants studied four screen maps which had been manipulated to introduce deviations. The eye movements of 24 expert and novice participants were tracked, recorded, and analyzed (both visually and statistically) based on a grid of Areas of Interest. These visual analyses are essential for studying the spatial dimension of maps to identify problems in design. In this research, we used visualization of eye movement metrics (fixation count and duration) in a 2D and 3D grid and a statistical comparison of the grid cells. The results show that the users' eye movements clearly reflect the main elements on the map. The users' attentive behavior is influenced by deviating colors, as their attention is drawn to it. This could also influence the users' interpretation process. Both user groups encountered difficulties when trying to interpret and store map objects that were mirrored. Insights into how different types of map users read and interpret map content are essential in this fast-evolving era of digital cartographic products. |
Isabel Orenes; David Beltrán; Carlos Santamaría How negation is understood: Evidence from the visual world paradigm Journal Article In: Journal of Memory and Language, vol. 74, pp. 36–45, 2014. @article{Orenes2014, This paper explores how negation (e.g., the figure is not red) is understood using the visual world paradigm. Our hypothesis is that people will switch to the alternative affirmative (e.g., a green figure) whenever possible, but will be able to maintain the negated argument (e.g., a non-red figure) when needed. To test this, we presented either a specific verbal context (binary: the figure could be red or green) or an unspecified verbal context (multary: the figure could be red or green or yellow or blue). Then, affirmative and negative sentences (e.g., the figure is (not) red) were heard while four figures were shown on the screen and eye movements were monitored. We found that people shifted their visual attention toward the alternative in the binary context, but focused on the negated argument in the multary context. Our findings corroborated our hypothesis and shed light on two issues that are currently under debate about how negation is represented and processed. Regarding representation, our results support the ideas that (1) the negative operator plays a role in the mental representation, and consequently a symbolic representation of negation is possible, and (2) it is not necessary to use a two-step process to represent and understand negation. |
Tania Ortuno; Kenneth L. Grieve; Ricardo Cao; Javier Cudeiro; Casto Rivadulla Bursting thalamic responses in awake monkey contribute to visual detection and are modulated by corticofugal feedback Journal Article In: Frontiers in Behavioral Neuroscience, vol. 8, pp. 198, 2014. @article{Ortuno2014, The lateral geniculate nucleus is the gateway for visual information en route to the visual cortex. Neural activity is characterized by the existence of two firing modes: burst and tonic. Originally associated with sleep, bursts have now been postulated to be a part of the normal visual response, structured to increase the probability of cortical activation, able to act as a "wake-up" call to the cortex. We investigated a potential role for burst in the detection of novel stimuli by recording neuronal activity in the lateral geniculate nucleus (LGN) of behaving monkeys during a visual detection task. Our results show that bursts are often the neuron's first response, and are more numerous in the response to attended target stimuli than to unattended distractor stimuli. Bursts are indicators of the task novelty, as repetition decreased bursting. Because the primary visual cortex is the major modulatory input to the LGN, we compared the results obtained in control conditions with those observed when cortical activity was reduced by TMS. This cortical deactivation reduced visual response related bursting by 90%. These results highlight a novel role for the thalamus, able to code higher order image attributes as important as novelty early in the thalamo-cortical conversation. |
Jorge Otero-Millan; Jose L. Alba Castro; Stephen L. Macknik; Susana Martinez-Conde Unsupervised clustering method to detect microsaccades Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–17, 2014. @article{OteroMillan2014, Microsaccades, small involuntary eye movements that occur once or twice per second during attempted visual fixation, are relevant to perception, cognition, and oculomotor control and present distinctive characteristics in visual and oculomotor pathologies. Thus, the development of robust and accurate microsaccade-detection techniques is important for basic and clinical neuroscience research. Due to the diminutive size of microsaccades, however, automatic and reliable detection can be difficult. Current challenges in microsaccade detection include reliance on set, arbitrary thresholds and lack of objective validation. Here we describe a novel microsaccade-detecting method, based on unsupervised clustering techniques, that does not require an arbitrary threshold and provides a detection reliability index. We validated the new clustering method using real and simulated eye-movement data. The clustering method reduced detection errors by 62% for binocular data and 78% for monocular data, when compared to standard contemporary microsaccade-detection techniques. Further, the clustering method's reliability index was correlated with the microsaccade-detection error rate, suggesting that the reliability index may be used to determine the comparative precision of eye-tracking devices. |
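A toy sketch in the spirit of the threshold-free approach described above: instead of applying a fixed velocity threshold, candidate eye-movement events are split into "drift-like" and "microsaccade-like" groups by a simple two-cluster partition of their peak velocities. The data and the one-dimensional k-means below are illustrative only and much simpler than the paper's actual feature set and clustering pipeline.

```python
import numpy as np

def two_means_1d(values, iters=50):
    """Partition 1-D values into two clusters (a minimal k=2 k-means).
    Returns a boolean mask marking members of the high-valued cluster."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()          # initialize centroids at the extremes
    for _ in range(iters):
        high = np.abs(v - hi) < np.abs(v - lo)   # assign to nearest centroid
        new_lo, new_hi = v[~high].mean(), v[high].mean()
        if new_lo == lo and new_hi == hi:        # converged
            break
        lo, hi = new_lo, new_hi
    return high

# Synthetic peak velocities (deg/s) of candidate events during fixation:
# small values resemble drift, large values resemble microsaccades.
peak_velocities = np.array([3.1, 2.8, 40.0, 3.5, 55.2, 2.9, 48.7])
is_microsaccade = two_means_1d(peak_velocities)
```

The appeal of this style of approach, as the abstract notes, is that no arbitrary velocity threshold has to be chosen in advance; the separation falls out of the structure of the data itself.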
Jorge Otero-Millan; Stephen L. Macknik; Susana Martinez-Conde Fixational eye movements and binocular vision Journal Article In: Frontiers in Integrative Neuroscience, vol. 8, pp. 52, 2014. @article{OteroMillan2014a, During attempted visual fixation, small involuntary eye movements, called fixational eye movements, continuously change our gaze position. Disagreement between the left and right eye positions during such motions can produce diplopia (double vision). Thus, the ability to properly coordinate the two eyes during gaze fixation is critical for stable perception. For the last 50 years, researchers have studied the binocular characteristics of fixational eye movements. Here we review classical and recent studies on the binocular coordination (i.e., degree of conjugacy) of each fixational eye movement type: microsaccades, drift, and tremor, and its perceptual contribution to increasing or reducing binocular disparity. We also discuss how amblyopia and other visual pathologies affect the binocular coordination of fixational eye movements. |
Julie Mercier; Irina Pivneva; Debra Titone Individual differences in inhibitory control relate to bilingual spoken word processing Journal Article In: Bilingualism: Language and Cognition, vol. 17, no. 1, pp. 89–117, 2014. @article{Mercier2014, We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., field) and looked at pictures corresponding to the target, a within-language competitor (feet), a French cross-language competitor (fille "girl"), or both, and unrelated filler pictures. We derived cognitive and oculomotor inhibitory control measures from a battery of inhibitory control tasks. Increased cognitive inhibitory control was linked to less within-language competition for all bilinguals, and less cross-language competition for native French low-English-exposure bilinguals. Increased oculomotor inhibitory control was linked to less within-language competition for all native French bilinguals, and less cross-language competition for native French low-English-exposure bilinguals. The results extend previous findings (Blumenfeld & Marian, 2011), and suggest that individual differences in inhibitory control relate to bilingual spoken word processing. |
Julia D. I. Meuwese; H. Steven Scholte; Victor A. F. Lamme Latent memory of unattended stimuli reactivated by practice: An fMRI study on the role of consciousness and attention in learning Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e90098, 2014. @article{Meuwese2014, Although we can only report about what is in the focus of our attention, much more than that is actually processed. And even when attended, stimuli may not always be reportable, for instance when they are masked. A stimulus can thus be unreportable for different reasons: the absence of attention or the absence of a conscious percept. But to what extent does the brain learn from exposure to these unreportable stimuli? In this fMRI experiment subjects were exposed to textured figure-ground stimuli, of which reportability was manipulated either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). One day later learning was assessed neurally and behaviorally. Positive neural learning effects were found for stimuli presented in the inattention paradigm; for attended yet masked stimuli negative adaptation effects were found. Interestingly, these inattentional learning effects only became apparent in a second session after a behavioral detection task had been administered during which performance feedback was provided. This suggests that the memory trace that is formed during inattention is latent until reactivated by behavioral practice. However, no behavioral learning effects were found, therefore we cannot conclude that perceptual learning has taken place for these unattended stimuli. |
R. Chris Miall; Se-Ho Nam; J. Tchalenko The influence of stimulus format on drawing-a functional imaging study of decision making in portrait drawing Journal Article In: NeuroImage, vol. 102, pp. 608–619, 2014. @article{Miall2014, To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye-hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. |
Paul G. Middlebrooks; Jeffrey D. Schall Response inhibition during perceptual decision making in humans and macaques Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 2, pp. 353–366, 2014. @article{Middlebrooks2014, Response inhibition in stop signal tasks has been explained as the outcome of a race between GO and STOP processes (e.g., Logan, 1981). Response choice in two-alternative perceptual categorization tasks has been explained as the outcome of an accumulation of evidence for the alternative responses. To begin unifying these two powerful investigation frameworks, we obtained data from humans and macaque monkeys performing a stop signal task with responses guided by perceptual categorization and variable degrees of difficulty, ranging from low to high accuracy. Comparable results across species reinforced the validity of this animal model. Response times and errors increased with categorization difficulty. The probability of failing to inhibit responses on stop signal trials increased with stop signal delay, and the response times for failed stop signal trials were shorter than those for trials with no stop signal. Thus, the Logan race model could be applied to estimate the duration of the stopping process. We found that the duration of the STOP process did not vary across a wide range of discrimination accuracies. This is consistent with the functional, and possibly mechanistic, independence of choice and inhibition mechanisms. |
Sébastien Miellet; Roberto Caldara; Christopher Gillberg; Monika Raju; Helen Minnis Disinhibited reactive attachment disorder symptoms impair social judgements from faces Journal Article In: Psychiatry Research, vol. 215, no. 3, pp. 747–752, 2014. @article{Miellet2014, Typically developing adults and children can rapidly reach consensus regarding the trustworthiness of unfamiliar faces. Maltreated children can have problems with trusting others, yet those with the disinhibited form of reactive attachment disorder (dRAD) can be indiscriminately friendly. Whether children with dRAD symptoms appraise and conform to typical judgements about trustworthiness of faces is still unknown. We recorded eye movements of 10 maltreated dRAD children and 10 age and gender matched typically developing control children while they made social judgements from faces. Children were presented with a series of pairs of faces previously judged by adults to have high or low attractiveness or trustworthiness ratings. Typically developing children reached a consensus regarding which faces were the most trustworthy and attractive. There was less agreement among the children with dRAD symptoms. Judgments from the typically developing children showed a strong correlation between the attractiveness and trustworthiness tasks. This was not the case for the dRAD group, who showed less agreement and no significant correlation between trustworthiness and attractiveness judgments. Finally, both groups of children sampled the eye region to perform social judgments. Our data offer a unique insight in children with dRAD symptoms, providing novel and important knowledge for their rehabilitation. |
M. Miller; L. Chukoskie; M. Zinni; Jeanne Townsend; D. Trauner Dyspraxia, motor function and visual–motor integration in autism Journal Article In: Behavioural Brain Research, vol. 269, no. 4, pp. 95–102, 2014. @article{Miller2014, This project assessed dyspraxia in high-functioning school-aged children with autism with a focus on Ideational Praxis. We examined the association of specific underlying motor function including eye movement with ideational dyspraxia (sequences of skilled movements) as well as the possible role of visual-motor integration in dyspraxia. We found that compared to IQ-, sex- and age-matched typically developing children, the children with autism performed significantly worse on: Ideational and Buccofacial praxis; a broad range of motor tests, including measures of simple motor skill, timing and accuracy of saccadic eye movements and motor coordination; and tests of visual-motor integration. Impairments in individual children with autism were heterogeneous in nature, although when we examined the praxis data as a function of a qualitative measure representing motor timing, we found that children with poor motor timing performed worse on all praxis categories and had slower and less accurate eye movements, while those with regular timing performed as well as typical children on those same tasks. Our data provide evidence that both motor function and visual-motor integration contribute to dyspraxia. We suggest that dyspraxia in autism involves cerebellar mechanisms of movement control and the integration of these mechanisms with cortical networks implicated in praxis. |
Mark Mills; Kevin B. Smith; John R. Hibbing; Michael D. Dodd The politics of the face-in-the-crowd Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 3, pp. 1199–1213, 2014. @article{Mills2014, Recent work indicates that the more conservative one is, the faster one is to fixate on negative stimuli, whereas the less conservative one is, the faster one is to fixate on positive stimuli. The present series of experiments used the face-in-the-crowd paradigm to examine whether variability in the efficiency with which positive and negative stimuli are detected underlies such speed differences. Participants searched for a discrepant facial expression (happy or angry) amid a varying number of neutral distractors (Experiments 1 and 4). A combination of response time and eye movement analyses indicated that variability in search efficiency explained speed differences for happy expressions, whereas variability in post-selectional processes explained speed differences for angry expressions. These results appear to be emotionally mediated as search performance did not vary with political temperament when displays were inverted (Experiment 2) or when controlled processing was required for successful task performance (Experiment 3). Taken together, the present results suggest political temperament is at least partially instantiated by attentional biases for emotional material. |
Haijing Niu; Hao Li; Li Sun; Yongming Su; Jing Huang; Yan Song Visual learning alters the spontaneous activity of the resting human brain: An fNIRS study Journal Article In: BioMed Research International, pp. 1–9, 2014. @article{Niu2014, Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning. |
Takao Noguchi; Neil Stewart In the attraction, compromise, and similarity effects, alternatives are repeatedly compared in pairs on single dimensions Journal Article In: Cognition, vol. 132, no. 1, pp. 44–56, 2014. @article{Noguchi2014, In multi-alternative choice, the attraction, compromise, and similarity effects demonstrate that the value of an alternative is not independent of the other alternatives in the choice-set. Rather, these effects suggest that a choice is reached through the comparison of alternatives. We investigated exactly how alternatives are compared against each other using eye-movement data. The results indicate that a series of comparisons is made in each choice, with a pair of alternatives compared on a single attribute dimension in each comparison. We conclude that psychological models of choice should be based on these single-attribute pairwise comparisons. |
Jared M. Novick; Erika K. Hussey; Susan Teubner-Rhodes; J. Isaiah Harbison; Michael F. Bunting Clearing the garden-path: Improving sentence processing through cognitive control training Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 2, pp. 186–217, 2014. @article{Novick2014, How do general-purpose cognitive abilities affect language processing and comprehension? Recent research emphasises a role for cognitive control (also called executive function, EF) when individuals must override early parsing decisions as new evidence conflicts with their developing interpretation. We tested whether training on non-syntactic EF tasks improves readers' ability to recover from misanalysis during language processing. Participants completed pre/post-reading assessments containing temporarily ambiguous sentences susceptible to misinterpretation. Performance increases on a training task targeting conflict-resolution processes (n-back with "lures") predicted improvements in garden-path recovery. N-back responders, those demonstrating reliable training gains, significantly increased their comprehension accuracy across assessments. Their posttest eye-movement patterns also revealed significantly improved real-time revision following entry into disambiguating sentence regions where cognitive control is hypothesised to engage. Untrained participants and n-back non-responders showed no performance changes. The results provide insight into how nonlinguistic functions contribute to parsing and interpretation and suggest that certain language skills are amenable to improvement via domain-general EF training. |
Antje Nuthmann How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 342–360, 2014. @article{Nuthmann2014, An important factor constraining visual search performance is the inhomogeneity of the visual system. Engaging participants in a scene search task, the present study explored how the different regions of the visual field contribute to search. Gaze-contingent Blindspots and Spotlights were implemented to determine the absolute and relative importance of the different visual regions for object-in-scene search. Three Blindspot/Spotlight radii (1.6°, 2.9°, and 4.1°) were used to differentiate between foveal, parafoveal, and peripheral vision. When searching the scene with artificially impaired foveal or central vision (Blindspots), search performance was surprisingly unimpaired. Foveal vision was not necessary to attain normal search performance. When high-resolution scene information was withheld in both foveal and parafoveal vision (4.1° Blindspot), target localization was unimpaired but it took longer to verify the identity of the target. Artificially impairing extrafoveal scene analysis (Spotlights) affected attentional selection and visual processing; shrinking the Spotlight of high resolution led to longer search times, shorter saccades, and more and longer fixations. The 4.1° radius was identified as the crossover point of equal search times in Blindspot and Spotlight conditions. However, a gaze-data based decomposition of search times into behaviorally defined epochs revealed differences in particular subprocesses of search. |
Antje Nuthmann; Madeleine E. L. Beveridge; Richard C. Shillcock A binocular moving window technique to study the roles of the two eyes in reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 259–282, 2014. @article{Nuthmann2014b, Readers utilize parafoveal information about upcoming words and read less well when this information is denied. McConkie and Rayner (1975) enabled this issue to be explored by developing the moving window paradigm in which the experimenter varies the amount or the quality of the parafoveal information available around the current fixation point. We present a novel binocular version of the moving window technique to study the roles of the two eyes in reading, and we describe a basic experiment allowed by this technique. In the binocular moving window paradigm, each eye contributes its own window to a composite binocular window onto the text. We studied the reading of single lines of text in three conditions: no windows, a symmetrical 8-letters-left and 8-letters-right window for each eye, and a leftward-skewed 14-letters-left and 2-letters-right window for each eye. Note that both eyes saw the composite window onto the text. We tested the hypothesis that readers could be encouraged to generate a greater binocular disparity to augment their window onto the text and to provide a greater preview for one eye. The data offered limited support for this prediction. We observed considerable individual differences in both baseline fixation disparity and in readers' response to the critical asymmetric [14,2] window. |
Antje Nuthmann; Ellen Matthias Time course of pseudoneglect in scene viewing Journal Article In: Cortex, vol. 52, pp. 113–119, 2014. @article{Nuthmann2014a, When we view the visual world, our eyes move from one location to another about three times each second. When looking at pictures of natural scenes, neurologically intact individuals show a leftward bias in the direction of their first eye movement. The present study investigates the time course of this pseudoneglect and how it depends on task-related control. Eye movements were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, esthetic preference judgment, object-in-scene search). In the memorization and preference tasks, pseudoneglect had a maximum extent of about 1° and lasted for about 1500 msec, or 5 fixations. The effect was somewhat reduced in the preference task, which gave subjects free rein to fixate wherever they wanted. During scene search, a task that is guided primarily by top-down control, observers also showed a distinct pseudoneglect. Strikingly, a leftward bias was present even when the search object was located in the right hemispace. Search performance was not affected by the observed spatial asymmetries. The effects likely arise from a right-hemisphere dominance for visuo-spatial attention. |
Naotoshi Abekawa; Toshio Inui; Hiroaki Gomi Eye-hand coordination in on-line visuomotor adjustments Journal Article In: NeuroReport, vol. 25, no. 7, pp. 441–445, 2014. @article{Abekawa2014, When we perform a visually guided reaching action, the brain coordinates our hand and eye movements. Eye-hand coordination has been examined widely, but it remains unclear whether the hand and eye motor systems are coordinated during on-line visuomotor adjustments induced by a target jump during a reaching movement. As such quick motor responses are required when we interact with dynamic environments, eye and hand movements could be coordinated even during on-line motor control. Here, we examine the relationship between online hand adjustment and saccadic eye movement. In contrast to the well-known temporal order of eye and hand initiations where the hand follows the eyes, we found that on-line hand adjustment was initiated before the saccade onset. Despite this order reversal, a correlation between hand and saccade latencies was observed, suggesting that the hand motor system is not independent of eye control even when the hand response was induced before the saccade. Moreover, the latency of the hand adjustment with saccadic eye movement was significantly shorter than that with eye fixation. This hand latency modulation cannot be ascribed to any changes of visual or oculomotor reafferent information as the saccade was not yet initiated when the hand adjustment started. Taken together, the hand motor system would receive preparation signals rather than reafference signals of saccadic eye movements to provide quick manual adjustments of the goal-directed eye-hand movements. |
Irene Ablinger; Walter Huber; Ralph Radach Eye movement analyses indicate the underlying reading strategy in the recovery of lexical readers Journal Article In: Aphasiology, vol. 28, no. 6, pp. 640–657, 2014. @article{Ablinger2014, Background: Psycholinguistic error analysis of dyslexic responses in various reading tasks provides the primary basis for clinically discriminating subtypes of pathological reading. Within this framework, phonology-related errors are indicative of a sequential word processing strategy, whereas lexical and semantic errors are associated with a lexical reading strategy. Despite the large number of published intervention studies, relatively little is known about changes in error distributions during recovery in dyslexic patients. Aims: The main purpose of the present work was to extend the scope of research on the time course of recovery in readers with acquired dyslexia, using eye tracking methodology to examine word processing in real time. The guiding hypothesis was that in lexical readers a reduction of lexical errors and an emerging predominant production of phonological errors should be associated with a change to a more segmental moment-to-moment reading behaviour. Methods & Procedures: Five patients participated in an eye movement supported reading intervention, where both lexical and segmental reading was facilitated. Reading performance was assessed before (T1) and after (T2) therapy intervention via recording of eye movements. Analyses included a novel way to examine the spatiotemporal dynamics of processing using distributions of fixation positions at different time intervals. These subdistributions reveal the gradual shifting of fixation positions during word processing, providing an adequate metric for objective classification of online reading strategies. Outcome & Results: Therapy intervention led to improved reading accuracy in all subjects. In three of five participants, analyses revealed a restructuring in the underlying reading mechanisms from predominantly lexical to more segmental word processing. In contrast, two subjects maintained their lexical reading procedures. Importantly, the fundamental assumption that a high number of phonologically based reading errors must be associated with segmental word processing routines, while the production of lexical errors is indicative of a holistic reading strategy, could not be verified. Conclusions: Our results indicate that despite general improvements in reading performance, only some patients reorganised their word identification process. These contradictory data raise doubts about the validity of psycholinguistic error analysis as an exclusive indicator of changes in reading strategy. We suggest combining this traditional approach with innovative eye-tracking methodology in the interest of more comprehensive diagnostic strategies. |
Irene Ablinger; Kerstin Heyden; Christian Vorstius; Katja Halm; Walter Huber; Ralph Radach An eye movement based reading intervention in lexical and segmental readers with acquired dyslexia Journal Article In: Neuropsychological Rehabilitation, vol. 24, no. 6, pp. 833–867, 2014. @article{Ablinger2014a, Due to their brain damage, aphasic patients with acquired dyslexia often rely to a greater extent on lexical or segmental reading procedures. Thus, therapy intervention is mostly targeted on the more impaired reading strategy. In the present work we introduce a novel therapy approach based on real-time measurement of patients' eye movements as they attempt to read words. More specifically, an eye movement contingent technique of stepwise letter de-masking was used to support sequential reading, whereas fixation-dependent initial masking of non-central letters stimulated a lexical (parallel) reading strategy. Four lexical and four segmental readers with acquired central dyslexia received our intensive reading intervention. All participants showed remarkable improvements as evident in reduced total reading time, a reduced number of fixations per word and improved reading accuracy. Both types of intervention led to item-specific training effects in all subjects. A generalisation to untrained items was only found in segmental readers after the lexical training. Eye movement analyses were also used to compare word processing before and after therapy, indicating that all patients, with one exclusion, maintained their preferred reading strategy. However, in several cases the balance between sequential and lexical processing became less extreme, indicating a more effective individual interplay of both word processing routes. |
Alper Açık; Andreas Bartel; Peter König Real and implied motion at the center of gaze Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–19, 2014. @article{Acik2014, Even though the dynamicity of our environment is a given, much of what we know on fixation selection comes from studies of static scene viewing. We performed a direct comparison of fixation selection on static and dynamic visual stimuli and investigated how far identical mechanisms drive these. We recorded eye movements while participants viewed movie clips of natural scenery and static frames taken from the same movies. Both were presented in the same high spatial resolution (1080 × 1920 pixels). The static condition allowed us to check whether local movement features computed from movies are salient even when presented as single frames. We observed that during the first second of viewing, movement and static features are equally salient in both conditions. Furthermore, predictability of fixations based on movement features decreased faster when viewing static frames as compared with viewing movie clips. Yet even during the later portion of static-frame viewing, the predictive value of movement features was still high above chance. Moreover, we demonstrated that, whereas the sets of movement and static features were statistically dependent within these sets, respectively, no dependence was observed between the two sets. Based on these results, we argue that implied motion is predictive of fixation similarly to real movement and that the onset of motion in natural stimuli is more salient than ongoing movement is. The present results allow us to address to what extent and when static image viewing is similar to the perception of a dynamic environment. |
John F. Ackermann; M. S. Landy Statistical templates for visual search Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–17, 2014. @article{Ackermann2014, How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a "template," i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets. |
Hamed Zivari Adab; Ivo D. Popivanov; Wim Vanduffel; Rufin Vogels Perceptual learning of simple stimuli modifies stimulus representations in posterior inferior temporal cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 10, pp. 2187–2200, 2014. @article{Adab2014, Practicing simple visual detection and discrimination tasks improves performance, a signature of adult brain plasticity. The neural mechanisms that underlie these changes in performance are still unclear. Previously, we reported that practice in discriminating the orientation of noisy gratings (coarse orientation discrimination) increased the ability of single neurons in the early visual area V4 to discriminate the trained stimuli. Here, we ask whether practice in this task also changes the stimulus tuning properties of later visual cortical areas, despite the use of simple grating stimuli. To identify candidate areas, we used fMRI to map activations to noisy gratings in trained rhesus monkeys, revealing a region in the posterior inferior temporal (PIT) cortex. Subsequent single unit recordings in PIT showed that the degree of orientation selectivity was similar to that of area V4 and that the PIT neurons discriminated the trained orientations better than the untrained orientations. Unlike in previous single unit studies of perceptual learning in early visual cortex, more PIT neurons preferred trained compared with untrained orientations. The effects of training on the responses to the grating stimuli were also present when the animals were performing a difficult orthogonal task in which the grating stimuli were task-irrelevant, suggesting that the training effect does not need attention to be expressed. The PIT neurons could support orientation discrimination at low signal-to-noise levels. These findings suggest that extensive practice in discriminating simple grating stimuli not only affects early visual cortex but also changes the stimulus tuning of a late visual cortical area. |
Jos J. Adam; Thamar J. H. Bovend'Eerdt; Fren T. Y. Smulders; Pascal W. M. Van Gerven Strategic flexibility in response preparation: Effects of cue validity on reaction time and pupil dilation Journal Article In: Journal of Cognitive Psychology, vol. 26, no. 2, pp. 166–177, 2014. @article{Adam2014, This study examined the ability of participants to strategically adapt their level of response preparation to the predictive value of preparatory cues. Participants performed the finger-precuing task under three levels of cue validity: 100, 75 and 50% valid. Response preparation was indexed by means of reaction time (RT) and pupil dilation, the latter providing a psychophysiological index of invested effort. Results showed a systematic increase in RT benefits (generated by valid cues) and RT costs (generated by invalid cues) with increments in the predictive value of cues. Converging with these behavioural effects, pupil dilation also increased systematically with greater cue validity during the cue-stimulus interval, suggesting more effortful response preparation with increases in cue validity. Together, these findings confirm the hypothesis that response preparation is flexible and that it can be strategically allocated in proportion to the relative frequency of valid/invalid preparatory cues. |
Elsa Ahlén; Charlotte S. Hills; Hashim M. Hanif; Cristina Rubino; Jason J. S. Barton Learning to read upside-down: A study of perceptual expertise and its acquisition Journal Article In: Experimental Brain Research, vol. 232, no. 3, pp. 1025–1036, 2014. @article{Ahlen2014, Reading is an expert visual and ocular motor function, learned mainly in a single orientation. Characterizing the features of this expertise can be accomplished by contrasts between reading of normal and inverted text, in which perceptual but not linguistic factors are altered. Our goal was to examine this inversion effect in healthy subjects reading text, to derive behavioral and ocular motor markers of perceptual expertise in reading, and to study these parameters before and after training with inverted reading. Seven subjects engaged in a 10-week program of 30 half-hour sessions of reading inverted text. Before and after training, we assessed reading of upright and inverted single words for response time and word-length effects, as well as reading of paragraphs for time required, accuracy, and ocular motor parameters. Before training, inverted reading was characterized by long reading times and large word-length effects, with eye movements showing more and longer fixations, more and smaller forward saccades, and more regressive saccades. Training partially reversed many of these effects in single word and text reading, with the best gains occurring in reading aloud time and proportion of regressive saccades and the least change in forward saccade amplitude. We conclude that reading speed and ocular motor parameters can serve as markers of perceptual expertise during reading and that training with inverted text over 10 weeks results in significant gains of reading expertise in this unfamiliar orientation. This approach may be useful in the rehabilitation of patients with hemianopic dyslexia. |
Sheeraz Ahmad; He Huang; Angela J. Yu Cost-sensitive Bayesian control policy in human active sensing Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 955, 2014. @article{Ahmad2014, An important but poorly understood aspect of sensory processing is the role of active sensing, the use of self-motion such as eye or head movements to focus sensing resources on the most rewarding or informative aspects of the sensory environment. Here, we present behavioral data from a visual search experiment, as well as a Bayesian model of within-trial dynamics of sensory processing and eye movements. Within this Bayes-optimal inference and control framework, which we call C-DAC (Context-Dependent Active Controller), various types of behavioral costs, such as temporal delay, response error, and sensor repositioning cost, are explicitly minimized. This contrasts with previously proposed algorithms that optimize abstract statistical objectives such as anticipated information gain (Infomax) (Butko and Movellan, 2010) and expected posterior maximum (greedy MAP) (Najemnik and Geisler, 2005). We find that C-DAC captures human visual search dynamics better than previous models, in particular a certain form of "confirmation bias" apparent in the way human subjects utilize prior knowledge about the spatial distribution of the search target to improve search speed and accuracy. We also examine several computationally efficient approximations to C-DAC that may present biologically more plausible accounts of the neural computations underlying active sensing, as well as practical tools for solving active sensing problems in engineering applications. To summarize, this paper makes the following key contributions: human visual search behavioral data, a context-sensitive Bayesian active sensing model, a comparative study between different models of human active sensing, and a family of efficient approximations to the optimal model. |
Noor Z. Al Dahhan; George K. Georgiou; Rickie Hung; Douglas P. Munoz; Rauno Parrila; John R. Kirby Eye movements of university students with and without reading difficulties during naming speed tasks Journal Article In: Annals of Dyslexia, vol. 64, no. 2, pp. 137–150, 2014. @article{AlDahhan2014, Although naming speed (NS) has been shown to predict reading into adulthood and differentiate between adult dyslexics and controls, the question remains why NS is related to reading. To address this question, eye movement methodology was combined with three letter NS tasks (the original letter NS task by Denckla & Rudel, Cortex 10:186-202, 1974, and two more developed by Compton, The Journal of Special Education 37:81-94, 2003, with increased phonological or visual similarity of the letters). Twenty undergraduate students with reading difficulties (RD) and 27 without (NRD) were tested on letter NS tasks (eye movements were recorded during the NS tasks), phonological processing, and reading fluency. The results indicated first that the RD group was slower than the NRD group on all NS tasks with no differences between the NS tasks. In addition, the NRD group had shorter fixation durations, longer saccades, and fewer saccades and fixations than the RD group. Fixation duration and fixation count were significant predictors of reading fluency even after controlling for phonological processing measures. Taken together, these findings suggest that the NS-reading relationship is due to two factors: less able readers require more time to acquire stimulus information during fixation and they make more saccades. |
Erman Misirlisoy; Frouke Hermens; Matthew Stavrou; Jennifer Pennells; Robin Walker Spatial primes produce dissociated inhibitory effects on saccadic latencies and trajectories Journal Article In: Vision Research, vol. 96, pp. 1–7, 2014. @article{Misirlisoy2014, In masked priming, a briefly presented prime can facilitate or inhibit responses to a subsequent target. In most instances, targets with an associated response that is congruent with the prime direction speed up reaction times to the target (a positive compatibility effect; PCE). However, under certain circumstances, slower responses for compatible primes are obtained (a negative compatibility effect; NCE). NCEs can be found when a long pre-target delay is used. During the delay, inhibition is assumed to take place, and therefore an effect on saccade trajectories may also be expected. In a previous study, we found the effects of inhibition on response times and trajectories to be dissociated, but this experiment varied the timing of several aspects of the stimulus sequence and it is therefore unclear what caused the dissociation. In the present study, we varied only one aspect of the timing, but replicated the dissociation. By varying just the pre-target delay, we found a PCE for a short delay, and an NCE for a long delay, but saccade trajectories deviated away from prime directions in both conditions. This suggests dissociated inhibitory effects of primes on response times and saccade trajectories. |
Takashi Mitsuda; Mackenzie G. Glaholt Gaze bias during visual preference judgements: Effects of stimulus category and decision instructions Journal Article In: Visual Cognition, vol. 22, no. 1, pp. 11–29, 2014. @article{Mitsuda2014, Prior research has demonstrated that during two-alternative decision making, gaze is biased towards the alternative that is eventually chosen. The Gaze Cascade model proposed by Shimojo, Simion, Shimojo, and Scheier (2003) predicts a larger bias for decisions requiring one to choose the item that is liked the most versus decisions that require one to choose the item that is disliked most. More recently, Park, Shimojo, and Shimojo (2010) showed that preference formation operates differently during decisions among faces and scenes, which suggests that gaze bias might differ depending on whether the decision stimuli are faces or scenes. In the present study we tested these two hypotheses in a within-subject design. Eye movements were monitored while participants (n = 48) made two-alternative Like or Dislike decisions among pairs of faces or scenes. We found remarkably little influence of stimulus type on gaze bias for either decision task, which disconfirms the hypothesis that gaze bias operates differently for faces than scenes. In contrast, we found that gaze bias was stronger for Like decisions than Dislike decisions. To further account for this effect we examined the decision time course, which revealed that this task effect is primarily related to biases in the placement, and duration, of the final dwell prior to response, although there was evidence that the bias began earlier for Like decisions. Implications for mechanisms of gaze allocation during multi-alternative decision making are discussed. |
Matthias Mittner; Wouter Boekel; Adrienne M. Tucker; Brandon M. Turner; Andrew Heathcote; Birte U. Forstmann When the brain takes a break: A model-based analysis of mind wandering Journal Article In: Journal of Neuroscience, vol. 34, no. 49, pp. 16286–16295, 2014. @article{Mittner2014, Mind wandering is a ubiquitous phenomenon in everyday life. In the cognitive neurosciences, mind wandering has been associated with several distinct neural processes, most notably increased activity in the default mode network (DMN), suppressed activity within the anti-correlated (task-positive) network (ACN), and changes in neuromodulation. By using an integrative multimodal approach combining machine-learning techniques with modeling of latent cognitive processes, we show that mind wandering in humans is characterized by inefficiencies in executive control (task-monitoring) processes. This failure is predicted by a single-trial signature of (co)activations in the DMN, ACN, and neuromodulation, and accompanied by a decreased rate of evidence accumulation and response thresholds in the cognitive model. |
Kenichiro Miura; Ryota Hashimoto; Michiko Fujimoto; Hidenaga Yamamori; Yuka Yasuda; Kazutaka Ohi; Satomi Umeda-Yano; Masaki Fukunaga; Masao Iwase; Masatoshi Takeda An integrated eye movement score as a neurophysiological marker of schizophrenia Journal Article In: Schizophrenia Research, vol. 160, no. 1-3, pp. 228–229, 2014. @article{Miura2014, In this study, we aimed to create an integrated eye movement rating scale that indicates the degree to which the eye movements are abnormal, and examined the relationship between the eye movement score and conventional scales of symptom severity and social and cognitive functioning in patients with schizophrenia. The eye movements of 40 patients with schizophrenia and 69 healthy subjects aged 15 to 68 years were recorded. All subjects were biologically unrelated and were of Japanese descent. The integrated eye movement score represents dimensions of schizophrenia that are, in large part, different from the dimensions represented by the conventional scales. Therefore, the score will effectively assist physicians' diagnosis together with conventional symptom/functioning scales, and, in particular, may be useful in the early diagnosis of schizophrenia or its prodrome, where subjective symptoms are relatively obscure. The significance of eye movement scores primarily lies in the diagnosis of schizophrenia, although this score is not intended to replace DSM-5 diagnostic criteria or any other criteria that rely on history, observation, and self-report. A similar methodology using eye movement characteristics may be used to distinguish schizophrenia from bipolar mania, schizoaffective disorder, autism, and related conditions; future studies should test this possibility. |
Koji Miwa; Ton Dijkstra; Patrick Bolger; R. Harald Baayen Reading English with Japanese in mind: Effects of frequency, phonology, and meaning in different-script bilinguals Journal Article In: Bilingualism: Language and Cognition, vol. 17, no. 3, pp. 445–463, 2014. @article{Miwa2014, Previous priming studies suggest that, even for bilinguals of languages with different scripts, non-selective lexical activation arises. This lexical decision eye-tracking study examined contributions of frequency, phonology, and meaning of L1 Japanese words on L2 English word lexical decision processes, using mixed-effects regression modeling. The response times and eye fixation durations of late bilinguals were co-determined by L1 Japanese word frequency and cross-language phonological and semantic similarities, but not by a dichotomous factor encoding cognate status. These effects were not observed for native monolingual readers and were confirmed to be genuine bilingual effects. The results are discussed based on the Bilingual Interactive Activation model (BIA+, Dijkstra & Van Heuven, 2002) under the straightforward assumption that English letter units do not project onto Japanese word units. |
Koji Miwa; Gary Libben; Ton Dijkstra; Harald Baayen The time-course of lexical activation in Japanese morphographic word recognition: Evidence for a character-driven processing model Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 1, pp. 79–113, 2014. @article{Miwa2014a, This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers' locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level. |
Tobias Moehler; Katja Fiehler Effects of spatial congruency on saccade and visual discrimination performance in a dual-task paradigm Journal Article In: Vision Research, vol. 105, pp. 100–111, 2014. @article{Moehler2014, The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. 
In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed across spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance. |
Jeff Moher; Joo-Hyun Song Target selection bias transfers across different response actions Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1117–1130, 2014. @article{Moher2014, Target selection is biased by recent experience. For example, a selected target feature may be stored in memory and bias selection on future trials, such that objects matching that feature are "primed" for selection. In the present study, we examined the role of action history in selection biases. Participants searched for a uniquely colored object. Pretrial cues indicated whether participants should respond with a keypress or a reach movement. If the representation of the feature that biases selection is critically bound with its associated action, we would expect priming effects to be restricted to cases where both the response mode and target color are repeated. However, we found that responses to the target were faster when the target color was repeated, even when the response switched from a reach to a keypress, or vice versa. Priming effects were even observed after "no-go" trials in which a response was withheld, and priming effects transferred across response modes when eye movement recordings ensured that participants did not saccade to the target. These results demonstrate that target features are represented in memory separately from their associated actions and can bias selection on subsequent trials even when a different mode of action output is required. |
Robert J. Molitor; Philip C. Ko; Erin P. Hussey; Brandon A. Ally Memory‐related eye movements challenge behavioral measures of pattern completion and pattern separation Journal Article In: Hippocampus, vol. 24, no. 6, pp. 666–672, 2014. @article{Molitor2014, The hippocampus creates distinct episodes from highly similar events through a process called pattern separation and can retrieve memories from partial or degraded cues through a process called pattern completion. These processes have been studied in humans using tasks where participants must distinguish studied items from perceptually similar lure items. False alarms to lures (incorrectly reporting a perceptually similar item as previously studied) are thought to reflect pattern completion, a retrieval-based process. However, false alarms to lures could also result from insufficient encoding of studied items, leading to impoverished memory of item details and a failure to correctly reject lures. The current study investigated the source of lure false alarms by comparing eye movements during the initial presentation of items to eye movements made during the later presentation of item repetitions and similar lures in order to assess mnemonic processing at encoding and retrieval, respectively. Relative to other response types, lure false alarms were associated with fewer fixations to the initially studied items, suggesting that false alarms result from impoverished encoding. Additionally, lure correct rejections and lure false alarms garnered more fixations than hits, denoting additional retrieval-related processing. The results suggest that measures of pattern separation and completion in behavioral paradigms are not process-pure. |
P. Moon; J. Muday; S. Raynor; J. Schirillo; C. Boydston; M. S. Fairbanks; R. P. Taylor Fractal images induce fractal pupil dilations and constrictions Journal Article In: International Journal of Psychophysiology, vol. 93, no. 3, pp. 316–321, 2014. @article{Moon2014, Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine if its oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner, which may be due to either the fractal neuronal structure or the fractal properties of the image viewed. We present evidence that low-complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the importance of the autonomic nervous system in these patterns. Using the variational method of the box-counting procedure, we demonstrate that low-complexity fractal patterns are found in changes in pupil size (in millimeters) over time, and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed. |
Sarah R. Moore; Yu Fu; Richard A. Depue Social traits modulate attention to affiliative cues Journal Article In: Frontiers in Psychology, vol. 5, pp. 649, 2014. @article{Moore2014, Neurobehavioral models of personality suggest that the salience assigned to particular classes of stimuli vary as a function of traits that reflect both the activity of neurobiological encoding and relevant social experience. In turn, this joint influence modulates the extent that salience influences attentional processes, and hence learning about and responding to those stimuli. Applying this model to the domain of social valuation, we assessed the differential effects on attentional guidance by affiliative cues of (i) a higher-order temperament trait (Social Closeness), and (ii) attachment style in a sample of 57 women. Attention to affiliative pictures paired with either incentive or neutral pictures was assessed using camera eye-tracking. Trait social closeness and attachment avoidance interacted to modulate fixation frequency on affiliative but not on incentive pictures, suggesting that both traits influence the salience assigned to affiliative cues specifically. |
Stéphanie M. Morand; Monika Harvey; Marie-Hélène Grosbras Parieto-occipital cortex shows early target selection to faces in a reflexive orienting task Journal Article In: Cerebral Cortex, vol. 24, no. 4, pp. 898–907, 2014. @article{Morand2014, It is well established that human faces induce stronger involuntary orienting responses than other visual objects. Yet, the timing of this preferential orienting response at the neural level is still unknown. Here, we used an antisaccade paradigm to investigate the neural dynamics preceding the onset of reflexive and voluntary saccades elicited by human faces and nonface visual objects, normalized for their global low-level visual properties. High-density event-related potentials (ERPs) were recorded in observers as they performed interleaved pro- and antisaccades toward a lateralized target. For reflexive saccades, we report an ERP modulation specific to faces as early as 40–60 ms following stimulus onset over parieto-occipital sites, further predicting the speed of saccade execution. This was not linked to differences in the programming of the saccadic eye movements, as it occurred early in time. For the first time, we present electrophysiological evidence of early target selection to faces in reflexive orienting responses over parieto-occipital cortex that facilitates the triggering of saccades toward faces. We argue for a 2-stage process in the representation of a face in involuntary spatial orienting with an initial, rapid implicit processing of the visual properties of a face, followed by subsequent stimulus categorization depicted by the N170 component. |
Elior Moreh; Tal Seidel Malkinson; Ehud Zohary; Nachum Soroker Visual memory in unilateral spatial neglect: Immediate recall versus delayed recognition Journal Article In: Journal of Cognitive Neuroscience, vol. 26, pp. 2155–2170, 2014. @article{Moreh2014, Patients with unilateral spatial neglect (USN) often show impaired performance in spatial working memory tasks, apart from the difficulty retrieving "left-sided" spatial data from long-term memory, shown in the "piazza effect" by Bisiach and colleagues. This study's aim was to compare the effect of the spatial position of a visual object on immediate and delayed memory performance in USN patients. Specifically, immediate verbal recall performance, tested using a simultaneous presentation of four visual objects in four quadrants, was compared with memory in a later-provided recognition task, in which objects were individually shown at the screen center. Unlike healthy controls, USN patients showed a left-side disadvantage and a vertical bias in the immediate free recall task (69% vs. 42% recall for right- and left-sided objects, respectively). In the recognition task, the patients correctly recognized half of "old" items, and their correct rejection rate was 95.5%. Importantly, when the analysis focused on previously recalled items (in the immediate task), no statistically significant difference was found in the delayed recognition of objects according to their original quadrant of presentation. Furthermore, USN patients were able to recollect the correct original location of the recognized objects in 60% of the cases, well beyond chance level. This suggests that the memory trace formed in these cases was not only semantic but also contained a visuospatial tag. Finally, successful recognition of objects missed in recall trials points to formation of memory traces for neglected contralesional objects, which may become accessible to retrieval processes in explicit memory. |
Michael Morgan A bias-free measure of retinotopic tilt adaptation Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–9, 2014. @article{Morgan2014, The traditional method of single stimuli for measuring perceptual illusions and context effects confounds perceptual effects with changes in the observer's decision criterion. By deciding consciously or unconsciously to select one of the two response alternatives more than the other when unsure of the correct response, the observer can shift his or her psychometric function in a manner indistinguishable from a genuine perceptual shift. Here, a spatial two-alternative forced-choice method is described to measure a perceptual aftereffect by its influence on the shape of the psychometric function rather than the mean. The method was tested by measuring the effect of motion adaptation on the apparent Vernier offset of stationary Gabor patterns. The shift due to adaptation was found to be comparable in size to the internal noise, estimated from the slope of the psychometric function. By moving the eyes between adaptation and test, it was determined that adaptation is retinotopic rather than spatiotopic. |
Stefanie Mueller; Katja Fiehler Effector movement triggers gaze-dependent spatial coding of tactile and proprioceptive-tactile reach targets Journal Article In: Neuropsychologia, vol. 62, no. 1, pp. 184–193, 2014. @article{Mueller2014, Reaching in space requires that the target and the hand are represented in the same coordinate system. While studies on visually-guided reaching consistently demonstrate the use of a gaze-dependent spatial reference frame, controversial results exist in the somatosensory domain. We investigated whether effector movement (eye or arm/hand) after target presentation and before reaching leads to gaze-dependent coding of somatosensory targets. Subjects reached to a felt target while directing gaze towards one of seven fixation locations. Touches were applied to the fingertip(s) of the left hand (proprioceptive-tactile targets) or to the dorsal surface of the left forearm (tactile targets). Effector movement was varied in terms of movement of the target limb or a gaze shift. Horizontal reach errors systematically varied as a function of gaze when a movement of either the target effector or gaze was introduced. However, we found no effect of gaze on horizontal reach errors when a movement was absent before the reach. These findings were comparable for tactile and proprioceptive-tactile targets. Our results suggest that effector movement promotes a switch from a gaze-independent to a gaze-dependent representation of somatosensory reach targets. |
Stefanie Mueller; Katja Fiehler Gaze-dependent spatial updating of tactile targets in a localization task Journal Article In: Frontiers in Psychology, vol. 5, pp. 66, 2014. @article{Mueller2014a, There is concurrent evidence that visual reach targets are represented with respect to gaze. For tactile reach targets, we previously showed that an effector movement leads to a shift from a gaze-independent to a gaze-dependent reference frame. Here we aimed to unravel the influence of effector movement (gaze shift) on the reference frame of tactile stimuli using a spatial localization task (yes/no paradigm). We assessed how gaze direction (fixation left/right) alters the perceived spatial location (point of subjective equality) of sequentially presented tactile standard and visual comparison stimuli while effector movement (gaze fixed/shifted) and stimulus order (vis-tac/tac-vis) were varied. In the fixed-gaze condition, subjects maintained gaze at the fixation site throughout the trial. In the shifted-gaze condition, they foveated the first stimulus, then made a saccade toward the fixation site where they held gaze while the second stimulus appeared. Only when an effector movement occurred after the encoding of the tactile stimulus (shifted-gaze, tac-vis) did gaze similarly influence the perceived location of the tactile and the visual stimulus. In contrast, when gaze was fixed or a gaze shift occurred before encoding of the tactile stimulus, gaze differentially affected the perceived spatial relation of the tactile and the visual stimulus, suggesting gaze-dependent coding of only one of the two stimuli. Consistent with previous findings, this implies that visual stimuli vary with gaze irrespective of whether gaze is fixed or shifted. However, a gaze-dependent representation of tactile stimuli seems to critically depend on an effector movement (gaze shift) after tactile encoding, triggering spatial updating of tactile targets in a gaze-dependent reference frame. 
Together with our recent findings on tactile reaching, the present results imply similar underlying reference frames for tactile spatial perception and action. |
Romy Müller; Jens R. Helmert; Sebastian Pannasch Limitations of gaze transfer: Without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do Journal Article In: Acta Psychologica, vol. 152, pp. 19–28, 2014. @article{Mueller2014b, Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. 
Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. |
Aidan P. Murphy; David A. Leopold; Andrew E. Welchman Perceptual memory drives learning of retinotopic biases for bistable stimuli Journal Article In: Frontiers in Psychology, vol. 5, pp. 60, 2014. @article{Murphy2014, The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased towards one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly "train" a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. 
Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to forty minutes, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of five minutes, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain's tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual organization. |
Peter R. Murphy; Joachim Vandekerckhove; Sander Nieuwenhuis Pupil-linked arousal determines variability in perceptual decision making Journal Article In: PLoS Computational Biology, vol. 10, no. 9, pp. e1003854, 2014. @article{Murphy2014a, Decision making between several alternatives is thought to involve the gradual accumulation of evidence in favor of each available choice. This process is profoundly variable even for nominally identical stimuli, yet the neuro-cognitive substrates that determine the magnitude of this variability are poorly understood. Here, we demonstrate that arousal state is a powerful determinant of variability in perceptual decision making. We measured pupil size, a highly sensitive index of arousal, while human subjects performed a motion-discrimination task, and decomposed task behavior into latent decision making parameters using an established computational model of the decision process. In direct contrast to previous theoretical accounts specifying a role for arousal in several discrete aspects of decision making, we found that pupil diameter was uniquely related to a model parameter representing variability in the rate of decision evidence accumulation: Periods of increased pupil size, reflecting heightened arousal, were characterized by greater variability in accumulation rate. Pupil diameter also correlated trial-by-trial with specific patterns of behavior that collectively are diagnostic of changing accumulation rate variability, and explained substantial individual differences in this computational quantity. These findings provide a uniquely clear account of how arousal state impacts decision making, and may point to a relationship between pupil-linked neuromodulation and behavioral variability. They also pave the way for future studies aimed at augmenting the precision with which people make decisions. |