All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications up until 2024 (with some early 2025s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2013 |
Ian G. M. Cameron; Donald C. Brien; Kira Links; Sarah Robichaud; Jennifer D. Ryan; Douglas P. Munoz; Tiffany W. Chow Changes to saccade behaviors in Parkinson's disease following dancing and observation of dancing Journal Article In: Frontiers in Neurology, vol. 4, pp. 22, 2013. @article{Cameron2013, BACKGROUND: The traditional view of Parkinson's disease (PD) as a motor disorder only treated by dopaminergic medications is now shifting to include non-pharmacologic interventions. We have noticed that patients with PD obtain an immediate, short-lasting benefit to mobility by the end of a dance class, suggesting some mechanism by which dancing reduces bradykinetic symptoms. We have also found that patients with PD are unimpaired at initiating highly automatic eye movements to visual stimuli (pro-saccades) but are impaired at generating willful eye movements away from visual stimuli (anti-saccades). We hypothesized that the mechanisms by which a dance class improves movement initiation may generalize to the brain networks impacted in PD (frontal lobe and basal ganglia, BG), and thus could be assessed objectively by measuring eye movements, which rely on the same neural circuitry. METHODS: Participants with PD performed pro- and anti-saccades before, and after, a dance class. "Before" and "after" saccade performance measurements were compared. These measurements were then contrasted with a control condition (observing a dance class in a video), and with older and younger adult populations, who rested for an hour between measurements. RESULTS: We found an improvement in anti-saccade performance following the observation of dance (but not following dancing), but we found a detriment in pro-saccade performance following dancing. CONCLUSION: We suggest that observation of dance induced plasticity changes in frontal-BG networks that are important for executive control. Dancing, in contrast, increased voluntary movement signals that benefited mobility, but interfered with the automaticity of efficient pro-saccade execution. |
Anneloes R. Canestrelli; Willem M. Mak; Ted J. M. Sanders Causal connectives in discourse processing: How differences in subjectivity are reflected in eye movements Journal Article In: Language and Cognitive Processes, vol. 28, no. 9, pp. 1394–1413, 2013. @article{Canestrelli2013, Causal connectives are often considered to provide crucial information about the discourse structure; they signal a causal relation between two text segments. However, in many languages of the world causal connectives specialise in either subjective or objective causal relations. We investigate whether this type of (discourse) information is used during the online processing of causal connectives by focusing on the Dutch connectives want and omdat, both translated by because. In three eye-tracking studies we demonstrate that the Dutch connective want, which is a prototypical marker of subjective CLAIM-ARGUMENT relations, leads to an immediate processing disadvantage compared to omdat, a prototypical marker of objective CONSEQUENCE-CAUSE relations. This effect was observed at the words immediately following the connective, at which point readers cannot yet establish the causal relation on the basis of the content, which means that the effect is solely induced by the connectives. In Experiment 2 we demonstrate that this effect is related to the representation of the first clause of a want relation as a mental state. In Experiment 3, we show that the use of omdat in relations that do not allow for a CONSEQUENCE-CAUSE interpretation leads to serious processing difficulties at the end of those relations. On the basis of these results, we argue that want triggers a subjective mental state interpretation of S1, whereas omdat triggers the construction of an objective CONSEQUENCE-CAUSE relation. These results illustrate that causal connectives provide subtle information about semantic-pragmatic distinctions between types of causal relations, which immediately influences online processing. |
Almudena Capilla; Pascal Belin; Joachim Gross The early spatio-temporal correlates and task independence of cerebral voice processing studied with MEG Journal Article In: Cerebral Cortex, vol. 23, no. 6, pp. 1388–1395, 2013. @article{Capilla2013, Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ∼300-400 ms, or earlier at ∼200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing. |
Rodrigo A. Cárdenas; Lauren Julius Harris; Mark W. Becker Sex differences in visual attention toward infant faces Journal Article In: Evolution and Human Behavior, vol. 34, no. 4, pp. 280–287, 2013. @article{Cardenas2013, Parental care and alloparental care are major evolutionary dimensions of the biobehavioral repertoire of many species, including human beings. Despite their importance in the course of human evolution and the likelihood that they have significantly shaped human cognition, the nature of the cognitive mechanisms underlying alloparental care is still largely unexplored. In this study, we examined whether one such cognitive mechanism is a visual attentional bias toward infant features, and if so, whether and how it is related to the sex of the adult and the adult's self-reported interest in infants. We used eye-tracking to measure the eye movements of nulliparous undergraduates while they viewed pairs of faces consisting of one adult face (a man or woman) and one infant face (a boy or girl). Subjects then completed two questionnaires designed to measure their interest in infants. Results showed, consistent with the significance of alloparental care in human evolution, that nulliparous adults have an attentional bias toward infants. Results also showed that women's interest in and attentional bias towards infants were stronger and more stable than men's. These findings are consistent with the hypothesis that, due to their central role in infant care, women have evolved a greater and more stable sensitivity to infants. The results also show that eye movements can be successfully used to assess individual differences in interest in infants. © 2013 Elsevier Inc. |
Maria Nella Carminati; Pia Knoeferle Effects of speaker emotional facial expression and listener age on incremental sentence processing Journal Article In: PLoS ONE, vol. 8, no. 9, pp. e72559, 2013. @article{Carminati2013, We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' eye fixations on the pictures while processing the sentence were increased when the speaker's face was (vs. wasn't) emotionally congruent with the sentence. The enhancement occurred from the early stages of referential disambiguation and was modulated by age. For the older adults it was more pronounced with positive faces, and for the younger ones with negative faces. These findings demonstrate for the first time that emotional facial expressions, similarly to previously-studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing. |
Thomas C. Cassey; David R. Evens; Rafal Bogacz; James A. R. Marshall; Casimir J. H. Ludwig Adaptive sampling of information during perceptual decision-making Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e78993, 2013. @article{Cassey2013, In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy. |
Mara Breen; Charles Clifton Stress matters revisited: A boundary change experiment Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 10, pp. 1896–1909, 2013. @article{Breen2013, Breen and Clifton (Stress matters: Effects of anticipated lexical stress on silent reading. Journal of Memory and Language, 2011, 64, 153-170) argued that readers' eye movements during silent reading are influenced by the stress patterns of words. This claim was supported by the observation that syntactic reanalysis that required concurrent metrical reanalysis (e.g., a change from the noun form of abstract to the verb form) resulted in longer reading times than syntactic reanalysis that did not require metrical reanalysis (e.g., a change from the noun form of report to the verb form). However, the data contained a puzzle: The disruption appeared on the critical word (abstract, report) itself, although the material that forced the part of speech change did not appear until the next region. Breen and Clifton argued that parafoveal preview of the disambiguating material triggered the revision and that the eyes did not move on until a fully specified lexical representation of the critical word was achieved. The present experiment used a boundary change paradigm in which parafoveal preview of the disambiguating region was prevented. Once again, an interaction was observed: Syntactic reanalysis resulted in particularly long reading times when it also required metrical reanalysis. However, now the interaction did not appear on the critical word, but only following the disambiguating region. This pattern of results supports Breen and Clifton's claim that readers form an implicit metrical representation of text during silent reading. |
Julie Brisson; Marc Mainville; Dominique Mailloux; Christelle Beaulieu; Josette Serres; Sylvain Sirois Pupil diameter measurement errors as a function of gaze direction in corneal reflection eyetrackers Journal Article In: Behavior Research Methods, vol. 45, no. 4, pp. 1322–1331, 2013. @article{Brisson2013, Pupil dilation is a useful, noninvasive technique for measuring the change in cognitive load. Since it is implicit and nonverbal, it is particularly useful with preverbal or nonverbal participants. In cognitive psychology, pupil dilation is most often measured by corneal reflection eye-tracking devices. The present study investigates the effect of gaze position on pupil size estimation by three common eye-tracking systems. The task consisted of a simple object pursuit situation, as a sphere rotated around the display screen. Systematic errors of pupil size estimation were found with all three systems. Implications for task-elicited pupillometry, especially for gaze-contingent studies such as object tracking or reading, are discussed. |
Jon Brock; Samantha Bzishvili Deconstructing Frith and Snowling's homograph-reading task: Implications for autism spectrum disorders Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 9, pp. 1764–1773, 2013. @article{Brock2013, The poor performance of autistic individuals on a test of homograph reading is widely interpreted as evidence for a reduction in sensitivity to context termed "weak central coherence". To better understand the cognitive processes involved in completing the homograph-reading task, we monitored the eye movements of nonautistic adults as they completed the task. Using single trial analysis, we determined that the time between fixating and producing the homograph (eye-to-voice span) increased significantly across the experiment and predicted accuracy of homograph pronunciation, suggesting that participants adapted their reading strategy to minimize pronunciation errors. Additionally, we found evidence for interference from previous trials involving the same homograph. This progressively reduced the initial advantage for dominant homograph pronunciations as the experiment progressed. Our results identify several additional factors that contribute to performance on the homograph reading task and may help to reconcile the findings of poor performance on the test with contradictory findings from other studies using different measures of context sensitivity in autism. The results also undermine some of the broader theoretical inferences that have been drawn from studies of autism using the homograph task. Finally, we suggest that this approach to task deconstruction might have wider applications in experimental psychology. |
Susanne Brouwer; Holger Mitterer; Falk Huettig Discourse context and the recognition of reduced and canonical spoken words Journal Article In: Applied Psycholinguistics, vol. 34, no. 3, pp. 519–539, 2013. @article{Brouwer2013, In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms. |
Harriet R. Brown; Karl J. Friston The functional anatomy of attention: A DCM study Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 784, 2013. @article{Brown2013, Recent formulations of attention—in terms of predictive coding—associate attentional gain with the expected precision of sensory information. Formal models of the Posner paradigm suggest that validity effects can be explained in a principled (Bayes optimal) fashion in terms of a cue-dependent setting of precision or gain on the sensory channels reporting anticipated target locations, which is updated selectively by invalid targets. This normative model is equipped with a biologically plausible process theory in the form of predictive coding, where precision is encoded by the gain of superficial pyramidal cells reporting prediction error. We used dynamic causal modeling to assess the evidence in magnetoencephalographic responses for cue-dependent and top-down updating of superficial pyramidal cell gain. Bayesian model comparison suggested that it is almost certain that differences in superficial pyramidal cells gain—and its top-down modulation—contribute to observed responses; and we could be more than 80% certain that anticipatory effects on post-synaptic gain are limited to visual (extrastriate) sources. These empirical results speak to the role of attention in optimizing perceptual inference and its formulation in terms of predictive coding. |
Enzo P. Brunetti; Pedro E. Maldonado; Francisco Aboitiz Phase synchronization of delta and theta oscillations increase during the detection of relevant lexical information Journal Article In: Frontiers in Psychology, vol. 4, pp. 308, 2013. @article{Brunetti2013, During monitoring of the discourse, the detection of the relevance of incoming lexical information could be critical for its incorporation to update mental representations in memory. Because, in these situations, the relevance for lexical information is defined by abstract rules that are maintained in memory, a central aspect to elucidate is how an abstract level of knowledge maintained in mind mediates the detection of the lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information, based on "kept in mind" rules deriving from more abstract semantic information. We tested our hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities derived from implicit aspects. We used a categorization task, in which the semantic relevance was previously defined based on the congruency between a kept in mind category (abstract knowledge), and the lexical semantic information presented. Our results show that during the detection of the relevant lexical information, phase synchronization of neuronal oscillations selectively increases in delta and theta frequency bands during the interval of semantic analysis. These increments occurred irrespective of the semantic category maintained in memory, had a temporal profile specific for each subject, and were mainly induced, as they had no effect on the evoked mean global field power. Also, recruitment of an increased number of pairs of electrodes was a robust observation during the detection of semantic contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule, could be mediated by increasing the global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended number of cortical regions. |
Janet H. Bultitude; Stefan Van der Stigchel; Tanja C. W. Nijboer Prism adaptation alters spatial remapping in healthy individuals: Evidence from double-step saccades Journal Article In: Cortex, vol. 49, no. 3, pp. 759–770, 2013. @article{Bultitude2013, The visual system is able to represent and integrate large amounts of information as we move our gaze across a scene. This process, called spatial remapping, enables the construction of a stable representation of our visual environment despite constantly changing retinal images. Converging evidence implicates the parietal lobes in this process, with the right hemisphere having a dominant role. Indeed, lesions to the right parietal lobe (e.g., leading to hemispatial neglect) frequently result in deficits in spatial remapping. Research has demonstrated that recalibrating visual, proprioceptive and motor reference frames using prism adaptation ameliorates neglect symptoms and induces neglect-like performance in healthy people - one example of the capacity for rapid neural plasticity in response to new sensory demands. Because of the influence of prism adaptation on parietal functions, the present research investigates whether prism adaptation alters spatial remapping in healthy individuals. To this end twenty-eight undergraduates completed blocks of a double-step saccade (DSS) task after sham adaptation and adaptation to leftward- or rightward-shifting prisms. The results were consistent with an impairment in spatial remapping for left visual field targets following adaptation to leftward-shifting prisms. These results suggest that temporarily realigning spatial representations using sensory-motor adaptation alters right-hemisphere remapping processes in healthy individuals. The implications for the possible mechanisms of the amelioration of hemispatial neglect after prism adaptation are discussed. |
Antimo Buonocore; Robert D. McIntosh Attention modulates saccadic inhibition magnitude Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1051–1059, 2013. @article{Buonocore2013, Visual transient events during ongoing eye movement tasks inhibit saccades within a precise temporal window, spanning from around 60-120 ms after the event, having maximum effect at around 90 ms. It is not yet clear to what extent this saccadic inhibition phenomenon can be modulated by attention. We studied the saccadic inhibition induced by a bright flash above or below fixation, during the preparation of a saccade to a lateralized target, under two attentional manipulations. Experiment 1 demonstrated that exogenous precueing of a distractor's location reduced saccadic inhibition, consistent with inhibition of return. Experiment 2 manipulated the relative likelihood that a distractor would be presented above or below fixation. Saccadic inhibition magnitude was relatively reduced for distractors at the more likely location, implying that observers can endogenously suppress interference from specific locations within an oculomotor map. We discuss the implications of these results for models of saccade target selection in the superior colliculus. |
Wesley K. Burge; Lesley A. Ross; Franklin R. Amthor; William G. Mitchell; Alexander Zotov; Kristina M. Visscher Processing speed training increases the efficiency of attentional resource allocation in young adults Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 684, 2013. @article{Burge2013, Cognitive training has been shown to improve performance on a range of tasks. However, the mechanisms underlying these improvements are still unclear. Given the wide range of transfer effects, it is likely that these effects are due to a factor common to a wide range of tasks. One such factor is a participant's efficiency in allocating limited cognitive resources. The impact of a cognitive training program, Processing Speed Training (PST), on the allocation of resources to a set of visual tasks was measured using pupillometry in 10 young adults as compared to a control group of 10 young adults (n = 20). PST is a well-studied computerized training program that involves identifying simultaneously presented central and peripheral stimuli. As training progresses, the task becomes increasingly more difficult, by including peripheral distracting stimuli and decreasing the duration of stimulus presentation. Analysis of baseline data confirmed that pupil diameter reflected cognitive effort. After training, participants randomized to PST used fewer attentional resources to perform complex visual tasks as compared to the control group. These pupil diameter data indicated that PST appears to increase the efficiency of attentional resource allocation. Increases in cognitive efficiency have been hypothesized to underlie improvements following experience with action video games, and improved cognitive efficiency has been hypothesized to underlie the benefits of PST in older adults. These data reveal that these training schemes may share a common underlying mechanism of increasing cognitive efficiency in younger adults. |
Melanie R. Burke; P. Bramley; Claudia C. Gonzalez; D. J. McKeefry The contribution of the right supra-marginal gyrus to sequence learning in eye movements Journal Article In: Neuropsychologia, vol. 51, no. 14, pp. 3048–3056, 2013. @article{Burke2013, We investigated the role of the human right Supra-Marginal Gyrus (SMG) in the generation of learned eye movement sequences. Using MRI-guided transcranial magnetic stimulation (TMS) we disrupted neural activity in the SMG whilst human observers performed saccadic eye movements to multiple presentations of either predictable or random target sequences. For the predictable sequences we observed shorter saccadic latencies from the second presentation of the sequence. However, these anticipatory improvements in performance were significantly reduced when TMS was delivered to the right SMG during the inter-trial retention periods. No deficits were induced when TMS was delivered concurrently with the onset of the target visual stimuli. For the random version of the task, neither delivery of TMS to the SMG during the inter-trial period nor during the presentation of the target visual stimuli produced any deficit in performance that was significantly different from the no-TMS or control conditions. These findings demonstrate that neural activity within the right SMG is causally linked to the ability to perform short latency predictive saccades resulting from sequence learning. We conclude that neural activity in rSMG constitutes an instruction set with spatial and temporal directives that are retained and subsequently released for predictive motor planning and responses. |
John Christie; Matthew D. Hilchey; Raymond M. Klein Inhibition of return is at the midpoint of simultaneous cues Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 8, pp. 1610–1618, 2013. @article{Christie2013, When multiple cues are presented simultaneously, Klein, Christie, and Morris (Psychonomic Bulletin & Review 12:295-300, 2005) found a gradient of inhibition (of return, IOR), with the slowest simple manual detection responses occurring to targets in the direction of the center of gravity of the cues. Here, we explored the possibility of extending this finding to the saccade response modality, using methods of data analysis that allowed us to consider the relative contributions of the distance from the target to the center of gravity of the array of cues and the nearest element in the cue array. We discovered that the bulk of the IOR effect with multiple cues, in both the previous and present studies, can be explained by the distance between the target and the center of gravity of the cue array. The present results are consistent with the proposal advanced by Klein et al., (2005) suggesting that this IOR effect is due to population coding in the oculomotor pathways (e.g., the superior colliculus) driving the eye movement system toward the center of gravity of the cued array. |
Harald Clahsen; Loay Balkhair; John Sebastian Schutter; Ian Cunnings The time course of morphological processing in a second language Journal Article In: Second Language Research, vol. 29, no. 1, pp. 7–31, 2013. @article{Clahsen2013, We report findings from psycholinguistic experiments investigating the detailed timing of processing morphologically complex words by proficient adult second (L2) language learners of English in comparison to adult native (L1) speakers of English. The first study employed the masked priming technique to investigate -ed forms with a group of advanced Arabic-speaking learners of English. The results replicate previously found L1/L2 differences in morphological priming, even though in the present experiment an extra temporal delay was offered after the presentation of the prime words. The second study examined the timing of constraints against inflected forms inside derived words in English using the eye-movement monitoring technique and an additional acceptability judgment task with highly advanced Dutch L2 learners of English in comparison to adult L1 English controls. Whilst offline the L2 learners performed native-like, the eye-movement data showed that their online processing was not affected by the morphological constraint against regular plurals inside derived words in the same way as in native speakers. Taken together, these findings indicate that L2 learners are not just slower than native speakers in processing morphologically complex words, but that the L2 comprehension system employs real-time grammatical analysis (in this case, morphological information) less than the L1 system. |
Alasdair D. F. Clarke; Moreno I. Coco; Frank Keller The impact of attentional, linguistic, and visual features during object naming Journal Article In: Frontiers in Psychology, vol. 4, pp. 927, 2013. @article{Clarke2013, Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. |
Charles Clifton Situational context affects definiteness preferences: Accommodation of presuppositions Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 487–501, 2013. @article{Clifton2013, In 4 experiments, we used self-paced reading and eye tracking to demonstrate that readers are, under some conditions, sensitive to the presuppositions of definite versus indefinite determiner phrases (DPs). Reading was faster when the context stereotypically provided a single possible referent for a definite DP or multiple possible referents for an indefinite DP than when context and DP definiteness were mismatched. This finding goes beyond previous evidence that definite DPs are processed more rapidly than are indefinite DPs when there is a unique or familiar referent in the context, showing that readers are sensitive to the semantics and pragmatics of (in)definiteness. However, the finding was obtained only when readers had to perform a simple arithmetic task between reading a sentence and seeing a question about it. The intervening task may have encouraged them to process the sentence more deeply in order to form a representation that would persist while doing the arithmetic. The methodological implications of this observation are discussed. |
Brian A. Coffman; Piyadasa Kodituwakku; Elizabeth L. Kodituwakku; Lucinda Romero; Nirupama Muniswamy Sharadamma; David Stone; Julia M. Stephen Primary visual response (M100) delays in adolescents with FASD as measured with MEG Journal Article In: Human Brain Mapping, vol. 34, no. 11, pp. 2852–2862, 2013. @article{Coffman2013, Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latencies of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. |
Andrew L. Cohen Software for the automatic correction of recorded eye fixation locations in reading experiments Journal Article In: Behavior Research Methods, vol. 45, no. 3, pp. 679–683, 2013. @article{Cohen2013, Because the recorded location of an eyetracking fixation is not a perfect measure of the actual fixated location, the recorded fixation locations must be adjusted before analysis. Fixations are typically corrected manually. Making such changes, however, is time-consuming and necessarily involves a subjective component. The goal of this article is to introduce software to automate parts of the correction process. The initial focus is on the correction of vertical locations and the removal of outliers and ambiguous fixations in reading experiments. The basic idea behind the algorithm is to use linear regression to assign each fixation to a text line and to identify outliers. The freely available software is implemented as a function, fix_align.R, written in R. |
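The core idea the abstract describes (using linear regression to assign each fixation to a text line and to flag outliers) can be illustrated with a short sketch. The Python function below is a hypothetical illustration written for this listing; the function name, detrending step, and outlier threshold are assumptions, and it is not the published fix_align.R implementation.

```python
import numpy as np

def align_fixations_to_lines(fix_x, fix_y, line_ys, outlier_px=40.0):
    """Snap recorded fixations to text-line baselines and flag outliers.

    fix_x, fix_y : recorded fixation coordinates in pixels
    line_ys      : known y-coordinates of the text-line baselines
    outlier_px   : residual (pixels) beyond which a fixation is treated
                   as an outlier (placeholder value)
    Returns (line_index, corrected_y, is_outlier).
    """
    fix_x = np.asarray(fix_x, dtype=float)
    fix_y = np.asarray(fix_y, dtype=float)
    line_ys = np.asarray(line_ys, dtype=float)

    # Linear regression of vertical on horizontal position removes any
    # systematic vertical drift of the recorded fixations across the page.
    slope, intercept = np.polyfit(fix_x, fix_y, deg=1)
    detrended_y = fix_y - (slope * fix_x + intercept) + fix_y.mean()

    # Assign each detrended fixation to the nearest text line.
    dist = np.abs(detrended_y[:, None] - line_ys[None, :])
    line_index = dist.argmin(axis=1)
    residual = dist[np.arange(len(fix_y)), line_index]

    is_outlier = residual > outlier_px
    corrected_y = line_ys[line_index]
    return line_index, corrected_y, is_outlier
```

In a workflow like the one the abstract describes, the corrected vertical positions would replace the recorded ones before region-of-interest analysis, and flagged fixations would be reviewed or discarded.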
A. R. Bogadhi; Anna Montagnini; Guillaume S. Masson Dynamic interaction between retinal and extraretinal signals in motion integration for smooth pursuit Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 1–26, 2013. @article{Bogadhi2013, Due to the aperture problem, the initial direction of tracking responses to a translating bar is biased towards the direction orthogonal to the bar. This observation offers a powerful way to explore the interactions between retinal and extraretinal signals in controlling our actions. We conducted two experiments to probe these interactions by briefly (200 and 400 ms) blanking the moving target (45° or 135° tilted bar) during steady state (Experiment 1) and at different moments during the early phase of pursuit (Experiment 2). In Experiment 1, we found a marginal but statistically significant directional bias on target reappearance for all subjects in at least one blank condition (200 or 400 ms). In Experiment 2, no systematic significant directional bias was observed at target reappearance after a blank. These results suggest that the weighting of retinal and extraretinal signals is dynamically modulated during the different phases of pursuit. Based on our previous theoretical work on motion integration, we propose a new closed-loop two-stage recurrent Bayesian model where retinal and extraretinal signals are dynamically weighted based on their respective reliabilities and combined to compute the visuomotor drive. With a single free parameter, the model reproduces many aspects of smooth pursuit observed across subjects during and immediately after target blanking. It provides a new theoretical framework to understand how different signals are dynamically combined based on their relative reliability to adaptively control our actions. Overall, the model and behavioral results suggest that human subjects rely more strongly on prediction during the early phase than in the steady state phase of pursuit. |
B. Bonev; Lewis L. Chuang; F. Escolano How do image complexity, task demands and looking biases influence human gaze behavior? Journal Article In: Pattern Recognition Letters, vol. 34, no. 7, pp. 723–730, 2013. @article{Bonev2013, In this paper we propose an information-theoretic approach to understand eye-movement patterns, in relation to the task performed and image complexity. We commence with the analysis of the distributions and amplitudes of eye-movement saccades, performed across two different image-viewing tasks: free viewing and visual search. Our working hypothesis is that the complexity of image information and task demands should interact. This should be reflected in the Markovian pattern of short and long saccades. We compute high-order Markovian models of performing a large saccade after many short ones and also propose a novel method for quantifying image complexity. The analysis of the interaction between high-order Markovianity, task and image complexity supports our hypothesis. |
Robert W. Booth; Ulrich W. Weger The function of regressions in reading: Backward eye movements allow rereading Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 82–97, 2013. @article{Booth2013, Standard text reading involves frequent eye movements that go against normal reading order. The function of these "regressions" is still largely unknown. The most obvious explanation is that regressions allow for the rereading of previously fixated words. Alternatively, physically returning the eyes to a word's location could cue the reader's memory for that word, effectively aiding the comprehension process via location priming (the "deictic pointer hypothesis"). In Experiment 1, regression frequency was reduced when readers knew that information was no longer available for rereading. In Experiment 2, readers listened to auditorily presented text while moving their eyes across visual placeholders on the screen. Here, rereading was impossible, but deictic pointers remained available, yet the readers did not make targeted regressions in this experiment. In Experiment 3, target words in normal sentences were changed after reading. Where the eyes later regressed to these words, participants generally remained unaware of the change, and their answers to comprehension questions indicated that the new meaning of the changed word was what determined their sentence representations. These results suggest that readers use regressions to reread words and not to cue their memory for previously read words. |
Sabine Born; Ulrich Ansorge; Dirk Kerzel Predictability of spatial and non-spatial target properties improves perception in the pre-saccadic interval Journal Article In: Vision Research, vol. 91, pp. 93–101, 2013. @article{Born2013, In a dual-task paradigm with a perceptual discrimination task and a concurrent saccade task, we examined participants' ability to make use of prior knowledge of a critical property of the perceptual target to improve discrimination. Previous research suggests that during a short time window before a saccade, covert attention is imperatively directed towards the saccade target location. Consequently, discrimination of perceptual targets at the saccade target location is better than at other locations. We asked whether the obligatory pre-saccadic attention shift prevents perceptual benefits arising for perceptual target stimuli with predictable as opposed to non-predictable properties. We compared conditions in which the color or location of the perceptual target was constant to conditions in which those properties varied randomly across trials. In addition to the expected improvements of perception at the saccade target location, we found perception to be better with constant than with random properties of the perceptual target. Thus, color or location information about an upcoming perceptual target facilitates perception even while spatial attention is shifted to the saccade target. The improvement occurred irrespective of the saccade target location, which suggests that the underlying mechanism is independent of the pre-saccadic attention shift, but alternative interpretations are discussed as well. |
Arielle Borovsky; Erin Burns; Jeffrey L. Elman; Julia L. Evans Lexical activation during sentence comprehension in adolescents with history of specific language impairment Journal Article In: Journal of Communication Disorders, vol. 46, no. 5-6, pp. 413–427, 2013. @article{Borovsky2013, One remarkable characteristic of speech comprehension in typically developing (TD) children and adults is the speed with which the listener can integrate information across multiple lexical items to anticipate upcoming referents. Although children with Specific Language Impairment (SLI) show lexical deficits (Sheng & McGregor, 2010) and slower speed of processing (Leonard et al., 2007), relatively little is known about how these deficits manifest in real-time sentence comprehension. In this study, we examine lexical activation in the comprehension of simple transitive sentences in adolescents with a history of SLI and age-matched, TD peers. Participants listened to sentences that consisted of the form, Article-Agent-Action-Article-Theme, (e.g., The pirate chases the ship) while viewing pictures of four objects that varied in their relationship to the Agent and Action of the sentence (e.g., Target, Agent-Related, Action-Related, and Unrelated). Adolescents with SLI were as fast as their TD peers to fixate on the sentence's final item (the Target) but differed in their post-action onset visual fixations to the Action-Related item. Additional exploratory analyses of the spatial distribution of their visual fixations revealed that the SLI group had a qualitatively different pattern of fixations to object images than did the control group. The findings indicate that adolescents with SLI integrate lexical information across words to anticipate likely or expected meanings with the same relative fluency and speed as do their TD peers. However, the failure of the SLI group to show increased fixations to Action-Related items after the onset of the action suggests lexical integration deficits that result in failure to consider alternate sentence interpretations.Learning outcomes: As a result of this paper, the reader will be able to describe several benefits of using eye-tracking methods to study populations with language disorders. They should also recognize several potential explanations for lexical deficits in SLI, including possible reduced speed of processing, and degraded lexical representations. Finally, they should recall the main outcomes of this study, including that adolescents with SLI show different timing and location of eye-fixations while interpreting sentences than their age-matched peers. © 2013. |
S. E. Bosch; Sebastiaan F. W. Neggers; Stefan Van der Stigchel The role of the frontal eye fields in oculomotor competition: Image-guided TMS enhances contralateral target selection Journal Article In: Cerebral Cortex, vol. 23, no. 4, pp. 824–832, 2013. @article{Bosch2013, In order to execute a correct eye movement to a target in a search display, a saccade program toward the target element must be activated, while saccade programs toward distracting elements must be inhibited. The aim of the present study was to elucidate the role of the frontal eye fields (FEFs) in oculomotor competition. Functional magnetic resonance imaging-guided single-pulse transcranial magnetic stimulation (TMS) was administered over either the left FEF, the right FEF, or the vertex (control site) at 3 time intervals after target presentation, while subjects performed an oculomotor capture task. When TMS was applied over the FEF contralateral to the visual field where a target was presented, there was less interference of an ipsilateral distractor compared with FEF stimulation ipsilateral to the target's visual field or TMS over vertex. Furthermore, TMS over the FEFs decreased latencies of saccades to the contralateral visual field, irrespective of whether the saccade was directed to the target or to the distractor. These findings show that single-pulse TMS over the FEFs enhances the selection of a target in the contralateral visual field and decreases saccade latencies to the contralateral visual field. |
Oliver Bott The processing domain of aspectual interpretation Journal Article In: Studies in Linguistics and Philosophy, vol. 93, pp. 195–229, 2013. @article{Bott2013, In the semantic literature lexical aspect is often treated as a property of VPs or even of whole sentences. Does the interpretation of lexical aspect – contrary to the incrementality assumption commonly made in psycholinguistics – have to wait until the verb and all its arguments are present? To address this issue, we conducted an offline study, two self-paced reading experiments and an eyetracking experiment to investigate aspectual mismatch and aspectual coercion in German sentences while manipulating the position of the mismatching or coercing stimulus. Our findings provide evidence that mismatch detection and aspectual repair depend on a complete verb-argument structure. When the verb didn't receive all its (minimally required) arguments no mismatch or coercion effects showed up at the mismatching or coercing stimulus. Effects were delayed until a later point after all the arguments had been encountered. These findings have important consequences for semantic theory and for processing accounts of aspectual semantics. As far as semantic theory is concerned, it has to model lexical aspect as a supralexical property coming only into play at the sentence level. For theories of semantic processing the results are even more striking because they indicate that (at least some) semantic phenomena are processed on a more global level than it would be expected assuming incremental semantic interpretation. |
Wei-Ying Chen; Piers D. Howe; Alex O. Holcombe Resource demands of object tracking and differential allocation of the resource Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 4, pp. 710–725, 2013. @article{Chen2013a, The attentional processes for tracking moving objects may be largely hemisphere-specific. Indeed, in our first two experiments the maximum object speed (speed limit) for tracking targets in one visual hemifield (left or right) was not significantly affected by a requirement to track additional targets in the other hemifield. When the additional targets instead occupied the same hemifield as the original targets, the speed limit was reduced. At slow target speeds, however, adding a second target to the same hemifield had little effect. At high target speeds, the cost of adding a same-hemifield second target was approximately as large as would occur if observers could only track one of the targets. This shows that performance with a fast-moving target is very sensitive to the amount of resource allocated. In a third experiment, we investigated whether the resources for tracking can be distributed unequally between two targets. The speed limit for a given target was higher if the second target was slow rather than fast, suggesting that more resource was allocated to the faster of the two targets. This finding was statistically significant only for targets presented in the same hemifield, consistent with the theory of independent resources in the two hemifields. Some limited evidence was also found for resource sharing across hemifields, suggesting that attentional tracking resources may not be entirely hemifield-specific. Together, these experiments indicate that the largely hemisphere-specific tracking resource can be differentially allocated to faster targets. |
Joey T. Cheng; Jessica L. Tracy; Tom Foulsham; Alan Kingstone; Joseph Henrich Two ways to the top: Evidence that dominance and prestige are distinct yet viable avenues to social rank and influence Journal Article In: Journal of Personality and Social Psychology, vol. 104, no. 1, pp. 103–125, 2013. @article{Cheng2013, The pursuit of social rank is a recurrent and pervasive challenge faced by individuals in all human societies. Yet, the precise means through which individuals compete for social standing remains unclear. In 2 studies, we investigated the impact of 2 fundamental strategies-Dominance (the use of force and intimidation to induce fear) and Prestige (the sharing of expertise or know-how to gain respect)-on the attainment of social rank, which we conceptualized as the acquisition of (a) perceived influence over others (Study 1), (b) actual influence over others' behaviors (Study 1), and (c) others' visual attention (Study 2). Study 1 examined the process of hierarchy formation among a group of previously unacquainted individuals, who provided round-robin judgments of each other after completing a group task. Results indicated that the adoption of either a Dominance or Prestige strategy promoted perceptions of greater influence, by both group members and outside observers, and higher levels of actual influence, based on a behavioral measure. These effects were not driven by popularity; in fact, those who adopted a Prestige strategy were viewed as likable, whereas those who adopted a Dominance strategy were not well liked. In Study 2, participants viewed brief video clips of group interactions from Study 1 while their gaze was monitored with an eye tracker. Dominant and Prestigious targets each received greater visual attention than targets low on either dimension. Together, these findings demonstrate that Dominance and Prestige are distinct yet viable strategies for ascending the social hierarchy, consistent with evolutionary theory. |
Dana L. Chesney; Nicole M. McNeil; James R. Brockmole; Ken Kelley An eye for relations: Eye-tracking indicates long-term negative effects of operational thinking on understanding of math equivalence Journal Article In: Memory & Cognition, vol. 41, no. 7, pp. 1079–1095, 2013. @article{Chesney2013, Prior knowledge in the domain of mathematics can sometimes interfere with learning and performance in that domain. One of the best examples of this phenomenon is in students' difficulties solving equations with operations on both sides of the equal sign. Elementary school children in the U.S. typically acquire incorrect, operational schemata rather than correct, relational schemata for interpreting equations. Researchers have argued that these operational schemata are never unlearned and can continue to affect performance for years to come, even after relational schemata are learned. In the present study, we investigated whether and how operational schemata negatively affect undergraduates' performance on equations. We monitored the eye movements of 64 undergraduate students while they solved a set of equations that are typically used to assess children's adherence to operational schemata (e.g., 3 + 4 + 5 = 3 + __). Participants did not perform at ceiling on these equations, particularly when under time pressure. Converging evidence from performance and eye movements showed that operational schemata are sometimes activated instead of relational schemata. Eye movement patterns reflective of the activation of relational schemata were specifically lacking when participants solved equations by adding up all the numbers or adding the numbers before the equal sign, but not when they used other types of incorrect strategies. These findings demonstrate that the negative effects of acquiring operational schemata extend far beyond elementary school. |
Kimberly S. Chiew; Todd S. Braver Temporal dynamics of motivation-cognitive control interactions revealed by high-resolution pupillometry Journal Article In: Frontiers in Psychology, vol. 4, pp. 15, 2013. @article{Chiew2013, Motivational manipulations, such as the presence of performance-contingent reward incentives, can have substantial influences on cognitive control. Previous evidence suggests that reward incentives may enhance cognitive performance specifically through increased preparatory, or proactive, control processes. The present study examined reward influences on cognitive control dynamics in the AX-Continuous Performance Task (AX-CPT), using high-resolution pupillometry. In the AX-CPT, contextual cues must be actively maintained over a delay in order to appropriately respond to ambiguous target probes. A key feature of the task is that it permits dissociable characterization of preparatory, proactive control processes (i.e., utilization of context) and reactive control processes (i.e., target-evoked interference resolution). Task performance profiles suggested that reward incentives enhanced proactive control (context utilization). Critically, pupil dilation was also increased on reward incentive trials during context maintenance periods, suggesting trial-specific shifts in proactive control, particularly when context cues indicated the need to overcome the dominant target response bias. Reward incentives had both transient (i.e., trial-by-trial) and sustained (i.e., block-based) effects on pupil dilation, which may reflect distinct underlying processes. The transient pupillary effects were present even when comparing against trials matched in task performance, suggesting a unique motivational influence of reward incentives. These results suggest that pupillometry may be a useful technique for investigating reward motivational signals and their dynamic influence on cognitive control. |
Wonil Choi; Peter C. Gordon Coordination of word recognition and oculomotor control during reading: The role of implicit lexical decisions Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 1032–1046, 2013. @article{Choi2013, The coordination of word-recognition and oculomotor processes during reading was evaluated in eye-tracking experiments that examined how word skipping, where a word is not fixated during first-pass reading, is affected by the lexical status of a letter string in the parafovea and ease of recognizing that string. Ease of lexical recognition was manipulated through target-word frequency (Experiment 1) and through repetition priming between prime-target pairs embedded in a sentence (Experiment 2). Using the gaze-contingent boundary technique the target word appeared in the parafovea either with full preview or with transposed-letter (TL) preview. The TL preview strings were nonwords in Experiment 1 (e.g., bilnk created from the target blink), but were words in Experiment 2 (e.g., sacred created from the target scared). Experiment 1 showed greater skipping for high-frequency than low-frequency target words in the full preview condition, but not in the TL preview (nonword) condition. Experiment 2 showed greater skipping for target words that repeated an earlier prime word than for those that did not, with this repetition priming occurring both with preview of the full target and with preview of the target's TL neighbor word. However, time to progress from the word after the target was greater following skips of the TL preview word, whose meaning was anomalous in the sentence context, than following skips of the full preview word whose meaning fit sensibly into the sentence context. Together, the results support the idea that coordination between word-recognition and oculomotor processes occurs at the level of implicit lexical decisions. |
Oleg V. Komogortsev; Corey D. Holland; Sampath Jayarathna; Alex Karpov 2D linear oculomotor plant mathematical model: Verification and biometric applications Journal Article In: ACM Transactions on Applied Perception, vol. 10, no. 4, pp. 1–18, 2013. @article{Komogortsev2013a, This article assesses the ability of a two-dimensional (2D) linear homeomorphic oculomotor plant mathematical model to simulate normal human saccades on a 2D plane. The proposed model is driven by a simplified pulse-step neuronal control signal and makes use of linear simplifications to account for the unique characteristics of the eye globe and the extraocular muscles responsible for horizontal and vertical eye movement. The linear nature of the model sacrifices some anatomical accuracy for computational speed and analytic tractability, and may be implemented as two one-dimensional models for parallel signal simulation. Practical applications of the model might include improved noise reduction and signal recovery facilities for eye tracking systems, additional metrics from which to determine user effort during usability testing, and enhanced security in biometric identification systems. The results indicate that the model is capable of producing oblique saccades with properties resembling those of normal human saccades and is capable of deriving muscle constants that are viable as biometric indicators. Therefore, we conclude that the sacrifice in anatomical accuracy of the model produces negligible effects on the accuracy of saccadic simulation on a 2D plane and may provide a usable model for applications in computer science, human-computer interaction, and related fields. |
Oleg V. Komogortsev; Alex Karpov Automated classification and scoring of smooth pursuit eye movements in the presence of fixations and saccades Journal Article In: Behavior Research Methods, vol. 45, pp. 203–215, 2013. @article{Komogortsev2013, Ternary eye movement classification, which separates fixations, saccades, and smooth pursuit from the raw eye positional data, is extremely challenging. This article develops new and modifies existing eye-tracking algorithms for the purpose of conducting meaningful ternary classification. To this end, a set of qualitative and quantitative behavior scores is introduced to facilitate the assessment of classification performance and to provide means for automated threshold selection. Experimental evaluation of the proposed methods is conducted using eye movement records obtained from 11 subjects at 1000 Hz in response to a step-ramp stimulus eliciting fixations, saccades, and smooth pursuits. Results indicate that a simple hybrid method that incorporates velocity and dispersion thresholding yields robust classification performance. It is concluded that behavior scores are able to aid automated threshold selection for the algorithms capable of successful classification. |
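The hybrid method named in this abstract combines a velocity threshold (to isolate saccade samples) with a dispersion threshold (to separate fixations from smooth pursuit). Below is a minimal Python sketch of that general idea; the function name, thresholds, and window size are illustrative assumptions, not the parameters reported in the paper.

```python
import numpy as np

def classify_ternary(x, y, fs=1000.0,
                     saccade_vel_threshold=70.0,   # deg/s, assumed value
                     dispersion_threshold=0.5,     # deg, assumed value
                     window_ms=100.0):
    """Toy ternary classifier: label each gaze sample as saccade, fixation, or pursuit.

    x, y : numpy arrays of gaze position in degrees, sampled at fs Hz.
    Step 1 flags saccade samples by velocity; step 2 labels the remaining
    samples as fixation or pursuit by dispersion within fixed windows.
    """
    dt = 1.0 / fs
    velocity = np.hypot(np.gradient(x, dt), np.gradient(y, dt))  # deg/s
    labels = np.full(len(x), "pursuit", dtype=object)
    labels[velocity > saccade_vel_threshold] = "saccade"

    win = max(1, int(window_ms / 1000.0 * fs))
    for start in range(0, len(x), win):
        idx = slice(start, start + win)
        if np.any(labels[idx] == "saccade"):
            continue  # leave windows containing saccade samples untouched
        dispersion = (x[idx].max() - x[idx].min()) + (y[idx].max() - y[idx].min())
        if dispersion < dispersion_threshold:
            labels[idx] = "fixation"
    return labels
```

In practice, the behavior scores proposed in the article would be used to tune the two thresholds automatically rather than fixing them by hand as in this sketch.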
Arnout W. Koornneef; Ted J. M. Sanders Establishing coherence relations in discourse: The influence of implicit causality and connectives on pronoun resolution Journal Article In: Language and Cognitive Processes, vol. 28, no. 8, pp. 1169–1206, 2013. @article{Koornneef2013, Many studies have shown that readers and listeners recruit verb-based implicit causality information rapidly in the service of pronoun resolution. However, since most of these studies focused on constructions in which because connected the two critical clauses, it is unclear to what extent implicit causality information affects the processing of pronouns embedded in other types of coherence relations. In an eye-tracking and completion study we addressed this void by varying whether because, but, and and joined a primary clause containing the implicit causality verb, with a secondary clause containing a critical gender-marked pronoun. The results showed that the claims made for implicit causality hold if the connective because is present (i.e., a reading time delay following a pronoun that is inconsistent with the implicit causality bias of the verb), but do not generalise to other connectives like but and and. This shows that the strength and persistence of implicit causality as a pronoun resolution cue depends on the coherence relation in which the verb, the antecedent and the pronoun appear. |
Maciej Kosilo; Sophie M. Wuerger; Matt Craddock; Ben J. Jennings; Amelia R. Hunt; Jasna Martinovic Low-level and high-level modulations of fixational saccades and high frequency oscillatory brain activity in a visual object classification task Journal Article In: Frontiers in Psychology, vol. 4, pp. 948, 2013. @article{Kosilo2013, Until recently, induced gamma-band activity (GBA) was considered a neural marker of cortical object representation. However, induced GBA in the electroencephalogram (EEG) is susceptible to artifacts caused by miniature fixational saccades. Recent studies have demonstrated that fixational saccades also reflect high-level representational processes. Do high-level as opposed to low-level factors influence fixational saccades? What is the effect of these factors on artifact-free GBA? To investigate this, we conducted separate eye tracking and EEG experiments using identical designs. Participants classified line drawings as objects or non-objects. To introduce low-level differences, contours were defined along different directions in cardinal color space: S-cone-isolating, intermediate isoluminant, or a full-color stimulus, the latter containing an additional achromatic component. Prior to the classification task, object discrimination thresholds were measured and stimuli were scaled to matching suprathreshold levels for each participant. In both experiments, behavioral performance was best for full-color stimuli and worst for S-cone isolating stimuli. Saccade rates 200-700 ms after stimulus onset were modulated independently by low- and high-level factors, being higher for full-color stimuli than for S-cone isolating stimuli and higher for objects. Low-amplitude evoked GBA and total GBA were observed in very few conditions, showing that paradigms with isoluminant stimuli may not be ideal for eliciting such responses. We conclude that cortical loops involved in the processing of objects are preferentially excited by stimuli that contain achromatic information. Their activation can lead to relatively early exploratory eye movements even for foveally-presented stimuli. |
Anastasia Kourkoulou; Gustav Kuhn; John M. Findlay; Susan R. Leekam Eye movement difficulties in autism spectrum disorder: Implications for implicit contextual learning Journal Article In: Autism Research, vol. 6, no. 3, pp. 177–189, 2013. @article{Kourkoulou2013, It is widely accepted that we use contextual information to guide our gaze when searching for an object. People with autism spectrum disorder (ASD) also utilise contextual information in this way; yet, their visual search in tasks of this kind is much slower compared with people without ASD. The aim of the current study was to explore the reason for this by measuring eye movements. Eye movement analyses revealed that the slowing of visual search was not caused by making a greater number of fixations. Instead, participants in the ASD group were slower to launch their first saccade, and the duration of their fixations was longer. These results indicate that slowed search in ASD in contextual learning tasks is not due to differences in the spatial allocation of attention but to temporal delays in the initial reflexive orienting of attention and in subsequent focused attention. These results have broader implications for understanding the unusual attention profile of individuals with ASD and how their attention may be shaped by learning. |
Hamutal Kreiner; Simon Garrod; Patrick Sturt Number agreement in sentence comprehension: The relationship between grammatical and conceptual factors Journal Article In: Language and Cognitive Processes, vol. 28, no. 6, pp. 829–874, 2013. @article{Kreiner2013, Studies in theoretical linguistics argue that subject-verb agreement is more sensitive to grammatical number, while pronoun-antecedent agreement is more sensitive to conceptual number. This claim is robustly supported by speech production research, but few studies have examined this issue in comprehension. We investigated this dissociation between conceptual and grammatical number agreement in three eye-tracking reading experiments, using collective nouns like 'group', which can be notionally interpreted as either singular or plural. Experiment 1 indicated that pronoun-antecedent agreement is conceptually driven; Experiment 2 indicated that subject-verb agreement is morpho-syntactically driven. Experiment 3 indicated that the morpho-grammatical processes that control the initial processing of subject-verb agreement do not bias later semantic processing of pronoun-antecedent number agreement, even when the anaphor and the verb occur in the same sentence, and the same collective noun is both the subject of the verb and antecedent of the pronoun. In view of these findings we propose that the processes that control number agreement in comprehension show a dissociation between semantic and morpho-syntactic processing that is similar to the dissociation demonstrated in speech-production. We discuss various theoretical frameworks that can account for this similarity. |
Mariska E. Kret; Karin Roelofs; Jeroen J. Stekelenburg; Beatrice de Gelder Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 810, 2013. @article{Kret2013, We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to face-body-scene combinations. Participants freely viewed emotionally congruent and incongruent face-body and body-scene pairs whilst eye fixations, pupil-size, and electromyography (EMG) responses were recorded. Participants attended more to angry and fearful vs. happy or neutral cues, independent of the source and relatively independently of whether the face-body and body-scene combinations were emotionally congruent or not. Moreover, angry faces combined with angry bodies and angry bodies viewed in aggressive social scenes elicited the greatest pupil dilation. Participants' face expressions matched the valence of the stimuli, but when face-body compounds were shown, the observed facial expression influenced EMG responses more than the posture did. Together, our results show that the perception of emotional signals from faces, bodies and scenes depends on the natural context, but when threatening cues are presented, these threats attract attention, induce arousal, and evoke congruent facial reactions. |
Mariska E. Kret; Jeroen J. Stekelenburg; Karin Roelofs; Beatrice de Gelder Perception of face and body expressions using electromyography, pupillometry and gaze measures Journal Article In: Frontiers in Psychology, vol. 4, pp. 28, 2013. @article{Kret2013a, Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions were measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, vice versa as well. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As we predicted and found, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that the congruency between the emotional signals from the face and body facilitates the recognition of the emotion. |
Franziska Kretzschmar; Dominique Pleimling; Jana Hosemann; Stephan Füssel; Ina Bornkessel-Schlesewsky; Matthias Schlesewsky Subjective impressions do not mirror online reading effort: Concurrent EEG-eyetracking evidence from the reading of books and digital media Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e56178, 2013. @article{Kretzschmar2013, In the rapidly changing circumstances of our increasingly digital world, reading is also becoming an increasingly digital experience: electronic books (e-books) are now outselling print books in the United States and the United Kingdom. Nevertheless, many readers still view e-books as less readable than print books. The present study thus used combined EEG and eyetracking measures in order to test whether reading from digital media requires higher cognitive effort than reading conventional books. Young and elderly adults read short texts on three different reading devices (a paper page, an e-reader, and a tablet computer) and answered comprehension questions about them while their eye movements and EEG were recorded. The results of a debriefing questionnaire replicated previous findings in that participants overwhelmingly chose the paper page over the two electronic devices as their preferred reading medium. Online measures, by contrast, showed shorter mean fixation durations and lower EEG theta band voltage density (known to covary with memory encoding and retrieval) for the older adults when reading from a tablet computer in comparison to the other two devices. Young adults showed comparable fixation durations and theta activity for all three devices. Comprehension accuracy did not differ across the three media for either group. We argue that these results can be explained in terms of the better text discriminability (higher contrast) produced by the backlit display of the tablet computer. Contrast sensitivity decreases with age and degraded contrast conditions lead to longer reading times, thus supporting the conclusion that older readers may benefit particularly from the enhanced contrast of the tablet. Our findings thus indicate that people's subjective evaluation of digital reading media must be dissociated from the cognitive and neural effort expended in online information processing while reading from such devices. |
Yu-Cin Jian; Ming-Lei Chen; Hwa-Wei Ko Context effects in processing of Chinese academic words: An eye-tracking investigation Journal Article In: Reading Research Quarterly, vol. 48, no. 4, pp. 403–413, 2013. @article{Jian2013, This study investigated context effects in the online processing of Chinese academic words during text reading. Undergraduate participants were asked to read Chinese texts that were familiar or unfamiliar (containing physics terminology) to them. Physics texts were selected first, and then we replaced the physics terminology with familiar words; other common words remained the same in both text versions. Our results indicate that readers experienced longer rereading times and total fixation durations for the same common words in the physics texts than in the corresponding familiar texts. Shorter gaze durations were observed for the replaced words than for the physics terminology; however, the duration of participants' first fixations on these two word types did not differ. Furthermore, although the participants followed similar reading paths after encountering the target words (physics terminology or replaced words), their processing times for the sentences containing them were very different. They reread the physics terminology more often and spent more time reading the sentences containing it, searching for more information to aid comprehension. This study showed that adult readers seemed to successfully access each Chinese character's meaning but initially failed to access the meaning of the physics terminology. This could be attributable to the nature of the formation of Chinese words; however, the use of contextual information to comprehend unfamiliar words is a universal phenomenon. |
Li Jingling; Da-Lun Tang; Chia-huei Tseng Salient collinear grouping diminishes local salience in visual search: An eye movement study Journal Article In: Journal of Vision, vol. 13, no. 12, pp. 1–10, 2013. @article{Jingling2013, Our eyes and attention are easily attracted to salient items in search displays. When a target is spatially overlapped with a salient distractor (overlapping target), it is usually detected more easily than when it is not (nonoverlapping target). Jingling and Tseng (2013), however, found that a salient distractor impaired visual search when the distractor was composed of more than nine bars collinearly aligned with each other. In this study, we examined whether this search impairment is due to a reduction of salience on overlapping targets. We used short-latency saccades as an index of perceptual salience. Results showed that a long collinear distractor decreases the perceptual salience of local overlapping targets in comparison to nonoverlapping targets, reflected in a smaller proportion of short-latency saccades. Meanwhile, a salient noncollinear distractor increases the salience of overlapping targets. Our results led us to conclude that a long collinear distractor diminishes the perceptual salience of the target, creating the counterintuitive situation in which a target in a salient region becomes less salient. We discuss the possible causes for our findings, including crowding, the global precedence effect, and the filling-in of a collinear contour. |
Jiri Lukavsky Eye movements in repeated multiple object tracking Journal Article In: Journal of Vision, vol. 13, pp. 1–16, 2013. @article{JiriLukavsky2013, Contrary to other tasks (free viewing, recognition, visual search), participants often fail to recognize repetition of trials in multiple object tracking (MOT). This study examines the intra- and interindividual variability of eye movements in repeated MOT trials along with the adherence of eye movements to the previously described strategies. I collected eye movement data from 20 subjects during 64 MOT trials at slow speed (5°/s). Half of the trials were repeated four times, and the remaining trials were unique. I measured the variability of eye-movement patterns during repeated trials using normalized scanpath saliency extended to the temporal domain. People tended to make similar eye movements during repeated presentations (with no or only a vague feeling of repetition), and the interindividual similarity remained at the same level over time. Several strategies (centroid strategy and its variants) were compared with the data, and they accounted for 48.8% to 54.3% of eye-movement variability, which was less than the variability explained by other people's eye movements (68.6%). The results show that the observed intra- and interindividual similarity of eye movements is only partly explained by the current models. |
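Normalized scanpath saliency (NSS), the similarity measure this study extends into the temporal domain, scores one observer's fixations against a smoothed, z-normalized fixation map built from reference data. The Python sketch below illustrates the basic single-time-bin computation; the image size, smoothing width, and function name are illustrative assumptions rather than the study's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nss_score(fix_ref, fix_test, shape=(768, 1024), sigma=30.0):
    """NSS of test fixations against a map built from reference fixations.

    fix_ref, fix_test : arrays of (row, col) fixation coordinates in pixels.
    sigma             : Gaussian smoothing width in pixels (assumed value).
    """
    fixation_map = np.zeros(shape)
    rows, cols = np.asarray(fix_ref, dtype=int).T
    np.add.at(fixation_map, (rows, cols), 1.0)           # accumulate fixation counts
    fixation_map = gaussian_filter(fixation_map, sigma)  # smooth into a density map
    fixation_map = (fixation_map - fixation_map.mean()) / fixation_map.std()  # z-normalize
    r, c = np.asarray(fix_test, dtype=int).T
    return fixation_map[r, c].mean()  # mean z-score at the test fixation locations
```

Extending this to the temporal domain, as described above, roughly amounts to computing such a score separately for successive time bins of a trial and averaging the results across bins.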
Beth P. Johnson; Nicole J. Rinehart; Owen B. White; Lynette Millist; Joanne Fielding Saccade adaptation in autism and Asperger's disorder Journal Article In: Neuroscience, vol. 243, pp. 76–87, 2013. @article{Johnson2013, Autism and Asperger's disorder (AD) are neurodevelopmental disorders primarily characterized by deficits in social interaction and communication; however, motor coordination deficits are increasingly recognized as a prevalent feature of these conditions. Although it has been proposed that children with autism and AD may have difficulty utilizing visual feedback during motor learning tasks, this has not been directly examined. Significantly, changes within the cerebellum, which is implicated in motor learning, are known to be more pronounced in autism compared to AD. We used the classic double-step saccade adaptation paradigm, known to depend on cerebellar integrity, to investigate differences in motor learning and the use of visual feedback in children aged 9–14 years with high-functioning autism (HFA; IQ > 80; n = 10) and AD (n = 13). Performance was compared to age- and IQ-matched typically developing children (n = 12). Both HFA and AD groups successfully adapted the gain of their saccades in response to perceived visual error, however the time course for adaptation was prolonged in the HFA group. While a shift in saccade dynamics typically occurs during adaptation, we revealed aberrant changes in both HFA and AD groups. This study contributes to a growing body of evidence centrally implicating the cerebellum in ocular motor dysfunction in autism. Specifically, these findings collectively imply functional impairment of the cerebellar network and its inflow and outflow tracts that underpin saccade adaptation, with greater disturbance in HFA compared to AD. |
Manon W. Jones; Jane Ashby; Holly P. Branigan Dyslexia and fluency: Parafoveal and foveal influences on rapid automatized naming Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 2, pp. 554–567, 2013. @article{Jones2013, The ability to coordinate serial processing of multiple items is crucial for fluent reading but is known to be impaired in dyslexia. To investigate this impairment, we manipulated the orthographic and phonological similarity of adjacent letters online as dyslexic and nondyslexic readers named letters in a serial naming (RAN) task. Eye movements and voice onsets were recorded. Letter arrays contained target item pairs in which the second letter was orthographically or phonologically similar to the first letter when viewed either parafoveally (Experiment 1a) or foveally (Experiment 1b). Relative to normal readers, dyslexic readers were more affected by orthographic confusability in Experiment 1a and phonological confusability in Experiment 1b. Normal readers were slower to process orthographically similar letters in Experiment 1b. Findings indicate that the phonological and orthographic processing problems of dyslexic readers manifest differently during parafoveal and foveal processing, with each contributing to slower RAN performance and impaired reading fluency. |
Donatas Jonikaitis; Martin Szinte; Martin Rolfs; Patrick Cavanagh Allocation of attention across saccades Journal Article In: Journal of Neurophysiology, vol. 109, no. 5, pp. 1425–1434, 2013. @article{Jonikaitis2013, Whenever the eyes move, spatial attention must keep track of the locations of targets as they shift on the retina. This study investigated transsaccadic updating of visual attention to cued targets. While observers prepared a saccade, we flashed an irrelevant, but salient, color cue in their visual periphery and measured the allocation of spatial attention before and after the saccade using a tilt discrimination task. We found that just before the saccade, attention was allocated to the cue's future retinal location, its predictively "remapped" location. Attention was sustained at the cue's location in the world across the saccade, despite the change of retinal position, whereas it decayed quickly at the retinal location of the cue after the eye landed. By extinguishing the color cue across the saccade, we further demonstrate that the visual system relies only on predictive allocation of spatial attention, as the presence of the cue after the saccade did not substantially affect attentional allocation. These behavioral results support and extend physiological evidence showing predictive activation of visual neurons when an attended stimulus will fall in their receptive field after a saccade. Our results show that tracking of spatial locations across saccades is a plausible consequence of physiological remapping. |
Donatas Jonikaitis; Jan Theeuwes Dissociating oculomotor contributions to spatial and feature-based selection Journal Article In: Journal of Neurophysiology, vol. 110, no. 7, pp. 1525–1534, 2013. @article{Jonikaitis2013a, Saccades not only deliver the high-resolution retinal image requisite for visual perception, but processing stages associated with saccade target selection affect visual perception even before the eye movement starts. These presaccadic effects are thought to arise from two visual selection mechanisms: spatial selection that enhances processing of the saccade target location and feature-based selection that enhances processing of the saccade target features. By measuring oculomotor performance and perceptual discrimination, we determined which selection mechanisms are associated with saccade preparation. We observed both feature-based and space-based selection during saccade preparation but found that feature-based selection was neither related to saccade initiation nor was it affected by simultaneously observed redistribution of spatial selection. We conclude that oculomotor selection biases visual selection only in a spatial, feature-unspecific manner. |
Timothy R. Jordan; Victoria A. McGowan; Kevin B. Paterson What's left? An eye movement study of the influence of interword spaces to the left of fixation during reading Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 3, pp. 551–557, 2013. @article{Jordan2013, In English and other alphabetic systems read from left to right, the useful information acquired during each fixational pause is generally reported to extend 14-15 character spaces to the right of each fixation, but only 3-4 character spaces to the left, and certainly no farther than the beginning of the fixated word. However, this leftward extent is remarkably small and seems inconsistent with the general bilateral symmetry of vision. Accordingly, in the present study we investigated the influence of a fundamental component of text to the left of fixation (interword spaces) using a well-established eyetracking paradigm in which invisible boundaries were set up along individual sentence displays that were then read. Each boundary corresponded to the leftmost edge of a word in a sentence, so that as the eyes crossed a boundary, interword spaces in the text to the left of that word were obscured (by inserting a letter x). The proximity of the obscured text during each fixational pause was maintained at one, two, three, or four interword spaces from the left boundary of each fixated word. Normal fixations, regressions, and progressive saccades were disrupted when the obscured text was up to three interword spaces (an average of over 12 character spaces) away from the fixated word, while four interword spaces away produced no disruption. These findings suggest that influential information from text is acquired during each fixational pause from much farther leftward than is generally realized and that this information contributes to normal reading performance. Implications of these findings for reading are discussed. |
Holly S. S. L. Joseph; Simon P. Liversedge Children's and adults' on-line processing of syntactically ambiguous sentences during reading Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e54141, 2013. @article{Joseph2013, While there has been a fair amount of research investigating children's syntactic processing during spoken language comprehension, and a wealth of research examining adults' syntactic processing during reading, as yet very little research has focused on syntactic processing during text reading in children. In two experiments, children and adults read sentences containing a temporary syntactic ambiguity while their eye movements were monitored. In Experiment 1, participants read sentences such as, ‘The boy poked the elephant with the long stick/trunk from outside the cage' in which the attachment of a prepositional phrase was manipulated. In Experiment 2, participants read sentences such as, ‘I think I'll wear the new skirt I bought tomorrow/yesterday. It's really nice' in which the attachment of an adverbial phrase was manipulated. Results showed that adults and children exhibited similar processing preferences, but that children were delayed relative to adults in their detection of initial syntactic misanalysis. It is concluded that children and adults have the same sentence-parsing mechanism in place, but that it operates with a slightly different time course. In addition, the data support the hypothesis that the visual processing system develops at a different rate than the linguistic processing system in children. |
Holly S. S. L. Joseph; Kate Nation; Simon P. Liversedge Using eye movements to investigate word frequency effects in children's sentence reading Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 207–222, 2013. @article{Joseph2013a, Although eye movements have been used widely to investigate how skilled adult readers process written language, relatively little research has used this methodology with children. This is unfortunate as, as we discuss here, eye-movement studies have significant potential to inform our understanding of children's reading development. We consider some of the empirical and theoretical issues that arise when using this methodology with children, illustrating our points with data from an experiment examining word frequency effects in 8-year-old children's sentence reading. Children showed significantly longer gaze durations to low- than high-frequency words, demonstrating that linguistic characteristics of text drive children's eye movements as they read. We discuss these findings within the broader context of how eye-movement studies can inform our understanding of children's reading, and can assist with the development of appropriately targeted interventions to support children as they learn to read. |
Hannah M. Krüger; Amelia R. Hunt Inhibition of return across eye and object movements: The role of prediction Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 3, pp. 735–744, 2013. @article{Krueger2013, Responses are slower to targets appearing in recently inspected locations, an effect known as Inhibition of Return (IOR). IOR is typically viewed as the consequence of an involuntary mechanism that prevents reinspection of previously visited locations and thereby biases attention toward novel locations during visual search. For an inhibitory tagging mechanism to serve this function effectively, it should be robust against eye movements and the movements of objects in the environment. We investigated whether the predictability of motion supports the coding of inhibitory tags in spatiotopic coordinates across eye movements and object-based coordinates across object motion. IOR was observed in both retinotopic and spatiotopic coordinates across eye movements, regardless of the predictability of the eye movement direction. In a matching experiment, but with predictable or unpredictable object motion instead of eye movements, IOR was reduced in magnitude by object motion and was not observed in object-based coordinates, even when the motion was predictable. Together the results suggest inhibitory tags can track objects as they move across the retina, but only when this motion is self-generated. We conclude that efference copy, not prediction, plays a key role in maintaining inhibition on previously attended objects across saccades. |
Victor Kuperman; Julie A. Van Dyke Reassessing word frequency as a determinant of word recognition for skilled and unskilled readers Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 3, pp. 802–823, 2013. @article{Kuperman2013, The importance of vocabulary in reading comprehension emphasizes the need to accurately assess an individual's familiarity with words. The present article highlights problems with using occurrence counts in corpora as an index of word familiarity, especially when studying individuals varying in reading experience. We demonstrate via computational simulations and norming studies that corpus-based word frequencies systematically overestimate strengths of word representations, especially in the low-frequency range and in smaller-size vocabularies. Experience-driven differences in word familiarity prove to be faithfully captured by the subjective frequency ratings collected from responders at different experience levels. When matched on those levels, this lexical measure explains more variance than corpus-based frequencies in eye-movement and lexical decision latencies to English words, attested in populations with varied reading experience and skill. Furthermore, the use of subjective frequencies removes the widely reported (corpus) Frequency × Skill interaction, showing that more skilled readers are equally faster in processing any word than the less skilled readers, not disproportionally faster in processing lower frequency words. This finding challenges the view that the more skilled an individual is in generic mechanisms of word processing, the less reliant he or she will be on the actual lexical characteristics of that word. |
Miyoung Kwon; Anirvan S. Nandy; Bosco S. Tjan Rapid and persistent adaptability of human oculomotor control in response to simulated central vision loss Journal Article In: Current Biology, vol. 23, no. 17, pp. 1663–1669, 2013. @article{Kwon2013, The central region of the human retina, the fovea, provides high-acuity vision. The oculomotor system continually brings targets of interest into the fovea via ballistic eye movements (saccades). Thus, the fovea serves both as the locus for fixations and as the oculomotor reference for saccades. This highly automated process of foveation is functionally critical to vision and is observed from infancy [1, 2]. How would the oculomotor system adjust to a loss of foveal vision (central scotoma)? Clinical observations of patients with central vision loss [3, 4] suggest a lengthy adjustment period [5], but the nature and dynamics of this adjustment remain unclear. Here, we demonstrate that the oculomotor system can spontaneously and rapidly adopt a peripheral locus for fixation and can rereference saccades to this locus in normally sighted individuals whose central vision is blocked by an artificial scotoma. Once developed, the fixation locus is retained over weeks in the absence of the simulated scotoma. Our data reveal a basic guiding principle of the oculomotor system that prefers control simplicity over optimality. We demonstrate the importance of a visible scotoma on the speed of the adjustment and suggest a possible rehabilitation regimen for patients with central vision loss. |
Hannah E. Kirk; Darren R. Hocking; Deborah M. Riby; Kim M. Cornish Linking social behaviour and anxiety to attention to emotional faces in Williams syndrome Journal Article In: Research in Developmental Disabilities, vol. 34, no. 12, pp. 4608–4616, 2013. @article{Kirk2013, The neurodevelopmental disorder Williams syndrome (WS) has been associated with a social phenotype of hypersociability, non-social anxiety and an unusual attraction to faces. The current study uses eye tracking to explore attention allocation to emotionally expressive faces. Eye gaze and behavioural measures of anxiety and social reciprocity were investigated in adolescents and adults with WS when compared to typically developing individuals of comparable verbal mental age (VMA) and chronological age (CA). Results showed significant associations between high levels of behavioural anxiety and attention allocation away from the eye regions of threatening facial expressions in WS. The results challenge early claims of a unique attraction to the eyes in WS and suggest that individual differences in anxiety may mediate the allocation of attention to faces in WS. |
Julie A. Kirkby; H. I. Blythe; Denis Drieghe; V. Benson; Simon P. Liversedge Investigating eye movement acquisition and analysis technologies as a causal factor in differential prevalence of crossed and uncrossed fixation disparity during reading and dot scanning Journal Article In: Behavior Research Methods, vol. 45, no. 3, pp. 664–678, 2013. @article{Kirkby2013, Previous studies examining binocular coordination during reading have reported conflicting results in terms of the nature of disparity (e.g., Kliegl, Nuthmann, & Engbert, Journal of Experimental Psychology: General, 135, 12–35, 2006; Liversedge, White, Findlay, & Rayner, Vision Research, 46, 2363–2374, 2006). One potential cause of this inconsistency is differences in acquisition devices and associated analysis technologies. We tested this by directly comparing binocular eye movement recordings made using the SR Research EyeLink 1000 and the Fourward Technologies Inc. DPI binocular eye-tracking systems. Participants read sentences or scanned horizontal rows of dot strings; for each participant, half the data were recorded with the EyeLink, and the other half with the DPIs. The viewing conditions in both testing laboratories were set to be very similar. Monocular calibrations were used. The majority of fixations recorded using either system were aligned, although data from the EyeLink system showed greater disparity magnitudes. Critically, for unaligned fixations, the data from both systems showed a majority of uncrossed fixations. These results suggest that variability in previous reports of binocular fixation alignment is attributable to the specific viewing conditions associated with a particular experiment (variables such as luminance and viewing distance), rather than to acquisition and analysis software and hardware. |
Johannes Klackl; Michaela Pfundmair; Dmitrij Agroskin; Eva Jonas Who is to blame? Oxytocin promotes nonpersonalistic attributions in response to a trust betrayal Journal Article In: Biological Psychology, vol. 92, pp. 387–394, 2013. @article{Klackl2013, Recent research revealed that the neuropeptide Oxytocin (OT) increases and maintains trustful behavior, even towards interaction partners that have proven to be untrustworthy. However, the cognitive mechanisms behind this effect are unclear. In the present paper, we propose that OT might boost trust through the link between angry rumination and the use of nonpersonalistic and personalistic attributions. Nonpersonalistic attributions put the blame for the betrayal on the perpetrator's situation, whereas personalistic attributions blame his dispositions for the event. We predict that OT changes attribution processes in favor of nonpersonalistic ones and thereby boosts subsequent trust. Participants played a classic trust game in which the opponent systematically betrayed their trust. As predicted, OT strengthened the relationship between angry rumination about the event and nonpersonalistic attribution of the opponents' behavior and weakened the link between angry rumination and personalistic attribution. Critically, nonpersonalistic attribution also mediated the interactive effect of OT and angry rumination on how strongly investments were reduced in the remaining rounds of the trust game. In summary, the present findings suggest that one underlying cognitive mechanism behind OT-induced trust might relate to how negative emotions evoked by a breach of trust influence the subsequent attributional analysis: OT seems to augment trust by fostering the interpretation of untrustworthy behavior as caused by non-personal factors. |
Jeffrey T. Klein; Michael L. Platt Social information signaling by neurons in primate striatum Journal Article In: Current Biology, vol. 23, pp. 691–696, 2013. @article{Klein2013, Social decisions depend on reliable information about others. Consequently, social primates are motivated to acquire information about the identity, social status, and reproductive quality of others [1]. Neurophysiological [2] and neuroimaging [3, 4] studies implicate the striatum in the motivational control of behavior. Neuroimaging studies specifically implicate the ventromedial striatum in signaling motivational aspects of social interaction [5]. Despite this evidence, precisely how striatal neurons encode social information remains unknown. Therefore, we probed the activity of single striatal neurons in monkeys choosing between visual social information at the potential expense of fluid reward. We show for the first time that a population of neurons located primarily in medial striatum selectively signals social information. Surprisingly, representation of social information was unrelated to simultaneously expressed social preferences. A largely nonoverlapping population of neurons that was not restricted to the medial striatum signaled information about fluid reward. Our findings demonstrate that information about social context and nutritive reward are maintained largely independently in striatum, even when both influence decisions to execute a single action. |
Reinhold Kliegl; Sven Hohenstein; Ming Yan; Scott A. McDonald How preview space/time translates into preview cost/benefit for fixation durations during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 581–600, 2013. @article{Kliegl2013, Eye-movement control during reading depends on foveal and parafoveal information. If the parafoveal preview of the next word is suppressed, reading is less efficient. A linear mixed model (LMM) reanalysis of McDonald (2006) confirmed his observation that preview benefit may be limited to parafoveal words that have been selected as the saccade target. Going beyond the original analyses, in the same LMM, we examined how the preview effect (i.e., the difference in single-fixation duration, SFD, between random-letter and identical preview) depends on the gaze duration on the pretarget word and on the amplitude of the saccade moving the eye onto the target word. There were two key results: (a) The shorter the saccade amplitude (i.e., the larger preview space), the shorter a subsequent SFD with an identical preview; this association was not observed with a random-letter preview. (b) However, the longer the gaze duration on the pretarget word, the longer the subsequent SFD on the target, with the difference between random-letter string and identical previews increasing with preview time. A third pattern (increasing cost of a random-letter string in the parafovea associated with shorter saccade amplitudes) was observed for target gaze durations. Thus, LMMs revealed that preview effects, which are typically summarized under "preview benefit", are a complex mixture of preview cost and preview benefit and vary with preview space and preview time. The consequence for reading is that parafoveal preview may not only facilitate, but also interfere with lexical access. |
Lisa Kloft; Benedikt Reuter; Anja Riesel; Norbert Kathmann Impaired volitional saccade control: First evidence for a new candidate endophenotype in obsessive-compulsive disorder Journal Article In: European Archives of Psychiatry and Clinical Neuroscience, vol. 263, pp. 215–222, 2013. @article{Kloft2013, Recent research suggests that patients with obsessive-compulsive disorder (OCD) have deficits in the volitional control of saccades. Specific evidence comes from increased latencies of saccadic eye movements when they were volitionally executed but not when they were visually guided. The present study sought to test whether this deviance represents a cognitive endophenotype. To this end, first-degree relatives of OCD patients as genetic risk carriers were compared with OCD patients and healthy controls without a family history of OCD. Furthermore, as volitional response generation comprises selection and initiation of the required response, the study also sought to specify the cognitive mechanisms underlying impaired volitional response generation. Twenty-two unaffected first-degree relatives of OCD patients, 22 unmedicated OCD patients, and 22 healthy comparison subjects performed two types of volitional saccade tasks measuring response selection or only response initiation, respectively. Visually guided saccades were used as a control condition. Our results showed that unaffected first-degree relatives and OCD patients were significantly slowed compared to healthy comparison subjects in volitional response selection. Patients and relatives did not differ from each other. There was no group difference in the visually guided control condition. Taken together, the study provides first evidence that dysfunctional volitional response selection is a candidate endophenotype for OCD. |
Jonas Knöll; M. Concetta Morrone; Frank Bremmer Spatio-temporal topography of saccadic overestimation of time Journal Article In: Vision Research, vol. 83, pp. 56–65, 2013. @article{Knoell2013, Rapid eye movements (saccades) induce visual misperceptions. A number of studies in recent years have investigated the spatio-temporal profiles of effects like saccadic suppression or perisaccadic mislocalization and revealed substantial functional similarities. Saccade induced chronostasis describes the subjective overestimation of stimulus duration when the stimulus onset falls within a saccade. In this study we aimed to functionally characterize saccade induced chronostasis in greater detail. Specifically we tested if chronostasis is influenced by or functionally related to saccadic suppression. In a first set of experiments, we measured the perceived duration of visual stimuli presented at different spatial positions as a function of presentation time relative to the saccade. We further compared perceived duration during saccades for isoluminant and luminant stimuli. Finally, we investigated whether or not saccade induced chronostasis is dependent on the execution of a saccade itself. We show that chronostasis occurs across the visual field with a clear spatio-temporal tuning. Furthermore, we report chronostasis during simulated saccades, indicating that spurious retinal motion induced by the saccade is a prime origin of the phenomenon. |
Sungryong Koh; Nakyeong Yoon; Si On Yoon; Alexander Pollatsek Word frequency and root-morpheme frequency effects on processing of Korean particle-suffixed words Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 1, pp. 64–72, 2013. @article{Koh2013, Two experiments investigated the roles of the frequency of the root morpheme and the frequency of the whole word for a particular type of suffixed word in Korean in which the suffixed word can be thought of as a phrase (e.g., grandson-with). In both experiments, sentence frames were constructed so that they could have one of two target words that varied on frequency characteristics in the same location in the sentence. In Experiment 1, the frequency of the root morpheme was varied with the frequency of the word controlled, and in Experiment 2, the frequency of the word was varied with the frequency of the root morpheme controlled. Word frequency had a significant effect on fixation times, whereas root morpheme frequency did not. The results were surprising as native Korean speakers view the root morpheme as the "word" (analogous to how English readers would view a noun followed by a prepositional phrase). |
Dirk Kerzel; Josef G. Schönhammer Salient stimuli capture attention and action Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 8, pp. 1633–1643, 2013. @article{Kerzel2013, Reaction times in a visual search task increase when an irrelevant but salient stimulus is presented. Recently, the hypothesis that the increase in reaction times was due to attentional capture by the salient distractor has been disputed. We devised a task in which a search display was shown after observers had initiated a reaching movement toward a touch screen. In a display of vertical bars, observers had to touch the oblique target while ignoring a salient color singleton. Because the hand was moving when the display appeared, reach trajectories revealed the current selection for action. We observed that salient but irrelevant stimuli changed the reach trajectory at the same time as the target was selected, about 270 ms after movement onset. The change in direction was corrected after another 160 ms. In a second experiment, we compared manual selection of color and orientation targets and observed that selection occurred earlier for color than for orientation targets. Salient stimuli support faster selection than do less salient stimuli. Under the assumption that attentional selection for action and perception are based on a common mechanism, our results suggest that attention is indeed captured by salient stimuli. |
Shah Khalid; Ulrich Ansorge The Simon effect of spatial words in eye movements: Comparison of vertical and horizontal effects and of eye and finger responses Journal Article In: Vision Research, vol. 86, pp. 6–14, 2013. @article{Khalid2013, Spatial stimulus location information impacts on saccades: Pro-saccades (saccades towards a stimulus location) are faster than anti-saccades (saccades away from the stimulus). This is true even when the spatial location is irrelevant for the choice of the correct response (Simon effect). The results are usually ascribed to spatial sensorimotor coupling. However, with finger responses Simon effects can be observed with irrelevant spatial word meaning, too. Here we tested whether a Simon effect of spatial word meaning in saccades could be observed for words with vertical ("above" or "below") and horizontal ("left" or "right") meanings. We asked our participants to make saccades towards one of the two saccade targets depending on the color of the centrally presented spatial word, while ignoring their spatial meaning (Experiment 1 and 2a). Results are compared to a condition in which finger responses instead of saccades were required (Experiment 2b). In addition to response latency we compared the time course of vertical and horizontal effects. We found the Simon effects due to irrelevant spatial meaning of the words in both saccades and finger responses. The time course investigations revealed different patterns for vertical and horizontal effects in saccades, indicating that distinct processes may be involved in the two types of Simon effects. |
Shah Khalid; Matthew Finkbeiner; Peter König; Ulrich Ansorge Subcortical human face processing? Evidence from masked priming Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 989–1002, 2013. @article{Khalid2013a, Face processing without awareness might depend on subcortical structures (retino-collicular projection), cortical structures, or a combination of the two. The present study was designed to tease apart these possibilities. Because the retino-collicular projection is more sensitive to low spatial frequencies, we used masked (subliminal) face prime images that were spatially low-pass filtered, or high-pass filtered. The masked primes were presented in the periphery prior to clearly visible target faces. Participants had to discriminate between male and female target faces and we recorded prime-target congruence effects–that is, the difference in discrimination speed between congruent pairs (with prime and target of the same sex) and incongruent pairs (with prime and target of different sexes). In two experiments, we consistently find that masked low-pass filtered face primes produce a congruence effect and that masked high-pass filtered face primes do not. Together our results support the assumption that the retino-collicular route which carries the low spatial frequencies also conveys sex specific features of face images contributing to subliminal face processing. |
Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saïda Bouakaz Framework for reliable, real-time facial expression recognition for low resolution images Journal Article In: Pattern Recognition Letters, vol. 34, no. 10, pp. 1159–1168, 2013. @article{Khan2013, Automatic recognition of facial expressions is a challenging problem specially for low spatial resolution facial images. It has many potential applications in human-computer interactions, social robots, deceit detection, interactive video and behavior monitoring. In this study we present a novel framework that can recognize facial expressions very efficiently and with high accuracy even for very low resolution facial images. The proposed framework is memory and time efficient as it extracts texture features in a pyramidal fashion only from the perceptual salient regions of the face. We tested the framework on different databases, which includes Cohn-Kanade (CK+) posed facial expression database, spontaneous expressions of MMI facial expression database and FG-NET facial expressions and emotions database (FEED) and obtained very good results. Moreover, our proposed framework exceeds state-of-the-art methods for expression recognition on low resolution images. |
Farhan A. Khawaja; Liu D. Liu; Christopher C. Pack Responses of MST neurons to plaid stimuli Journal Article In: Journal of Neurophysiology, vol. 110, no. 1, pp. 63–74, 2013. @article{Khawaja2013, The estimation of motion information from retinal input is a fundamental function of the primate dorsal visual pathway. Previous work has shown that this function involves multiple cortical areas, with each area integrating information from its predecessors. Compared with neurons in the primary visual cortex (V1), neurons in the middle temporal (MT) area more faithfully represent the velocity of plaid stimuli, and the observation of this pattern selectivity has led to two-stage models in which MT neurons integrate the outputs of component-selective V1 neurons. Motion integration in these models is generally complemented by motion opponency, which refines velocity selectivity. Area MT projects to a third stage of motion processing, the medial superior temporal (MST) area, but surprisingly little is known about MST responses to plaid stimuli. Here we show that increased pattern selectivity in MST is associated with greater prevalence of the mechanisms implemented by two-stage MT models: Compared with MT neurons, MST neurons integrate motion components to a greater degree and exhibit evidence of stronger motion opponency. Moreover, when tested with more challenging unikinetic plaid stimuli, an appreciable percentage of MST neurons are pattern selective, while such selectivity is rare in MT. Surprisingly, increased motion integration is found in MST even for transparent plaid stimuli, which are not typically integrated perceptually. Thus the relationship between MST and MT is qualitatively similar to that between MT and V1, as repeated application of basic motion mechanisms leads to novel selectivities at each stage along the pathway. |
Markku Kilpeläinen; Christian N. L. Olivers; Jan Theeuwes The eyes like their targets on a stable background Journal Article In: Journal of Vision, vol. 13, no. 6, pp. 1–11, 2013. @article{Kilpelaeinen2013, In normal human visual behavior, our visual system is continuously exposed to abrupt changes in the local contrast and mean luminance in various parts of the visual field, as caused by actual changes in the environment, as well as by movements of our body, head, and eyes. Previous research has shown that both threshold and suprathreshold contrast percepts are attenuated by a co-occurring change in the mean luminance at the location of the target stimulus. In the current study, we tested the hypothesis that contrast targets presented with a co-occurring change in local mean luminance receive fewer fixations than targets presented in a region with a steady mean luminance. To that end we performed an eye-tracking experiment involving eight observers. On each trial, after a 4 s adaptation period, an observer's task was to make a saccade to one of two target gratings, presented simultaneously at 7° eccentricity, separated by 30° in polar angle. When both targets were presented with a steady mean luminance, saccades landed mostly in the area between the two targets, signifying the classic global effect. However, when one of the targets was presented with a change in luminance, the saccade distribution was biased towards the target with the steady luminance. The results show that the attenuation of contrast signals by co-occurring, ecologically typical changes in mean luminance affects fixation selection and is therefore likely to affect eye movements in natural visual behavior. |
Johann S. C. Kim; Gerhard Vossel; Matthias Gamer Effects of emotional context on memory for details: The role of attention Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e77405, 2013. @article{Kim2013, It was repeatedly demonstrated that a negative emotional context enhances memory for central details while impairing memory for peripheral information. This trade-off effect is assumed to result from attentional processes: a negative context seems to narrow attention to central information at the expense of more peripheral details, thus causing the differential effects in memory. However, this explanation has rarely been tested and previous findings were partly inconclusive. For the present experiment 13 negative and 13 neutral naturalistic, thematically driven picture stories were constructed to test the trade-off effect in an ecologically more valid setting as compared to previous studies. During an incidental encoding phase, eye movements were recorded as an index of overt attention. In a subsequent recognition phase, memory for central and peripheral details occurring in the picture stories was tested. Explicit affective ratings and autonomic responses validated the induction of emotion during encoding. Consistent with the emotional trade-off effect on memory, encoding context differentially affected recognition of central and peripheral details. However, contrary to the common assumption, the emotional trade-off effect on memory was not mediated by attentional processes. By contrast, results suggest that the relevance of attentional processing for later recognition memory depends on the centrality of information and the emotional context but not their interaction. Thus, central information was remembered well even when fixated very briefly whereas memory for peripheral information depended more on overt attention at encoding. Moreover, the influence of overt attention on memory for central and peripheral details seems to be much lower for an arousing as compared to a neutral context. |
Pilyoung Kim; Joseph Arizpe; Brooke H. Rosen; Varun Razdan; Catherine T. Haring; Sarah E. Jenkins; Christen M. Deveney; Melissa A. Brotman; R. James R. Blair; Daniel S. Pine; Chris I. Baker; Ellen Leibenluft Impaired fixation to eyes during facial emotion labelling in children with bipolar disorder or severe mood dysregulation Journal Article In: Journal of Psychiatry and Neuroscience, vol. 38, no. 6, pp. 407–416, 2013. @article{Kim2013a, Background: Children with bipolar disorder (BD) or severe mood dysregulation (SMD) show behavioural and neural deficits during facial emotion processing. In those with other psychiatric disorders, such deficits have been associated with reduced attention to eye regions while looking at faces. Methods: We examined gaze fixation patterns during a facial emotion labelling task among children with pediatric BD and SMD and among healthy controls. Participants viewed facial expressions with varying emotions (anger, fear, sadness, happiness, neutral) and emotional levels (60%, 80%, 100%) and labelled emotional expressions. Results: Our study included 22 children with BD, 28 with SMD and 22 controls. Across all facial emotions, children with BD and SMD made more labelling errors than controls. Compared with controls, children with BD spent less time looking at eyes and made fewer eye fixations across emotional expressions. Gaze patterns in children with SMD tended to fall between those of children with BD and controls, although they did not differ significantly from either of these groups on most measures. Decreased fixations to eyes correlated with lower labelling accuracy in children with BD, but not in those with SMD or in controls. Limitations: Most children with BD were medicated, which precluded our ability to evaluate medication effects on gaze patterns. Conclusion: Facial emotion labelling deficits in children with BD are associated with impaired attention to eyes. Future research should examine whether impaired attention to eyes is associated with neural dysfunction. Eye gaze deficits in children with BD during facial emotion labelling may also have treatment implications. Finally, children with SMD exhibited decreased attention to eyes to a lesser extent than those with BD, and these equivocal findings are worthy of further study. |
Evgenia Kanonidou; Irene Gottlob; Frank A. Proudlock The effect of font size on reading performance in strabismic amblyopia: An eye movement investigation Journal Article In: Investigative Ophthalmology & Visual Science, vol. 55, no. 1, pp. 451–459, 2013. @article{Kanonidou2013, PURPOSE: We investigated the effect of font size on reading speed and ocular motor performance in strabismic amblyopes during text reading under monocular and binocular viewing conditions. METHODS: Eye movements were recorded at 250 Hz using a head-mounted infrared video eye tracker in 15 strabismic amblyopes and 18 age-matched controls while silently reading paragraphs of text at font sizes equivalent to 1.0 to 0.2 logMAR acuity. Reading under monocular viewing with the amblyopic eye/nondominant eye and the nonamblyopic/dominant eye was compared to binocular viewing. Mean reading speed; number, amplitude, and direction of saccades; and fixation duration were calculated for each font size and viewing condition. RESULTS: Reading speed was significantly slower in amblyopes compared to controls for all font sizes during monocular reading with the amblyopic eye (P = 0.004), but only for smaller font sizes for reading with the nonamblyopic eye (P = 0.045) and binocularly (P = 0.038). The most significant ocular motor change was that strabismic amblyopes made more saccades per line than controls irrespective of font size and viewing conditions (P < 0.05 for all). There was no significant difference in saccadic amplitudes, and fixation duration was significantly longer in strabismic amblyopes only when reading smaller fonts with the amblyopic eye viewing. CONCLUSIONS: Ocular motor deficits exist in strabismic amblyopes during reading even when reading speeds are normal and when visual acuity is not a limiting factor; that is, when reading larger font sizes with nonamblyopic eye viewing and binocular viewing. This suggests that these abnormalities are not related to crowding. |
Benjawan Kasisopa; Ronan G. Reilly; Sudaporn Luksaneeyanawin; Denis Burnham Eye movements while reading an unspaced writing system: The case of Thai Journal Article In: Vision Research, vol. 86, pp. 71–80, 2013. @article{Kasisopa2013, Thai has an alphabetic script with a distinctive feature - it has no spaces between words. Since previous research with spaced alphabetic systems (e.g., English) has suggested that readers use spaces to guide eye movements, it is of interest to investigate what physical factors might guide Thai readers' eye movements. Here the effects of word-initial and word-final position-specific character frequency, word-boundary bigram frequency, and overall word frequency on 30 Thai adults' eye movements when reading unspaced and spaced text were investigated. Linear mixed-effects model analyses of viewing time measures (first fixation duration, single fixation duration, and gaze duration) and of landing sites were conducted. Thai readers tended to land their first fixation at or near the centre of words, just as readers of spaced texts do. A critical determinant of this was word boundary characters: higher position-specific frequency of initial and of final characters significantly facilitated landing sites closer to the word centre, while word-boundary bigram frequency appeared to behave as a proxy for initial and final position-specific character frequency. It appears, therefore, that Thai readers make use of the position-specific frequencies of word boundary characters in targeting words and directing eye movements to an optimal landing site. |
Kai Kaspar; Teresa Maria Hloucal; Jürgen Kriz; Sonja Canzler; Ricardo Ramos Gameiro; Vanessa Krapp; Peter König Emotions' impact on viewing behavior under natural conditions Journal Article In: PLoS ONE, vol. 8, no. 1, pp. e52737, 2013. @article{Kaspar2013, Human overt attention under natural conditions is guided by stimulus features as well as by higher cognitive components, such as task and emotional context. In contrast to the considerable progress regarding the former, insight into the interaction of emotions and attention is limited. Here we investigate the influence of the current emotional context on viewing behavior under natural conditions. In two eye-tracking studies participants freely viewed complex scenes embedded in sequences of emotion-laden images. The latter primes constituted specific emotional contexts for neutral target images. Viewing behavior toward target images embedded into sets of primes was affected by the current emotional context, revealing the intensity of the emotional context as a significant moderator. The primes themselves were not scanned in different ways when presented within a block (Study 1), but when presented individually, negative primes were more actively scanned than positive primes (Study 2). These divergent results suggest an interaction between emotional priming and further context factors. Additionally, in most cases primes were scanned more actively than target images. Interestingly, the mere presence of emotion-laden stimuli in a set of images of different categories slowed down viewing activity overall, but the known effect of image category was not affected. Finally, viewing behavior remained largely constant on single images as well as across the targets' post-prime positions (Study 2). We conclude that the emotional context significantly influences the exploration of complex scenes and the emotional context has to be considered in predictions of eye-movement patterns. |
Sally R. Ke; Jessica Lam; Dinesh K. Pai; Miriam Spering Directional asymmetries in human smooth pursuit eye movements Journal Article In: Investigative Ophthalmology & Visual Science, vol. 54, no. 6, pp. 4409–4421, 2013. @article{Ke2013, PURPOSE: Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. METHODS: In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. RESULTS: Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS: Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones. |
Gordian Griffiths; Arvid Herwig; Werner X. Schneider Stimulus localization interferes with stimulus recognition: Evidence from an attentional blink paradigm Journal Article In: Journal of Vision, vol. 13, no. 7, pp. 1–14, 2013. @article{Griffiths2013, Recognition of a second target (T2) can be impaired if presented within 500 ms after a first target (T1): This interference phenomenon is called the attentional blink (AB; e.g., Raymond, Shapiro, & Arnell, 1992) and can be viewed as emerging from limitations in the allocation of visual attention (VA) over time. AB tasks typically require participants to detect or identify targets based on their visual properties, i.e., pattern recognition. However, no study so far has investigated whether an AB for pattern recognition of T2 can be elicited if T1 implies a second major function of the visual system, i.e., spatial computations. Therefore, we tested in two experiments whether localization of a peripherally presented dot (T1) interferes with the identification of a trailing centrally presented letter T2. For Experiment 1, T2 performance increased with onset asynchrony of both targets in single-task (only report letter) and dual-task conditions. Besides this task-independent T2 deficit, task-dependent interference (difference between single- and dual-task conditions) was observed in Experiment 2, when T1 was followed by location distractors. Overall, our results indicate that limitations in the allocation of VA over time (i.e., an AB) can also be found if T1 requires localization while T2 requires the standard pattern recognition task. The results are interpreted on the basis of a common temporal attentional mechanism for pattern recognition and spatial computations. |
Martin Groen; Jan Noyes Establishing goals and maintaining coherence in multiparty computer-mediated communication Journal Article In: Discourse Processes, vol. 50, no. 2, pp. 85–106, 2013. @article{Groen2013, Communicating via text-only computer-mediated communication (CMC) channels is associated with a number of issues that would impair users in achieving dialogue coherence and goals. It has been suggested that humans have devised novel adaptive strategies to deal with those issues. However, it could be that humans rely on “classic” coherence devices too. In this study, we investigate whether relevancy markers, a subset of discourse markers, are relied on to assess dialogue coherence and goals. The results of two experiments showed that participants oriented systematically to the relevancy markers and that substitution of the original markers for other (dis)similar words affected eye movements and task performance. It appears that, despite the loosely connected dialogue contributions, the multiple concurrent dialogues, and the multiparty nature of text-only CMC dialogues, humans are still able to locate coherence- and goal-related information by relying on the presence of the relevancy markers. |
Michael A. Grubb; Nancy J. Minshew; David J. Heeger; Marisa Carrasco Exogenous spatial attention: Evidence for intact functioning in adults with autism spectrum disorder Journal Article In: Journal of Vision, vol. 13, no. 14, pp. 1–13, 2013. @article{Grubb2013, Deficits or atypicalities in attention have been reported in individuals with autism spectrum disorder (ASD), yet no consensus on the nature of these deficits has emerged. We conducted three experiments that paired a peripheral precue with a covert discrimination task, using protocols for which the effects of covert exogenous spatial attention on early vision have been well established in typically developing populations. Experiment 1 assessed changes in contrast sensitivity, using orientation discrimination of a contrast-defined grating; Experiment 2 evaluated the reduction of crowding in the visual periphery, using discrimination of a letter-like figure with flanking stimuli at variable distances; and Experiment 3 assessed improvements in visual search, using discrimination of the same letter-like figure with a variable number of distractor elements. In all three experiments, we found that exogenous attention modulated visual discriminability in a group of high-functioning adults with ASD and that it did so in the same way and to the same extent as in a matched control group. We found no evidence to support the hypothesis that deficits in exogenous spatial attention underlie the emergence of core ASD symptomatology. |
Katherine Guérard; Jean Saint-Aubin; Marilyne Maltais The role of verbal memory in regressions during reading Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 122–136, 2013. @article{Guerard2013, During reading, a number of eye movements are made backward, on words that have already been read. Recent evidence suggests that such eye movements, called regressions, are guided by memory. Several studies point to the role of spatial memory, but evidence for the role of verbal memory is more limited. In the present study, we examined the factors that modulate the role of verbal memory in regressions. Participants were required to make regressions on target words located in sentences displayed on one or two lines. Verbal interference was shown to affect regressions, but only when participants executed a regression on a word located in the first part of the sentence, irrespective of the number of lines on which the sentence was displayed. Experiments 2 and 3 showed that the effect of verbal interference on words located in the first part of the sentence disappeared when participants initiated the regression from the middle of the sentence. Our results suggest that verbal memory is recruited to guide regressions, but only for words read a longer time ago. |
Maria J. S. Guerreiro; Dana R. Murphy; Pascal W. M. Van Gerven Making sense of age-related distractibility: The critical role of sensory modality Journal Article In: Acta Psychologica, vol. 142, no. 2, pp. 184–194, 2013. @article{Guerreiro2013, Older adults are known to have reduced inhibitory control and therefore to be more distractible than young adults. Recently, we have proposed that sensory modality plays a crucial role in age-related distractibility. In this study, we examined age differences in vulnerability to unimodal and cross-modal visual and auditory distraction. A group of 24 younger (mean age = 21.7 years) and 22 older adults (mean age = 65.4 years) performed visual and auditory n-back tasks while ignoring visual and auditory distraction. Whereas reaction time data indicated that both young and older adults are particularly affected by unimodal distraction, accuracy data revealed that older adults, but not younger adults, are vulnerable to cross-modal visual distraction. These results support the notion that age-related distractibility is modality dependent. |
Jon Guez; Adam P. Morris; Bart Krekelberg Intrasaccadic suppression is dominated by reduced detector gain Journal Article In: Journal of Vision, vol. 13, no. 8, pp. 1–11, 2013. @article{Guez2013, Human vision requires fast eye movements (saccades). Each saccade causes a self-induced motion signal, but we are not aware of this potentially jarring visual input. Among the theorized causes of this phenomenon is a decrease in visual sensitivity before (presaccadic suppression) and during (intrasaccadic suppression) saccades. We investigated intrasaccadic suppression using a perceptual template model (PTM) relating visual detection to different signal-processing stages. One stage changes the gain on the detector's input; another increases uncertainty about the stimulus, allowing more noise into the detector; and other stages inject noise into the detector in a stimulus-dependent or -independent manner. By quantifying intrasaccadic suppression of flashed horizontal gratings at varying external noise levels, we obtained threshold-versus-noise (TVN) data, allowing us to fit the PTM. We tested if any of the PTM parameters changed significantly between the fixation and saccade models and could therefore account for intrasaccadic suppression. We found that the dominant contribution to intrasaccadic suppression was a reduction in the gain of the visual detector. We discuss how our study differs from previous ones that have pointed to uncertainty as an underlying cause of intrasaccadic suppression and how the equivalent noise approach provides a framework for comparing the disparate neural correlates of saccadic suppression. |
M. Guitart-Masip; G. R. Barnes; A. Horner; Markus Bauer; Raymond J. Dolan; E. Duzel Synchronization of medial temporal lobe and prefrontal rhythms in human decision making Journal Article In: Journal of Neuroscience, vol. 33, no. 2, pp. 442–451, 2013. @article{GuitartMasip2013, Optimal decision making requires that we integrate mnemonic information regarding previous decisions with value signals that entail likely rewards and punishments. The fact that memory and value signals appear to be coded by segregated brain regions, the hippocampus in the case of memory and sectors of prefrontal cortex in the case of value, raises the question as to how they are integrated during human decision making. Using magnetoencephalography to study healthy human participants, we show increased theta oscillations over frontal and temporal sensors during nonspatial decisions based on memories from previous trials. Using source reconstruction we found that the medial temporal lobe (MTL), in a location compatible with the anterior hippocampus, and the anterior cingulate cortex in the medial wall of the frontal lobe are the source of this increased theta power. Moreover, we observed a correlation between theta power in the MTL source and behavioral performance in decision making, supporting a role for MTL theta oscillations in decision-making performance. These MTL theta oscillations were synchronized with several prefrontal sources, including lateral superior frontal gyrus, dorsal anterior cingulate gyrus, and medial frontopolar cortex. There was no relationship between the strength of synchronization and the expected value of choices. Our results indicate that mnemonic guidance of human decision making, beyond anticipation of expected reward, is supported by hippocampal-prefrontal theta synchronization. |
Shai Gabay; Yoni Pertzov; Noga Cohen; Galia Avidan; Avishai Henik Remapping of the environment without corollary discharges: Evidence from scene-based IOR Journal Article In: Journal of Vision, vol. 13, pp. 1–10, 2013. @article{Gabay2013, Previous studies suggested that in order to perceive a stable image of the visual world despite constant eye movements, an efference copy of the oculomotor command is used to remap the representation of the environment in the brain. In two experiments, an inhibitory attentional component (inhibition of return; IOR) was used to examine whether remapping can occur also in the absence of eye movements. Participants were asked to maintain fixation while an unpredictive, attention-grabbing cue appeared and was then followed by a movement of the background image, which was artificial (random dots, Experiment 1) or composed of natural scenes (Experiment 2). The participants were then required to respond to a target stimulus that was presented either at the same location as the cue relative to fixation (retinotopic), or at a matching location relative to the background (scene based). In both experiments, an IOR effect was found in scene-based locations immediately after the movement of the background. We suggest that remapping of the inhibitory tagging, which might be a proxy for remapping of the visual scene, could be accomplished rapidly even without the use of an efference copy; the inhibitory tag seems to be anchored to the background image and to move together with it. |
Jason P. Gallivan; Craig S. Chapman; D. Adam Mclean; J. Randall Flanagan; Jody C. Culham Activity patterns in the category-selective occipitotemporal cortex predict upcoming motor actions Journal Article In: European Journal of Neuroscience, vol. 38, no. 3, pp. 2408–2424, 2013. @article{Gallivan2013a, Converging lines of evidence point to the occipitotemporal cortex (OTC) as a critical structure in visual perception. For instance, human functional magnetic resonance imaging (fMRI) has revealed a modular organisation of object-selective, face-selective, body-selective and scene-selective visual areas in the OTC, and disruptions to the processing within these regions, either in neuropsychological patients or through transcranial magnetic stimulation, can produce category-specific deficits in visual recognition. Here we show, using fMRI and pattern classification methods, that the activity in the OTC also represents how stimuli will be interacted with by the body–a level of processing more traditionally associated with the preparatory activity in sensorimotor circuits of the brain. Combining functional mapping of different OTC areas with a real object-directed delayed movement task, we found that the pre-movement spatial activity patterns across the OTC could be used to predict both the action of an upcoming hand movement (grasping vs. reaching) and the effector (left hand vs. right hand) to be used. Interestingly, we were able to extract this wide range of predictive movement information even though nearly all OTC areas showed either baseline-level or below baseline-level activity prior to action onset. Our characterisation of different OTC areas according to the features of upcoming movements that they could predict also revealed a general gradient of effector-to-action-dependent movement representations along the posterior-anterior OTC axis. These findings suggest that the ventral visual pathway, which is well known to be involved in object recognition and perceptual processing, plays a larger than previously expected role in preparing object-directed hand actions. |
Jason P. Gallivan; D. Adam McLean; J. Randall Flanagan; Jody C. Culham Where one hand meets the other: Limb-specific and action-dependent movement plans decoded from preparatory signals in single human frontoparietal brain areas Journal Article In: Journal of Neuroscience, vol. 33, no. 5, pp. 1991–2008, 2013. @article{Gallivan2013, Planning object-directed hand actions requires successful integration of the movement goal with the acting limb. Exactly where and how this sensorimotor integration occurs in the brain has been studied extensively with neurophysiological recordings in nonhuman primates, yet to date, because of limitations of non-invasive methodologies, the ability to examine the same types of planning-related signals in humans has been challenging. Here we show, using a multivoxel pattern analysis of functional MRI (fMRI) data, that the preparatory activity patterns in several frontoparietal brain regions can be used to predict both the limb used and hand action performed in an upcoming movement. Participants performed an event-related delayed movement task whereby they planned and executed grasp or reach actions with either their left or right hand toward a single target object. We found that, although the majority of frontoparietal areas represented hand actions (grasping vs reaching) for the contralateral limb, several areas additionally coded hand actions for the ipsilateral limb. Notable among these were subregions within the posterior parietal cortex (PPC), dorsal premotor cortex (PMd), ventral premotor cortex, dorsolateral prefrontal cortex, presupplementary motor area, and motor cortex, a region more traditionally implicated in contralateral movement generation. Additional analyses suggest that hand actions are represented independently of the intended limb in PPC and PMd. In addition to providing a unique mapping of limb-specific and action-dependent intention-related signals across the human cortical motor system, these findings uncover a much stronger representation of the ipsilateral limb than expected from previous fMRI findings. |
Lesya Y. Ganushchak; Andrea Krott; Steven Frisson; Antje S. Meyer Processing words and Short Message Service shortcuts in sentential contexts: An eye movement study Journal Article In: Applied Psycholinguistics, vol. 34, no. 1, pp. 163–179, 2013. @article{Ganushchak2013, The present study investigated whether Short Message Service shortcuts are more difficult to process in sentence context than the spelled-out word equivalent and, if so, how any additional processing difficulty arises. Twenty-four student participants read 37 Short Message Service shortcuts and word equivalents embedded in semantically plausible and implausible contexts (e.g., He left/drank u/you a note) while their eye movements were recorded. There were effects of plausibility and spelling on early measures of processing difficulty (first fixation durations, gaze durations, skipping, and first-pass regression rates for the targets), but there were no interactions of plausibility and spelling. Late measures of processing difficulty (second run gaze duration and total fixation duration) were only affected by plausibility but not by spelling. These results suggest that shortcuts are harder to recognize, but that, once recognized, they are integrated into the sentence context as easily as ordinary words. |
Hanna S. Gauvin; Robert J. Hartsuiker; Falk Huettig Speech monitoring and phonologically-mediated eye gaze in language perception and production: A comparison using printed word eye-tracking Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 818, 2013. @article{Gauvin2013, The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. |
Franziska Geringswald; Anne Herbik; Michael B. Hoffmann; Stefan Pollmann Contextual cueing impairment in patients with age-related macular degeneration Journal Article In: Journal of Vision, vol. 13, no. 3, pp. 1–18, 2013. @article{Geringswald2013, Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues. |
Hiroaki Gomi; Naotoshi Abekawa; Shinsuke Shimojo The hand sees visual periphery better than the eye: Motor-dependent visual motion analyses Journal Article In: Journal of Neuroscience, vol. 33, no. 42, pp. 16502–16509, 2013. @article{Gomi2013, Information pertaining to visual motion is used in the brain not only for conscious perception but also for various kinds of motor controls. In contrast to the increasing amount of evidence supporting the dissociation of visual processing for action versus perception, it is less clear whether the analysis of visual input is shared for characterizing various motor outputs, which require different kinds of interactions with environments. Here we show that, in human visuomotor control, motion analysis for quick hand control is distinct from that for quick eye control in terms of spatiotemporal analysis and spatial integration. The amplitudes of implicit and quick hand and eye responses induced by visual motion stimuli differently varied with stimulus size and pattern smoothness (e.g., spatial frequency). Surprisingly, the hand response did not decrease even when the visual motion with a coarse pattern was mostly occluded over the visual center, whereas the eye response markedly decreased. Since these contrasts cannot be ascribed to any difference in motor dynamics, they clearly indicate different spatial integration of visual motion for the individual motor systems. Going against the overly unified hierarchical view of visual analysis, our data suggest that visual motion analyses are separately tailored from early levels to individual motor modalities. Namely, the hand and eyes see the external world differently. |
Claudia C. Gonzalez; Melanie R. Burke The brain uses efference copy information to optimise spatial memory Journal Article In: Experimental Brain Research, vol. 224, no. 2, pp. 189–197, 2013. @article{Gonzalez2013a, Does a motor response to a target improve the subsequent recall of the target position or can we simply use peripheral position information to guide an accurate response? We suggest that a motor plan of the hand can be enhanced with actual motor and efference copy feedback (GoGo trials), which is absent in the passive observation of a stimulus (NoGo trials). To investigate this effect during eye and hand coordination movements, we presented stimuli in two formats (memory guided or visually guided) under three modality conditions (eyes only, hands only (with eyes fixated), or eyes and hand together). We found that during coordinated movements, both the eye and hand response times were facilitated when efference feedback of the movement was provided. Furthermore, both eye and hand movements to remembered locations were significantly more accurate in the GoGo than in the NoGo trial types. These results reveal that an efference copy of a motor plan enhances memory for a location, which is observed not only in eye movements but also translated downstream into a hand movement. These results have significant implications for how we plan, code and guide behavioural responses, and how we can optimise accuracy and timing to a given target. |
Esther G. González; Linda Lillakas; Alexander Lam; Brenda L. Gallie; Martin J. Steinbach Horizontal saccade dynamics after childhood monocular enucleation Journal Article In: Investigative Ophthalmology & Visual Science, vol. 54, no. 10, pp. 6463–6471, 2013. @article{Gonzalez2013, PURPOSE: We investigated the effects of monocularity on oculomotor control by examining the characteristics of the horizontal saccades of people with one eye, and comparing them to those of a group of age-matched controls who viewed the stimuli monocularly and binocularly. METHODS: Participants were tested in a no-gap, no-overlap saccadic task using a video-based remote eye tracker. One group consisted of unilaterally eye enucleated participants (N = 15; mean age, 31.27 years), the other of age-matched people with normal binocular vision (N = 18; mean age, 30.17 years). RESULTS: The horizontal saccade dynamics of enucleated people are similar to those of people with normal binocularity when they view monocularly and, with the exception of latency, when they view binocularly. The data show that the monocular saccades of control and enucleated observers have longer latencies than the binocular saccades of the control group, the saccades of the enucleated observers are as accurate as those of the controls viewing monocularly or binocularly, smaller saccades are more accurate than the larger ones, and abducting saccades are faster than adducting saccades. CONCLUSIONS: Our data suggest that the true monocularity produced by early enucleation does not result in slower visual processing in the afferent (sensory) pathway, or in deficits in the efferent (motor) pathways of the saccadic system. Possible mechanisms to account for the effects of monocular vision on saccades are discussed. |
Peter C. Gordon; Patrick Plummer; Wonil Choi See before you jump: Full recognition of parafoveal words precedes skips during reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 633–641, 2013. @article{Gordon2013, Serial attention models of eye-movement control during reading were evaluated in an eye-tracking experiment that examined how lexical activation combines with visual information in the parafovea to affect word skipping (where a word is not fixated during first-pass reading). Lexical activation was manipulated by repetition priming created through prime-target pairs embedded within a sentence. The boundary technique (Rayner, 1975) was used to determine whether the target word was fully available during parafoveal preview or whether it was available with transposed letters (e.g., Herman changed to Hreman). With full parafoveal preview, the target word was skipped more frequently when it matched the earlier prime word (i.e., was repeated) than when it did not match the earlier prime word (i.e., was new). With transposed-letter (TL) preview, repetition had no effect on skipping rates despite the great similarity of the TL preview string to the target word and substantial evidence that TL strings activate the words from which they are derived (Perea & Lupker, 2003). These results show that lexically based skipping is based on full recognition of the letter string in parafoveal preview and does not involve using the contextual constraint to compensate for the reduced information available from the parafovea. These results are consistent with models of eye-movement control during reading in which successive words in a text are processed 1 at a time (serially) and in which word recognition strongly influences eye movements. |
Frauke Görges; Frank Oppermann; Jörg D. Jescheniak; Herbert Schriefers Activation of phonological competitors in visual search Journal Article In: Acta Psychologica, vol. 143, no. 2, pp. 168–175, 2013. @article{Goerges2013, Recently, Meyer, Belke, Telling and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even in situations in which participants do not have the intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure by Meyer et al. Participants were familiarized with target and competitor objects and their names at the beginning of the experiment and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed. |
Martin Gorges; Hans Peter Müller; Dorothée Lulé; Albert C. Ludolph; Elmar H. Pinkhardt; Jan Kassubek Functional connectivity within the default mode network is associated with saccadic accuracy in Parkinson's disease: A resting-state fMRI and videooculography study Journal Article In: Brain Connectivity, vol. 3, no. 3, pp. 265–272, 2013. @article{Gorges2013a, In addition to the skeleto-motor deficits, patients with Parkinson's disease (PD) frequently present with oculomotor dysfunctions such as impaired smooth pursuit and saccadic abnormalities. There is increasing evidence for an impaired cortical function to be responsible for oculomotor deficits that are associated with lack of inhibitory control; however, these pathomechanisms still remain poorly understood. By means of "task-free" resting-state functional magnetic resonance imaging (rs-fMRI), functional connectivity changes in PD within the default mode network (DMN) have been reported. The aim of this study was to investigate whether altered functional connectivity within the DMN was correlated with oculomotor parameter changes in PD. Twelve PD patients and 13 matched healthy controls underwent rs-fMRI at 1.5 T and videooculography (VOG) using the EyeLink system. Rs-fMRI seed-based region-to-region connectivity analysis was performed, including medial prefrontal cortex (mPFC), medial temporal lobe (MTL), posterior cingulate cortex (PCC), and hippocampal formation (HF); while VOG examination comprised ocular reactive saccades, smooth pursuit, and executive tests. Rs-fMRI analysis demonstrated a decreased region-to-region functional connectivity between mPFC and PCC as well as increased connectivity between bilateral HF in PD compared with controls. In VOG, patients and controls differed in terms of executive tests outcome, smooth pursuit eye movement, and visually guided reactive saccades but not in peak eye velocity. A significant relationship was observed between saccadic accuracy and functional connectivity strengths between MTL and PCC. These results suggest that PD-associated changes of DMN connectivity are correlated with PD-associated saccadic hypometria, in particular in the vertical direction. |
Davood G. Gozli; Amy Chow; Alison L. Chasteen; Jay Pratt Valence and vertical space: Saccade trajectory deviations reveal metaphorical spatial activation Journal Article In: Visual Cognition, vol. 21, no. 5, pp. 628–646, 2013. @article{Gozli2013, Concepts of positive and negative valence are metaphorically structured in space (e.g., happy is up, sad is down). In fact, coupling a conceptual task (e.g., evaluating words as positive or negative) with a visuospatial task (e.g., identifying stimuli above or below fixation) often gives rise to metaphorical congruency effects. For instance, after reading a positive concept, visual target processing is facilitated above fixation. However, it is possible that tasks requiring upwards and downwards attentional orienting artificially strengthen the link between vertical space and semantic valence. For this reason, in the present study the vertical axis was uncoupled from the response axis. Participants made eye movements along the horizontal axis after reading positive or negative affect words, while their saccade movement trajectories were recorded. Based on previous research on saccade trajectory deviation, we predicted that fast saccade trajectories curve towards the salient segment of space, whereas slow saccade trajectories would curve away from the salient segment. Examining saccadic trajectories revealed a pattern of deviations along the vertical axis consistent with the metaphorical congruency account, although this pattern was mainly driven by positive concepts. These results suggest that semantic processing of valence can automatically recruit spatial features along the vertical axis. |
Joshua A. Granek; Laure Pisella; John Stemberger; Alain Vighetto; Yves Rossetti; Lauren E. Sergio Decoupled visually-guided reaching in optic ataxia: Differences in motor control between canonical and non-canonical orientations in space Journal Article In: PLoS ONE, vol. 8, no. 12, pp. e86138, 2013. @article{Granek2013, Guiding a limb often involves situations in which the spatial location of the target for gaze and limb movement are not congruent (i.e. have been decoupled). Such decoupled situations involve both the implementation of a cognitive rule (i.e. strategic control) and the online monitoring of the limb position relative to gaze and target (i.e. sensorimotor recalibration). To further understand the neural mechanisms underlying these different types of visuomotor control, we tested patient IG who has bilateral caudal superior parietal lobule (SPL) damage resulting in optic ataxia (OA), and compared her performance with six age-matched controls on a series of center-out reaching tasks. The tasks comprised 1) directing a cursor that had been rotated (180° or 90°) within the same spatial plane as the visual display, or 2) moving the hand along a different spatial plane than the visual display (horizontal or para-sagittal). Importantly, all conditions were performed towards visual targets located along either the horizontal axis (left and right; which can be guided from strategic control) or the diagonal axes (top-left and top-right; which require on-line trajectory elaboration and updating by sensorimotor recalibration). The bilateral OA patient performed much better in decoupled visuomotor control towards the horizontal targets, a canonical situation in which well-categorized allocentric cues could be utilized (i.e. guiding cursor direction perpendicular to computer monitor border). Relative to neurologically intact adults, IG's performance suffered towards diagonal targets, a non-canonical situation in which only less-categorized allocentric cues were available (i.e. guiding cursor direction at an off-axis angle to computer monitor border), and she was therefore required to rely on sensorimotor recalibration of her decoupled limb. We propose that an intact caudal SPL is crucial for any decoupled visuomotor control, particularly when relying on the realignment between vision and proprioception without reliable allocentric cues towards non-canonical orientations in space. |
Tsafrir Greenberg; Joshua M. Carlson; Jiook Cha; Greg Hajcak; Lilianne R. Mujica-Parodi Neural reactivity tracks fear generalization gradients Journal Article In: Biological Psychology, vol. 92, no. 1, pp. 2–8, 2013. @article{Greenberg2013, Recent studies on fear generalization have demonstrated that fear-potentiated startle and skin conductance responses to a conditioned stimulus (CS) generalize to similar stimuli, with the strength of the fear response linked to perceptual similarity to the CS. The aim of the present study was to extend this work by examining neural correlates of fear generalization. An initial experiment (N = 8) revealed that insula reactivity tracks the conditioned fear gradient. We then replicated this effect in a larger independent sample (N = 25). The insula, anterior cingulate, right supplementary motor cortex and caudate showed increasing reactivity as generalization stimuli (GS) became more similar to the CS, consistent with participants' overall ratings of perceived shock likelihood and pupillary response to each stimulus. |
Tsafrir Greenberg; Joshua M. Carlson; Jiook Cha; Greg Hajcak; Lilianne R. Mujica-Parodi Ventromedial prefrontal cortex reactivity is altered in generalized anxiety disorder during fear generalization Journal Article In: Depression and Anxiety, vol. 30, no. 3, pp. 242–250, 2013. @article{Greenberg2013a, BACKGROUND: Fear generalization is thought to contribute to the development and maintenance of anxiety symptoms and accordingly has been the focus of recent research. Previously, we reported that in healthy individuals (N = 25) neural reactivity in the insula, anterior cingulate cortex (ACC), supplementary motor area (SMA), and caudate follow a generalization gradient with a peak response to a conditioned stimulus (CS) that declines with greater perceptual dissimilarity of generalization stimuli (GS) to the CS. In contrast, reactivity in the ventromedial prefrontal cortex (vmPFC), a region linked to fear inhibition, showed an opposite response pattern. The aim of the current study was to examine whether neural responses to fear generalization differ in generalized anxiety disorder (GAD). A second aim was to examine connectivity of primary regions engaged by the generalization task in the GAD group versus healthy group, using psychophysiological interaction analysis. METHODS: Thirty-two women diagnosed with GAD were scanned using the same generalization task as our healthy group. RESULTS: Individuals with GAD exhibited a less discriminant vmPFC response pattern suggestive of deficient recruitment of vmPFC during fear inhibition. Across participants, there was enhanced anterior insula (aINS) coupling with the posterior insula, ACC, SMA, and amygdala during presentation of the CS, consistent with a modulatory role for the aINS in the execution of fear responses. CONCLUSIONS: These findings suggest that deficits in fear regulation, rather than in the excitatory response itself, are more critical to the pathophysiology of GAD in the context of fear generalization. |
Harold H. Greene; James M. Brown; Bryce A. Paradis Luminance contrast and the visual span during visual target localization Journal Article In: Displays, vol. 34, no. 1, pp. 27–32, 2013. @article{Greene2013, A concern for designers of monocular and binocular devices is the ability of users to search for, and localize, target items embedded in noisy displays. Twenty-two participants searched (12 under monocular conditions) to localize a target embedded in random gray dot displays. The target was defined by a variation in pattern that did not differ in average contrast from the rest of each display. Displays were presented at 0.54 and 0.04 Michelson contrast. Across binocular and monocular viewings, fixation counts increased with decreasing contrast, but the gradient was steeper for monocular viewing. With decreasing contrast, fixations were longer, and the amplitudes of saccades used to localize the target decreased. The findings highlight, for monocular vs. binocular target localization, the importance of considering separately how many fixations are needed to localize the target and how close to fixation the target must be for it to be noticed. |