All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications up to 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014 |
Daniel Frings; John Parkin; Anne M. Ridley The effects of cycle lanes, vehicle to kerb distance and vehicle type on cyclists' attention allocation during junction negotiation Journal Article In: Accident Analysis and Prevention, vol. 72, pp. 411–421, 2014. @article{Frings2014, Increased frequency of cycle journeys has led to an escalation in collisions between cyclists and vehicles, particularly at shared junctions. Risks associated with passing decisions have been shown to influence cyclists' behavioural intentions. The current study extended this research by linking not only risk perception but also attention allocation (via tracking the eye movements of twenty cyclists viewing junction approaches presented on video) to behavioural intentions. These constructs were measured in a variety of contexts: junctions featuring cycle lanes, large vs. small vehicles, and differing kerb to vehicle distances. Overall, cyclists devoted the majority of their attention to the nearside (side closest to kerb) of vehicles, and perceived near and offside (side furthest from kerb) passing as most risky. Waiting behind was the most frequent behavioural intention, followed by nearside and then offside passing. While cycle lane presence did not affect behaviour, it did lead to nearside passing being perceived as less risky, and to less attention being devoted to the offside. Large vehicles led to increased risk perceived with passing, and more attention directed towards the rear of vehicles, with reduced offside passing and increased intentions to remain behind the vehicle. Whether the vehicle was large or small, nearside passing was preferred around 30% of the time. Wide kerb distances increased nearside passing intentions and lowered associated perceptions of risk. Additionally, relationships between attention and both risk evaluations and behaviours were observed. These results are discussed in relation to the cyclists' situational awareness and biases that various contextual factors can introduce. From these, recommendations for road safety and training are suggested. |
Daniel Frings; Nicola Rycroft; Mark S. Allen; Richard Fenn Watching for gains and losses: The effects of motivational challenge and threat on attention allocation during a visual search task Journal Article In: Motivation and Emotion, vol. 38, no. 4, pp. 513–522, 2014. @article{Frings2014a, This experiment tests predictions based on research and evidence around the biopsychosocial model (BPSM) that people in a challenge state have faster, more gain orientated search patterns than those in a threat state. Participants (n = 44) completed a motivated performance task involving the location of a target appearing in one of two search arrays: one associated with gaining points and the other associated with avoiding the loss of points. Midway through the task, participants received a false feedback prime about their performance invoking either challenge or threat. We found that participants receiving a challenge prime (high performance feedback) spent longer searching the gain array and made fewer fixations on the loss array. Those receiving a threat prime (low performance feedback) made fewer fixations on the gain array. These findings are in line with the BPSM and provide evidence that allocation of attention (measured using eye movement data) is related to challenge and threat. |
Steven Frisson; Nathalie N. Bélanger; Keith Rayner Phonological and orthographic overlap effects in fast and masked priming Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 9, pp. 1742–1767, 2014. @article{Frisson2014, We investigated how orthographic and phonological information is activated during reading, using a fast priming task, and during single word recognition, using masked priming. Specifically, different types of overlap between prime and target were contrasted: high orthographic and high phonological overlap (track-crack), high orthographic and low phonological overlap (bear-gear), or low orthographic and high phonological overlap (fruit-chute). In addition, we examined whether (orthographic) beginning overlap (swoop-swoon) yielded the same priming pattern as end (rhyme) overlap (track-crack). Prime durations were 32 and 50ms in the fast priming version, and 50ms in the masked priming version, and mode of presentation (prime and target in lower case) was identical. The fast priming experiment showed facilitatory priming effects when both orthography and phonology overlapped, with no apparent differences between beginning and end overlap pairs. Facilitation was also found when prime and target only overlapped orthographically. In contrast, the masked priming experiment showed inhibition for both types of end overlap pairs (with and without phonological overlap), and no difference for begin overlap items. When prime and target only shared principally phonological information, facilitation was only found with a long prime duration in the fast priming experiment, while no differences were found in the masked priming version. These contrasting results suggest that fast priming and masked priming do not necessarily tap into the same type of processing. |
Steven Frisson; Hannah Koole; Louisa Hughes; Andrew Olson; Linda Wheeldon Competition between orthographically and phonologically similar words during sentence reading: Evidence from eye movements Journal Article In: Journal of Memory and Language, vol. 73, no. 1, pp. 148–173, 2014. @article{Frisson2014a, Two eye movement experiments tested the effect of orthographic and/or phonological overlap between prime and target words embedded in a sentence. In Experiment 1, four types of overlap were tested: phonological and orthographic overlap (O+P+) occurring word initially (strain-strait) or word finally (wings-kings), orthographic overlap alone (O+P-, bear-gear) and phonological overlap alone (O-P+, smile-aisle). Only O+P+ overlap resulted in inhibition, with the rhyming condition showing an immediate inhibition effect on the target word and the non-rhyming condition on the spillover region. No priming effects were found on any eye movement measure for the O+P- or the O-P+ conditions. Experiment 2 demonstrated that the size of this inhibition effect is affected by both the distance between the prime and target words and by syntactic structure. Inhibition was again observed when primes and targets appeared close together (approximately 3 words). In contrast, no inhibition was observed when the separation was nine words on average, with the prime and target either appearing in the same sentence or separated by a sentence break. However, when the target was delayed but still in the same sentence, the size of the inhibitory effect was affected by the participants' level of reading comprehension. Skilled comprehenders were more negatively impacted by related primes than less skilled comprehenders. This suggests that good readers keep lexical representations active across larger chunks of text, and that they discard this activation at the end of the sentence. This pattern of results is difficult to accommodate in existing competition or episodic memory models of priming. |
Stephan Geuter; Matthias Gamer; Selim Onat; Christian Büchel Parametric trial-by-trial prediction of pain by easily available physiological measures Journal Article In: Pain, vol. 155, no. 5, pp. 994–1001, 2014. @article{Geuter2014, Pain is commonly assessed by subjective reports on rating scales. However, in many experimental and clinical settings, an additional, objective indicator of pain is desirable. In order to identify an objective, parametric signature of pain intensity that is predictive at the individual stimulus level across subjects, we recorded skin conductance and pupil diameter responses to heat pain stimuli of different durations and temperatures in 34 healthy subjects. The temporal profiles of trial-wise physiological responses were characterized by component scores obtained from principal component analysis. These component scores were then used as predictors in a linear regression analysis, resulting in accurate pain predictions for individual trials. Using the temporal information encoded in the principal component scores explained the data better than prediction by a single summary statistic (ie, maximum amplitude). These results indicate that perceived pain is best reflected by the temporal dynamics of autonomic responses. Application of the regression model to an independent data set of 20 subjects resulted in a very good prediction of the pain ratings demonstrating the generalizability of the identified temporal pattern. Utilizing the readily available temporal information from skin conductance and pupil diameter responses thus allows parametric prediction of pain in human subjects. |
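As an illustration of the kind of analysis this abstract describes (principal component analysis of trial-wise autonomic response profiles, with the component scores entered into a regression that predicts pain ratings), here is a minimal Python sketch. It is not the authors' code; the data, variable names, and parameter choices are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 150                     # e.g., 150 time points per trial (hypothetical)
scr = rng.normal(size=(n_trials, n_samples))       # trial-wise skin conductance traces (placeholder)
ratings = rng.uniform(0, 100, size=n_trials)       # trial-wise pain ratings (placeholder)

# Summarize each trial's temporal profile with a few principal component scores,
# then use those scores as predictors of the pain rating.
pca = PCA(n_components=5)
scores = pca.fit_transform(scr)
r2 = cross_val_score(LinearRegression(), scores, ratings, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean())
```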
Fatema F. Ghasia; Deepak Gulati; Edward L. Westbrook; Aasef G. Shaikh Viewing condition dependence of the gaze-evoked nystagmus in Arnold Chiari type 1 malformation Journal Article In: Journal of the Neurological Sciences, vol. 339, no. 1-2, pp. 134–139, 2014. @article{Ghasia2014, Saccadic eye movements rapidly shift gaze to the target of interest. Once the eyes reach a given target, the brainstem ocular motor integrator utilizes feedback from various sources to assure steady gaze. One such source is the cerebellum, whose lesions can impair neural integration, leading to gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by drifts moving the eyes away from the target and a null position where the drifts are absent. The extent of impairment in the neural integration for two opposite eccentricities might determine the location of the null position. The eye-in-orbit position might also determine the location of the null. We report this phenomenon in a patient with Arnold Chiari type 1 malformation who had intermittent esotropia and horizontal gaze-evoked nystagmus with a shift in the null position. During binocular viewing, the null was shifted to the right. During monocular viewing, when the eye under cover drifted nasally (secondary to the esotropia), the null of the gaze-evoked nystagmus reorganized toward the center. We speculate that the output of the neural integrator is altered by the bilaterally conflicting eye-in-orbit positions secondary to the strabismus. This could possibly explain the reorganization of the location of the null position. |
Fatema F. Ghasia; Aasef G. Shaikh Source of high-frequency oscillations in oblique saccade trajectory Journal Article In: Experimental Eye Research, vol. 121, pp. 5–10, 2014. @article{Ghasia2014a, Most common eye movements, oblique saccades, feature rapid velocity, precise amplitude, but curved trajectory that is variable from trial-to-trial. In addition to curvature and inter-trial variability, the oblique saccade trajectory also features high-frequency oscillations. A number of studies proposed the physiological basis of the curvature and inter-trial variability of the oblique saccade trajectory, but kinematic characteristics of high-frequency oscillations are yet to be examined. We measured such oscillations and compared their properties with orthogonal pure horizontal and pure vertical oscillations generated during pure vertical and pure horizontal saccades, respectively. We found that the frequency of oscillations during oblique saccades ranged between 15 and 40Hz, consistent with the frequency of orthogonal saccadic oscillations during pure horizontal or pure vertical saccades. We also found that the amplitude of oblique saccade oscillations was larger than pure horizontal and pure vertical saccadic oscillations. These results suggest that the superimposed high-frequency sinusoidal oscillations upon the oblique saccade trajectory represent reverberations of disinhibited circuit of reciprocally innervated horizontal and vertical burst generators. |
George T. Gitchel; Paul A. Wetzel; Abu Qutubuddin; Mark S. Baron Experimental support that ocular tremor in Parkinson's disease does not originate from head movement Journal Article In: Parkinsonism and Related Disorders, vol. 20, no. 7, pp. 743–747, 2014. @article{Gitchel2014, Introduction: Our recent report of ocular tremor in Parkinson's disease (PD) has raised considerable controversy as to the origin of the tremor. Using an infrared based eye tracker and a magnetic head tracker, we reported that ocular tremor was recordable in PD subjects with no apparent head tremor. However, other investigators suggest that the ocular tremor may represent either transmitted appendicular tremor or subclinical head tremor inducing the vestibulo-ocular reflex (VOR). The present study aimed to further investigate the origin of ocular tremor in PD. Methods: Eye movements were recorded in 8 PD subjects both head free, and with full head restraint by means of a head holding device and a dental impression bite plate. Head movements were recorded independently using both a high sensitivity tri-axial accelerometer and a magnetic tracking system, each synchronized to the eye tracker. Results: Ocular tremor was observed in all 8 PD subjects and was not influenced by head free and head fixed conditions. Both magnetic tracking and accelerometer recordings supported that the ocular tremor was fully independent of head position. Conclusion: The present study findings support our initial findings that ocular tremor is a fundamental feature of PD unrelated to head movements. Although the utility of ocular tremor for diagnostic purposes requires validation, current findings in large cohorts of PD subjects suggest its potential as a reliable clinical biomarker. |
Mackenzie G. Glaholt; Keith Rayner; Eyal M. Reingold A rapid effect of stimulus quality on the durations of individual fixations during reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 377–389, 2014. @article{Glaholt2014, We developed a variant of the single fixation replacement paradigm (Yang & McConkie, 2001, 2004) in order to examine the effect of stimulus quality on fixation duration during reading. Subjects' eye movements were monitored while they read passages of text for comprehension. During critical fixations, equal changes to the luminance of the background produced either an increase (Up-Contrast) or a decrease (Down-Contrast) of the contrast of the text. The durations of critical fixations were found to be lengthened in the Down-Contrast but not the Up-Contrast condition. Ex-Gaussian modelling of the distributions of fixation durations showed that the reduction in stimulus quality lengthened the majority of fixations, and a survival analysis estimated the onset of this effect to be approximately 141 ms following fixation onset. Because the stimulus quality of the text during critical fixations could not be predicted or parafoveally previewed prior to foveation, the present effect can be attributed to an immediate effect of stimulus quality on fixation duration. |
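A minimal sketch of the ex-Gaussian fitting step mentioned in the abstract above, applied to a fixation-duration distribution; the data are simulated and the parameter values are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, tau = 180.0, 40.0, 80.0                         # ms, hypothetical generating values
fixations = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy's exponnorm is the ex-Gaussian, parameterized with K = tau / sigma.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(fixations)
mu_hat, sigma_hat, tau_hat = loc_hat, scale_hat, K_hat * scale_hat
print(f"mu = {mu_hat:.1f} ms, sigma = {sigma_hat:.1f} ms, tau = {tau_hat:.1f} ms")
```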
Davis M. Glasser; Duje Tadin Modularity in the motion system: Independent oculomotor and perceptual processing of brief moving stimuli Journal Article In: Journal of Vision, vol. 14, pp. 1–13, 2014. @article{Glasser2014, In addition to motion perception per se, we utilize motion information for a wide range of brain functions. These varied functions place different demands on the visual system, and therefore a stimulus that provides useful information for one function may be inadequate for another. For example, the direction of motion of large high-contrast stimuli is difficult to discriminate perceptually, but other studies have shown that such stimuli are highly effective at eliciting directional oculomotor responses such as the ocular following response (OFR). Here, we investigated the degree of independence between perceptual and oculomotor processing by determining whether perceptually suppressed moving stimuli can nonetheless evoke reliable eye movements. We measured reflexively evoked tracking eye movements while observers discriminated the motion direction of large high-contrast stimuli. To quantify the discrimination ability of the oculomotor system, we used signal detection theory to generate associated oculometric functions. The results showed that oculomotor sensitivity to motion direction is not predicted by perceptual sensitivity to the same stimuli. In fact, in several cases oculomotor responses were more reliable than perceptual responses. Moreover, a trial-by-trial analysis indicated that, for stimuli tested in this study, oculomotor processing was statistically independent from perceptual processing. Evidently, perceptual and oculomotor responses reflect the activity of independent dissociable mechanisms despite operating on the same input. While results of this kind have traditionally been interpreted in the framework of perception versus action, we propose that these differences reflect a more general principle of modularity. |
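The sketch below shows one simple way to compute a signal-detection-style "oculometric" sensitivity from classified eye-movement directions, in the spirit of the oculometric functions mentioned in the abstract above; it is illustrative only, with hypothetical data and no claim to match the study's actual procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
stim_dir = rng.choice([-1, 1], size=400)                    # -1 = leftward, +1 = rightward motion
# Pretend each trial's tracked eye-movement direction was classified and matches
# the stimulus on 80% of trials (placeholder for real tracking data).
eye_dir = np.where(rng.uniform(size=400) < 0.8, stim_dir, -stim_dir)

hit_rate = np.mean(eye_dir[stim_dir == 1] == 1)             # "rightward" responses to rightward motion
false_alarm_rate = np.mean(eye_dir[stim_dir == -1] == 1)    # "rightward" responses to leftward motion
d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
print(f"oculometric d': {d_prime:.2f}")
```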
Roshani Gnanaseelan; David A. Gonzalez; Ewa Niechwiej-Szwedo Binocular advantage for prehension movements performed in visually enriched environments requiring visual search Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 959, 2014. @article{Gnanaseelan2014, The purpose of this study was to examine the role of binocular vision during a prehension task performed in a visually enriched environment where the target object was surrounded by distractors/obstacles. Fifteen adults reached and grasped for a cylindrical peg while eye movements and upper limb kinematics were recorded. The complexity of the visual environment was manipulated by varying the number of distractors and by varying the saliency of the target. Gaze behavior (i.e., the latency of the primary gaze shift and frequency of gaze shifts prior to reach initiation) was comparable between viewing conditions. In contrast, a binocular advantage was evident in performance accuracy. Specifically, participants picked up the wrong object twice as often during monocular viewing when the complexity of the environment increased. Reach performance was more efficient during binocular viewing, which was demonstrated by shorter reach reaction time and overall movement time. Reaching movements during the approach phase had higher peak velocity during binocular viewing. During monocular viewing reach trajectories exhibited a direction bias during the acceleration phase, which was leftward during left eye viewing and rightward during right eye viewing. This bias can be explained by the presence of esophoria in the covered eye. The grasping interval was also extended by ~20% during monocular viewing; however, the duration of the return phase after the target was picked up was comparable across viewing conditions. In conclusion, binocular vision provides important input for planning and execution of prehension movements in visually enriched environments. Binocular advantage was evident, regardless of set size or target saliency, indicating that adults plan their movements more cautiously during monocular viewing, even in relatively simple environments with a highly salient target. Nevertheless, in visually-normal adults monocular input provides sufficient information to engage in online control to correct the initial errors in movement planning. |
David C. Godlove; Alexander Maier; Geoffrey F. Woodman; Jeffrey D. Schall Microcircuitry of agranular frontal cortex: Testing the generality of the canonical cortical microcircuit Journal Article In: Journal of Neuroscience, vol. 34, no. 15, pp. 5355–5369, 2014. @article{Godlove2014, We investigated whether a frontal area that lacks granular layer IV, supplementary eye field, exhibits features of laminar circuitry similar to those observed in primary sensory areas. We report, for the first time, visually evoked local field potentials (LFPs) and spiking activity recorded simultaneously across all layers of agranular frontal cortex using linear electrode arrays. We calculated current source density from the LFPs and compared the laminar organization of evolving sinks to those reported in sensory areas. Simultaneous, transient synaptic current sinks appeared first in layers III and V followed by more prolonged current sinks in layers I/II and VI. We also found no variation of single- or multi-unit visual response latency across layers, and putative pyramidal neurons and interneurons displayed similar response latencies. Many units exhibited pronounced discharge suppression that was strongest in superficial relative to deep layers. Maximum discharge suppression also occurred later in superficial than in deep layers. These results are discussed in the context of the canonical cortical microcircuit model originally formulated to describe early sensory cortex. The data indicate that agranular cortex resembles sensory areas in certain respects, but the cortical microcircuit is modified in nontrivial ways. |
Hayward J. Godwin; Michael C. Hout; Tamaryn Menneer Visual similarity is stronger than semantic similarity in guiding visual search for numbers Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 3, pp. 689–695, 2014. @article{Godwin2014, Using a visual search task, we explored how behavior is influenced by both visual and semantic information. We recorded participants' eye movements as they searched for a single target number in a search array of single-digit numbers (0-9). We examined the probability of fixating the various distractors as a function of two key dimensions: the visual similarity between the target and each distractor, and the semantic similarity (i.e., the numerical distance) between the target and each distractor. Visual similarity estimates were obtained using multidimensional scaling based on the independent observer similarity ratings. A linear mixed-effects model demonstrated that both visual and semantic similarity influenced the probability that distractors would be fixated. However, the visual similarity effect was substantially larger than the semantic similarity effect. We close by discussing the potential value of using this novel methodological approach and the implications for both simple and complex visual search displays. |
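The following sketch illustrates the two analysis ingredients named in the abstract above, multidimensional scaling of similarity ratings and a linear mixed-effects model of fixation data, using placeholder data and hypothetical variable names; it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
from sklearn.manifold import MDS
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Rated dissimilarities among the digits 0-9 (placeholder values), made symmetric.
dissim = rng.uniform(0.2, 1.0, size=(10, 10))
dissim = (dissim + dissim.T) / 2
np.fill_diagonal(dissim, 0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

target = 8
digits = np.array([d for d in range(10) if d != target])
visual_sim = -np.linalg.norm(coords[digits] - coords[target], axis=1)  # closer in MDS space = more similar
semantic_sim = -np.abs(digits - target)                                # numerical distance to target

# Long-format data: probability of fixating each distractor, per subject (placeholder).
subjects = np.repeat(np.arange(30), len(digits))
df = pd.DataFrame({
    "subject": subjects,
    "fixated": rng.uniform(0, 1, len(subjects)),
    "visual_sim": np.tile(visual_sim, 30),
    "semantic_sim": np.tile(semantic_sim, 30),
})
lmm = smf.mixedlm("fixated ~ visual_sim + semantic_sim", df,
                  groups=df["subject"]).fit()
print(lmm.summary())
```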
Hayward J. Godwin; Erik D. Reichle; Tamaryn Menneer Coarse-to-fine eye movement behavior during visual search Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 5, pp. 1244–1249, 2014. @article{Godwin2014a, It has previously been argued that, during visual search, eye movement behavior is indicative of an underlying scanning "strategy" that starts on a global, or "coarse," scale but then progressively focuses to a more local, or "fine," scale. This conclusion is motivated by the finding that, as a trial progresses, fixation durations tend to increase and saccade amplitudes tend to decrease. In the present study, we replicate these effects but offer an alternative explanation for them-that they emerge from a few stochastic factors that control eye movement behavior. We report the results of a simulation supporting this hypothesis and discuss implications for future models of visual search. |
Gerardo Fernández; Jochen Laubrock; Pablo Mandolesi; Oscar Colombo; Osvaldo Agamennoni Registering eye movements during reading in Alzheimer's disease: Difficulties in predicting upcoming words Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 36, no. 3, pp. 302–316, 2014. @article{Fernandez2014, Reading requires the fine integration of attention, ocular movements, word identification, and language comprehension, among other cognitive parameters. Several of the associated cognitive processes such as working memory and semantic memory are known to be impaired by Alzheimer's disease (AD). This study analyzes eye movement behavior of 18 patients with probable AD and 40 age-matched controls during Spanish sentence reading. Controls focused mainly on word properties and considered syntactic and semantic structures. At the same time, controls' knowledge and prediction about sentence meaning and grammatical structure are quite evident when we consider some aspects of visual exploration, such as word skipping and forward saccades. By contrast, in the AD group, the predictability effect of the upcoming word was absent, visual exploration was less focused, fixations were much longer, and outgoing saccade amplitudes were smaller than those in controls. The altered visual exploration and the absence of a contextual predictability effect might be related to impairments in working memory and long-term memory retrieval functions. These eye movement measures demonstrate considerable sensitivity with respect to evaluating cognitive processes in Alzheimer's disease. They could provide a user-friendly marker of early disease symptoms and of its subsequent progression. |
Gerardo Fernández; Facundo Manes; Nora P. Rotstein; Oscar Colombo; Pablo Mandolesi; Luis E. Politi; Osvaldo Agamennoni Lack of contextual-word predictability during reading in patients with mild Alzheimer disease Journal Article In: Neuropsychologia, vol. 62, no. 1, pp. 143–151, 2014. @article{Fernandez2014a, In the present work we analyzed the effect of contextual word predictability on the eye movement behavior of patients with mild Alzheimer disease (AD) compared to age-matched controls, by using the eye-tracking technique and linear mixed models. Twenty AD patients and 40 age-matched controls participated in the study. We first evaluated gaze duration during reading low and highly predictable sentences. AD patients showed an increase in gaze duration, compared to controls, both in sentences of low or high predictability. In controls, highly predictable sentences led to shorter gaze durations; on the contrary, AD patients showed similar gaze durations in both types of sentences. Similarly, gaze duration in controls was affected by the cloze predictability of word N and N+1, whereas it was the same in AD patients. In contrast, the effects of word frequency and word length were similar in controls and AD patients. Our results imply that contextual-word predictability, whose processing is proposed to require memory retrieval, facilitated reading behavior in healthy subjects, but this facilitation was lost in early AD patients. This loss might reveal impairments in brain areas such as those corresponding to working memory, memory retrieval, and semantic memory functions that are already present at early stages of AD. In contrast, word frequency and length processing might require less complex mechanisms, which were still retained by AD patients. To the best of our knowledge, this is the first study measuring how patients with early AD process well-defined words embedded in sentences of high and low predictability. Evaluation of the resulting changes in eye movement behavior might provide a useful tool for a more precise early diagnosis of AD. |
Gerardo Fernández; Diego E. Shalom; Reinhold Kliegl; Mariano Sigman Eye movements during reading proverbs and regular sentences: The incoming word predictability effect Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 3, pp. 260–273, 2014. @article{Fernandez2014b, Reading is an everyday activity requiring the efficient integration of several central cognitive subsystems ranging from attention and oculomotor control to word identification and language comprehension. Effects of frequency, length and cloze predictability of words on reading times reliably indicate local processing difficulty of fixated words; also, a reader's expectation about an upcoming word apparently influences fixation duration even before the eyes reach this word. Moreover, this effect has been reported as noncanonical (i.e., longer fixation durations on word N when word N+1 is of high cloze predictability). However, this effect is difficult to observe because in natural sentences the fluctuations in predictability in content words are very small. To overcome this difficulty we investigated eye movements while reading proverbs as well as sentences constructed for high- and low-average cloze predictability. We also determined for each sentence a word at which predictability of words jumps from a low to high value. Fixation durations while reading proverbs and high-predictable sentences exhibited significant effects of the change in predictability along the sentence (when the successive word is more predictable than the fixated word). Results are in agreement with the proposal that cloze predictability of upcoming words exerts an influence on fixation durations via memory retrieval. |
Katja Fiehler; Christian Wolf; Mathias Klinghammer; Gunnar Blohm Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 636, 2014. @article{Fiehler2014, When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which 1 local object was missing (= target) and of the remaining, 1, 3 or 5 local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects we predicted accurate reaching if participants only used egocentric coding of object position and systematic shifts of reach endpoints if allocentric information were used for movement planning. We found that reaching movements were largely affected by allocentric shifts showing an increase in endpoint errors in the direction of object shifts with the number of local objects shifted. No effect occurred when one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene. |
Ruth Filik; Hartmut Leuthold; Katie Wallington; Jemma Page Testing theories of irony processing using eye-tracking and ERPs Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 3, pp. 811–828, 2014. @article{Filik2014, Not much is known about how people comprehend ironic utterances, and to date, most studies have simply compared processing of ironic versus non-ironic statements. A key aspect of the graded salience hypothesis, distinguishing it from other accounts (such as the standard pragmatic view and direct access view), is that it predicts differences between processing of familiar and unfamiliar ironies. Specifically, if an ironic utterance is familiar, then the ironic interpretation should be available without the need for extra inferential processes, whereas for unfamiliar ironies, the literal interpretation would be computed first, and a mismatch with context would lead to a re-interpretation of the statement as being ironic. We recorded participants' eye movements while they were reading (Experiment 1), and electrical brain activity while they were listening to (Experiment 2), familiar and unfamiliar ironies compared to non-ironic controls. Results show disruption to eye movements and an N400-like effect for unfamiliar ironies only, supporting the predictions of the graded salience hypothesis. In addition, in Experiment 2, a late positivity was found for both familiar and unfamiliar ironic materials, compared to non-ironic controls. We interpret this positivity as reflecting ongoing conflict between the literal and ironic interpretations of the utterance. |
Joel Fishbein; Jesse A. Harris Making sense of Kafka: Structural biases induce early sense commitment for metonyms Journal Article In: Journal of Memory and Language, vol. 76, pp. 94–112, 2014. @article{Fishbein2014, Prior research suggests that the language processor initially activates an underspecified representation of a metonym consistent with all its senses, potentially selecting a specific sense if supported by contextual and lexical information. We explored whether a structural heuristic, the Subject as Agent Principle, which provisionally assigns an agent theta role to canonical subjects, would prompt immediate sense selection. In Experiment 1, we found initial evidence that this principle is active during offline and online processing of metonymic names like Kafka. Reading time results from Experiments 2 and 3 demonstrated that previous context biasing towards the metonymic sense of the name reduced, but did not remove, the agent preference, consistent with Frazier's (1999) proposal that the processor may avoid selecting a specific sense, unless grammatically required. |
Rebecca M. Foerster; Elena Carbone; Werner X. Schneider Long-term memory-based control of attention in multi-step tasks requires working memory: Evidence from domain-specific interference Journal Article In: Frontiers in Psychology, vol. 5, pp. 408, 2014. @article{Foerster2014, Evidence for long-term memory (LTM)-based control of attention has been found during the execution of highly practiced multi-step tasks. However, does LTM directly control for attention or are working memory (WM) processes involved? In the present study, this question was investigated with a dual-task paradigm. Participants executed either a highly practiced visuospatial sensorimotor task (speed stacking) or a verbal task (high-speed poem reciting), while maintaining visuospatial or verbal information in WM. Results revealed unidirectional and domain-specific interference. Neither speed stacking nor high-speed poem reciting was influenced by WM retention. Stacking disrupted the retention of visuospatial locations, but did not modify memory performance of verbal material (letters). Reciting reduced the retention of verbal material substantially whereas it affected the memory performance of visuospatial locations to a smaller degree. We suggest that the selection of task-relevant information from LTM for the execution of overlearned multi-step tasks recruits domain-specific WM. |
Xiao-Jing Gu; Ming Hu; Bing Li; Xin-Tian Hu The role of contrast adaptation in saccadic suppression in humans Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e86542, 2014. @article{Gu2014, The idea of retinal and ex-retinal sources of saccadic suppression has long been established in previous studies. However, how they are implemented in local circuit remains unknown. Researchers have suggested that saccadic suppression was probably achieved by contrast gain control, but this possibility has never been directly tested. In this study, we manipulated contrast gain control by contrast-adapting observers with sinusoidal gratings of different contrasts. Presaccadic and fixational contrast thresholds were measured and compared to give estimates of saccadic suppression at different adaptation states. Our results reconfirmed the selective saccadic suppression in achromatic condition, and further showed that, achromatic saccadic suppression diminished as contrast adaptation was accentuated, whereas no significant chromatic saccadic suppression was induced by greater contrast adaptation. Our data provided evidence for the involvement of contrast gain control in saccadic suppression in achromatic channel. We also discussed how the negative correlation between contrast adaptation and saccadic suppression could be interpreted with contrast gain control. |
Katherine Guérard; Jean Saint-Aubin; Marilyne Maltais; Hugo Lavoie The role of verbal memory in regressions during reading is modulated by the target word's recency in memory Journal Article In: Memory & Cognition, vol. 42, no. 7, pp. 1155–1170, 2014. @article{Guerard2014, During reading, a number of eye movements are made backward, on words that have already been read. Recent evidence suggests that such eye movements, called regressions, are guided by memory. Several studies point to the role of spatial memory, but evidence for the role of verbal memory is more limited. In the present study, we examined the factors that modulate the role of verbal memory in regressions. Participants were required to make regressions on target words located in sentences displayed on one or two lines. Verbal interference was shown to affect regressions, but only when participants executed a regression on a word located in the first part of the sentence, irrespective of the number of lines on which the sentence was displayed. Experiments 2 and 3 showed that the effect of verbal interference on words located in the first part of the sentence disappeared when participants initiated the regression from the middle of the sentence. Our results suggest that verbal memory is recruited to guide regressions, but only for words read a longer time ago. |
Ernesto Guerra; Pia Knoeferle Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking Journal Article In: Cognition, vol. 133, no. 3, pp. 535–552, 2014. @article{Guerra2014, A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences. |
Maria J. S. Guerreiro; Jos J. Adam; Pascal W. M. Van Gerven Aging and response interference across sensory modalities Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 3, pp. 836–842, 2014. @article{Guerreiro2014, Advancing age is associated with decrements in selective attention. It was recently hypothesized that age-related differences in selective attention depend on sensory modality. The goal of the present study was to investigate the role of sensory modality in age-related vulnerability to distraction, using a response interference task. To this end, 16 younger (mean age = 23.1 years) and 24 older (mean age = 65.3 years) adults performed four response interference tasks, involving all combinations of visual and auditory targets and distractors. The results showed that response interference effects differ across sensory modalities, but not across age groups. These results indicate that sensory modality plays an important role in vulnerability to distraction, but not in age-related distractibility by irrelevant spatial information. |
Amy Rouinfar; Elise Agra; Adam M. Larson; N. Sanjay Rebello; Lester C. Loschky In: Frontiers in Psychology, vol. 5, pp. 1094, 2014. @article{Rouinfar2014, This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming an impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. |
Paul Roux; Baudoin Forgeot d'Arc; Christine Passerieux; Franck Ramus Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation? Journal Article In: Schizophrenia Research, vol. 157, no. 1-3, pp. 78–83, 2014. @article{Roux2014, Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less at others' faces and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). Twenty-nine patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was not significant anymore. Patients' deficit on this visual ToM paradigm is thus entirely explained by decreased visual attention toward gaze. |
Arani Roy; Stephen V. Shepherd; Michael L. Platt Reversible inactivation of pSTS suppresses social gaze following in the macaque (Macaca mulatta) Journal Article In: Social Cognitive and Affective Neuroscience, vol. 9, no. 2, pp. 209–217, 2014. @article{Roy2014, Humans and other primates shift their attention to follow the gaze of others [gaze following (GF)]. This behavior is a foundational component of joint attention, which is severely disrupted in neurodevelopmental disorders such as autism and schizophrenia. Both cortical and subcortical pathways have been implicated in GF, but their contributions remain largely untested. While the proposed subcortical pathway hinges crucially on the amygdala, the cortical pathway is thought to require perceptual processing by a region in the posterior superior temporal sulcus (pSTS). To determine whether pSTS is necessary for typical GF behavior, we engaged rhesus macaques in a reward discrimination task confounded by leftward- and rightward-facing social distractors following saline or muscimol injections into left pSTS. We found that reversible inactivation of left pSTS with muscimol strongly suppressed GF, as assessed by reduced influence of observed gaze on target choices and saccadic reaction times. These findings demonstrate that activity in pSTS is required for normal GF by primates. |
Annie Roy-Charland; Melanie Perron; Olivia Beaudry; Kaylee Eady Confusion of fear and surprise: A test of the perceptual-attentional limitation hypothesis with eye movement monitoring Journal Article In: Cognition and Emotion, vol. 28, no. 7, pp. 1214–1222, 2014. @article{RoyCharland2014, Of the basic emotional facial expressions, fear is typically less accurately recognised as a result of being confused with surprise. According to the perceptual-attentional limitation hypothesis, the difficulty in recognising fear could be attributed to the similar visual configuration with surprise. In effect, they share more muscle movements than they possess distinctive ones. The main goal of the current study was to test the perceptual-attentional limitation hypothesis in the recognition of fear and surprise using eye movement recording and by manipulating the distinctiveness between expressions. Results revealed that when the brow lowerer is the only distinctive feature between expressions, accuracy is lower, participants spend more time looking at stimuli and they make more comparisons between expressions than when stimuli include the lip stretcher. These results not only support the perceptual-attentional limitation hypothesis but extend its definition by suggesting that it is not solely the number of distinctive features that is important but also their qualitative value. |
Douglas A. Ruff; Marlene R. Cohen Attention can increase or decrease spike count correlations between pairs of neurons depending on their role in a task Journal Article In: Nature Neuroscience, vol. 17, no. 11, pp. 1591–1597, 2014. @article{Ruff2014a, Visual attention enhances the responses of visual neurons that encode the attended location. Several recent studies showed that attention also decreases correlations between fluctuations in the responses of pairs of neurons (termed spike count correlation or rSC). The previous results are consistent with two hypotheses. Attention-related changes in rate and rSC might be linked (perhaps through a common mechanism), so that attention always decreases rSC. Alternatively, attention might either increase or decrease rSC, possibly depending on the role the neurons play in the behavioral task. We recorded simultaneously from dozens of neurons in area V4 while monkeys performed a discrimination task. We found strong evidence in favor of the second hypothesis, showing that attention can flexibly increase or decrease correlations, depending on whether the neurons provide evidence for the same or opposite perceptual decisions. These results place important constraints on models of the neuronal mechanisms underlying cognitive factors. |
Rachel A. Ryskin; Sarah Brown-Schmidt; Enriqueta Canseco-Gonzalez; Loretta K. Yiu; Elizabeth T. Nguyen Visuospatial perspective-taking in conversation and the role of bilingual experience Journal Article In: Journal of Memory and Language, vol. 74, pp. 46–76, 2014. @article{Ryskin2014, Little is known about how listeners use spatial perspective information to guide comprehension. Perspective-taking abilities have been linked to executive function in both children and adults. Bilingual children excel at perspective-taking tasks compared to their monolingual counterparts (e.g., Greenberg, Bellana, & Bialystok, 2013), possibly due to the executive function benefits conferred by the experience of switching between languages. Here we examine the mechanisms of visuo-spatial perspective-taking in adults, and the effects of bilingualism on this process. We report novel results regarding the ability of listeners to appreciate the spatial perspective of another person in conversation: While spatial perspective-taking does pose challenges, listeners rapidly accommodated the speaker's perspective, in time to guide the on-line processing of the speaker's utterances. Moreover, once adopted, spatial perspectives were enduring, resulting in costs when switching to a different perspective, even when that perspective is one's own. In addition to these findings, direct comparison of monolingual and bilingual participants offers no support for the hypothesis that bilingualism improves the ability to appreciate the perspective of another person during language comprehension. In fact, in some cases adult bilinguals have significantly more difficulty with perspective-laden language. |
Patrick T. Sadtler; Kristin M. Quick; Matthew D. Golub; Steven M. Chase; Stephen I. Ryu; Elizabeth C. Tyler-Kabara; Byron M. Yu; Aaron P. Batista Neural constraints on learning Journal Article In: Nature, vol. 512, pp. 423–426, 2014. @article{Sadtler2014, Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess. |
Ioannis Rigas; Oleg V. Komogortsev Biometric recognition via probabilistic spatial projection of eye movement trajectories in dynamic visual environments Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 9, no. 10, pp. 1743–1754, 2014. @article{Rigas2014, This paper proposes a method for the extraction of biometric features from the spatial patterns formed by eye movements during an inspection of dynamic visual stimulus. In the suggested framework, each eye movement signal is transformed into a time-constrained decomposition by using a probabilistic representation of spatial and temporal features related to eye fixations and called fixation density map (FDM). The results for a large collection of eye movements recorded from 200 individuals indicate the best equal error rate of 10.8% and Rank-1 identification rate as high as 51%, which is a significant improvement over existing eye movement-driven biometric methods. In addition, our experiments reveal that a person recognition approach based on the FDM performs well even in cases when eye movement data are captured at lower than optimum sampling frequencies. This property is very important for the future ocular biometric systems where existing iris recognition devices could be employed to combine eye movement traits with iris information for increased security and accuracy. Considering that commercial iris recognition devices are able to implement eye image sampling usually at a relatively low rate, the ability to perform eye movement-driven biometrics at such rates is of great significance. |
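A minimal sketch of how a fixation density map (FDM) like the one described above can be built from fixation coordinates and compared between two recordings; the grid size, smoothing, and similarity metric here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(x, y, durations, shape=(48, 64)):
    """Duration-weighted 2D histogram of fixation locations, smoothed and normalized."""
    fdm, _, _ = np.histogram2d(y, x, bins=shape,
                               range=[[0, 1], [0, 1]], weights=durations)
    fdm = gaussian_filter(fdm, sigma=1.5)
    return fdm / fdm.sum()

rng = np.random.default_rng(3)
fdm_a = fixation_density_map(rng.uniform(0, 1, 300), rng.uniform(0, 1, 300),
                             rng.uniform(0.1, 0.6, 300))
fdm_b = fixation_density_map(rng.uniform(0, 1, 300), rng.uniform(0, 1, 300),
                             rng.uniform(0.1, 0.6, 300))
similarity = np.minimum(fdm_a, fdm_b).sum()        # histogram intersection, 0 to 1
print(f"FDM similarity: {similarity:.3f}")
```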
Lily Riggs; Takako Fujioka; Jessica Chan; Douglas A. McQuiggan; Adam K. Anderson; Jennifer D. Ryan Association with emotional information alters subsequent processing of neutral faces Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 1001, 2014. @article{Riggs2014, The processing of emotional as compared to neutral information is associated with different patterns in eye movement and neural activity. However, the 'emotionality' of a stimulus can be conveyed not only by its physical properties, but also by the information that is presented with it. There is very limited work examining how emotional information may influence the immediate perceptual processing of otherwise neutral information. We examined how presenting an emotion label for a neutral face may influence subsequent processing by using eye movement monitoring (EMM) and magnetoencephalography (MEG) simultaneously. Participants viewed a series of faces with neutral expressions. Each face was followed by a unique negative or neutral sentence to describe that person, and then the same face was presented in isolation again. Viewing of faces paired with a negative sentence was associated with increased early viewing of the eye region and increased neural activity between 600 and 1200 ms in emotion processing regions such as the cingulate, medial prefrontal cortex, and amygdala, as well as posterior regions such as the precuneus and occipital cortex. Viewing of faces paired with a neutral sentence was associated with increased activity in the parahippocampal gyrus during the same time window. By monitoring behavior and neural activity within the same paradigm, these findings demonstrate that emotional information alters subsequent visual scanning and the neural systems that are presumably invoked to maintain a representation of the neutral information along with its emotional details. |
Lillian M. Rigoli; Daniel Holman; Michael J. Spivey; Christopher T. Kello In: Frontiers in Human Neuroscience, vol. 8, pp. 713, 2014. @article{Rigoli2014, When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as those arising from timing functions vs. those arising from autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior. |
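The sketch below shows a common way to estimate a 1/f spectral exponent from a behavioral time series such as inter-tap intervals, via a Welch power spectrum and a log-log fit; the simulated series and all settings are hypothetical and are not taken from the study above.

```python
import numpy as np
from scipy import signal

def pink_noise(n, rng):
    """Approximate 1/f noise: shape white noise in the frequency domain."""
    spectrum = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid dividing by zero at DC
    spectrum /= np.sqrt(f)           # 1/f power corresponds to 1/sqrt(f) amplitude
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(4)
intervals = pink_noise(1024, rng)    # stand-in for an inter-tap-interval series

freqs, psd = signal.welch(intervals, fs=1.0, nperseg=256)
mask = freqs > 0
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
print(f"spectral exponent: {-slope:.2f}")   # ~1 for 1/f noise, ~0 for white noise
```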
Simon Rigoulot; Marc D. Pell Emotion in the voice influences the way we scan emotional faces Journal Article In: Speech Communication, vol. 65, pp. 36–49, 2014. @article{Rigoulot2014, Previous eye-tracking studies have found that listening to emotionally-inflected utterances guides visual behavior towards an emotionally congruent face (e.g., Rigoulot and Pell, 2012). Here, we investigated in more detail whether emotional speech prosody influences how participants scan and fixate specific features of an emotional face that is congruent or incongruent with the prosody. Twenty-one participants viewed individual faces expressing fear, sadness, disgust, or happiness while listening to an emotionally-inflected pseudo-utterance spoken in a congruent or incongruent prosody. Participants judged whether the emotional meaning of the face and voice were the same or different (match/mismatch). Results confirm that there were significant effects of prosody congruency on eye movements when participants scanned a face, although these varied by emotion type; a matching prosody promoted more frequent looks to the upper part of fear and sad facial expressions, whereas visual attention to upper and lower regions of happy (and to some extent disgust) faces was more evenly distributed. These data suggest ways that vocal emotion cues guide how humans process facial expressions in a way that could facilitate recognition of salient visual cues, to arrive at a holistic impression of intended meanings during interpersonal events. |
Evan F. Risko; Srdan Medimorec; Joseph D. Chisholm; Alan Kingstone Rotating with rotated text: A natural behavior approach to investigating cognitive offloading Journal Article In: Cognitive Science, vol. 38, pp. 537–564, 2014. @article{Risko2014, Determining how we use our body to support cognition represents an important part of understanding the embodied and embedded nature of cognition. In the present investigation, we pursue this question in the context of a common perceptual task. Specifically, we report a series of experiments investigating head tilt (i.e., external normalization) as a strategy in letter naming and reading stimuli that are upright or rotated. We demonstrate that the frequency of this natural behavior is modulated by the cost of stimulus rotation on performance. In addition, we demonstrate that external normalization can benefit performance. All of the results are consistent with the notion that external normalization represents a form of cognitive offloading and that effort is an important factor in the decision to adopt an internal or external strategy. |
Sarah Risse Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–13, 2014. @article{Risse2014, The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. |
Sarah Risse; Reinhold Kliegl Dissociating preview validity and preview difficulty in parafoveal processing of word n + 1 during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 2, pp. 653–668, 2014. @article{Risse2014a, Many studies have shown that previewing the next word n + 1 during reading leads to substantial processing benefit (e.g., shorter word viewing times) when this word is eventually fixated. However, evidence of such preprocessing in fixations on the preceding word n, when the preview information is in fact acquired, is far less consistent. A recent study suggested that such effects may be delayed into fixations on the next word n + 1 (Risse & Kliegl, 2012). To investigate the time course with which parafoveal information acquisition affects the control of eye movements during reading, we conducted 2 gaze-contingent display-change experiments and orthogonally manipulated the processing difficulty (i.e., word frequency) of an n + 1 preview word and its validity relative to the target word. Preview difficulty did not affect fixation durations on the pretarget word n, but did affect those on the target word n + 1. In fact, the delayed preview-difficulty effect was almost of the same size as the preview benefit associated with the n + 1 preview validity. Based on additional results from quantile-regression analyses on the time course of the 2 preview effects, we discuss consequences for the integration of foveal and parafoveal information and potential implications for computational models of eye guidance in reading. |
Ardi Roelofs Tracking eye movements to localize Stroop interference in naming: Word planning versus articulatory buffering Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 5, pp. 1332–1347, 2014. @article{Roelofs2014, Investigators have found no agreement on the functional locus of Stroop interference in vocal naming. Whereas it has long been assumed that the interference arises during spoken word planning, more recently some investigators have revived an account from the 1960s and 1970s holding that the interference occurs in an articulatory buffer after word planning. Here, 2 color-word Stroop experiments are reported that tested between these accounts using eye tracking. Previous research has indicated that the shifting of eye gaze from one stimulus to another occurs before the articulatory buffer is reached in spoken word planning. In the present experiments, participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to manually indicate its direction. If Stroop interference arises in the articulatory buffer, the interference should be present in the color-naming latencies but not in the gaze shift and manual response latencies. Contrary to these predictions, Stroop interference was present in all 3 behavioral measures. These results indicate that Stroop interference arises during spoken word planning rather than in articulatory buffering. |
Gustavo Rohenkohl; Ian C. Gould; Jessica Pessoa; Anna C. Nobre Combining spatial and temporal expectations to improve visual perception Journal Article In: Journal of Vision, vol. 14, no. 4, pp. 1–13, 2014. @article{Rohenkohl2014, The importance of temporal expectations in modulating perceptual functions is increasingly recognized. However, the means through which temporal expectations can bias perceptual information processing remains ill understood. Recent theories propose that modulatory effects of temporal expectations rely on the co-existence of other biases based on receptive-field properties, such as spatial location. We tested whether perceptual benefits of temporal expectations in a perceptually demanding psychophysical task depended on the presence of spatial expectations. Foveally presented symbolic arrow cues indicated simultaneously where (location) and when (time) target events were more likely to occur. The direction of the arrow indicated target location (80% validity), while its color (pink or blue) indicated the interval (80% validity) for target appearance. Our results confirmed a strong synergistic interaction between temporal and spatial expectations in enhancing visual discrimination. Temporal expectation significantly boosted the effectiveness of spatial expectation in sharpening perception. However, benefits for temporal expectation disappeared when targets occurred at unattended locations. Our findings suggest that anticipated receptive-field properties of targets provide a natural template upon which temporal expectations can operate in order to help prioritize goal-relevant events from early perceptual stages. |
Maria C. Romero; Pierpaolo Pani; Peter Janssen Coding of shape features in the macaque anterior intraparietal area Journal Article In: Journal of Neuroscience, vol. 34, no. 11, pp. 4006–4021, 2014. @article{Romero2014, The exquisite ability of primates to grasp and manipulate objects relies on the transformation of visual information into motor commands. To this end, the visual system extracts object affordances that can be used to program and execute the appropriate grip. The macaque anterior intraparietal (AIP) area has been implicated in the extraction of affordances for the purpose of grasping. Neurons in the AIP area respond during visually guided grasping and to the visual presentation of objects. A subset of AIP neurons is also activated by two-dimensional images of objects and even by outline contours defining the object shape, but it is unknown how AIP neurons actually represent object shape. In this study, we used a stimulus reduction approach to determine the minimum effective shape feature evoking AIP responses. AIP neurons responding to outline shapes also responded selectively to very small fragment stimuli measuring only 1–2°. This fragment selectivity could not be explained by differences in eye movements or simple orientation selectivity, but proved to be highly dependent on the relative position of the stimulus in the receptive field. Our findings challenge the current understanding of the AIP area as a critical stage in the dorsal stream for the extraction of object affordances. |
Stanislav M. Sajin; Cynthia M. Connine Semantic richness: The role of semantic features in processing spoken words Journal Article In: Journal of Memory and Language, vol. 70, no. 1, pp. 13–35, 2014. @article{Sajin2014, A lexical decision experiment and two visual world paradigm (VWP) experiments are reported that investigated the role of semantic representations in recognizing spoken words. Semantic richness (NOF: number of features) influenced lexical decision reaction times in that semantically rich words (high NOF) were processed faster than semantically impoverished words (low NOF). Processing in the VWP was faster for high NOF words but only when an onset competitor was present in the display (target BREAD, onset competitor BRICK). Adding background speech babble to the spoken stimuli resulted in an advantage for processing high NOF words with and without onset competitors in the display. The results suggest that semantic representations directly contribute to the recognition of spoken words and that sub-optimal listening conditions (e.g., background babble) enhance the role of semantics. |
Robert J. Sall; Timothy J. Wright; Walter R. Boot Driven to distraction? The effect of simulated red light running camera flashes on attention and oculomotor control Journal Article In: Visual Cognition, vol. 22, no. 1, pp. 57–73, 2014. @article{Sall2014, Do similar factors influence the allocation of attention in visually sparse and abstract laboratory paradigms and complex real-world scenes? To explore this question we conducted a series of experiments that examined whether the flash that accompanies a Red Light Running Camera (RLRC) can capture observers' attention away from important roadway changes. Inhibition of Return (IOR) and eye movement direction served as indices of the spatial allocation of attention. In two experiments, participants were slower to respond to the brake lights of a vehicle in a driving scene when an RLRC flash occurred nearby or were slower to initiate eye movements to brake light signals (IOR effects). In a third experiment, we found evidence that less prevalent RLRC flashes disrupted eye movement control. Results suggest that attention can be misdirected as a result of RLRC flashes and provide additional evidence that findings from simple laboratory paradigms can predict the allocation of attention in complex settings that are more familiar to observers. |
Anne Pier Salverda; Dave Kleinschmidt; Michael K. Tanenhaus Immediate effects of anticipatory coarticulation in spoken-word recognition Journal Article In: Journal of Memory and Language, vol. 71, no. 1, pp. 145–163, 2014. @article{Salverda2014, Two visual-world experiments examined listeners' use of pre word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as "The ladder is the target". With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles which contained natural anticipatory coarticulation pertaining to the onset of the target word ("The ladder is the target"). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article's vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for "data explanation" approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. |
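The classifier idea described in the Salverda et al. abstract (predicting the upcoming word's initial sound from formant measurements in the article's vowel) can be illustrated with a minimal class-conditional Gaussian classifier. This is only a sketch of the general technique, not the authors' code; the feature values, labels, and function names below are hypothetical.

# Illustrative sketch (not the authors' code): a class-conditional Gaussian
# classifier that guesses the upcoming word's initial sound from two
# hypothetical formant measurements (e.g., F1/F2 in Hz) taken from the article's vowel.
import numpy as np

def fit_gaussian_classifier(X, y):
    """Estimate per-class means and diagonal variances. X: (n, d), y: (n,)."""
    params = {}
    for label in np.unique(y):
        Xc = X[y == label]
        params[label] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
    return params

def predict(params, x):
    """Return the class with the highest Gaussian log-likelihood for sample x."""
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=lambda c: loglik(*params[c]))

# Hypothetical training data: formants of articles preceding /l/- vs. /b/-initial words.
X = np.array([[310.0, 2100.0], [320.0, 2150.0], [305.0, 1650.0], [315.0, 1700.0]])
y = np.array(["l", "l", "b", "b"])
model = fit_gaussian_classifier(X, y)
print(predict(model, np.array([312.0, 2080.0])))  # -> "l" for this made-up token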
Hélène Samson; Nicole Fiori-Duharcourt; Karine Doré-Mazars; Christelle Lemoine; Dorine Vergilino-Perez Perceptual and gaze biases during face processing: Related or not? Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e85746, 2014. @article{Samson2014, Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e. more and/or longer fixations on the left side of the face. Here, we recorded eye-movements, in two different experiments during a gender judgment task, using normal and chimeric faces which were presented above, below, to the right or to the left of the central fixation point, or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not look necessarily toward the side of the face that we use to make a gender judgment task. Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasized the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias. |
Germán Sanchis-Trilles; Vicent Alabau; Christian Buck; Michael Carl; Francisco Casacuberta; Mercedes García-Martínez; Ulrich Germann; Jesús González-Rubio; Robin L. Hill; Philipp Koehn; Luis A. Leiva; Bartolomé Mesa-Lao; Daniel Ortiz-Martínez; Herve Saint-Amand; Chara Tsoukala; Enrique Vidal Interactive translation prediction versus conventional post-editing in practice: A study with the CasMaCat workbench Journal Article In: Machine Translation, vol. 28, no. 3-4, pp. 217–235, 2014. @article{SanchisTrilles2014, We conducted a field trial in computer-assisted professional translation to compare interactive translation prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translation hypothesis that is then edited by a professional (hence post-editing), ITP constantly updates the translation hypothesis in real time in response to user edits. Our study involved nine professional translators and four reviewers working with the web-based CasMaCat workbench. Various new interactive features aiming to assist the post-editor/translator were also tested in this trial. Our results show that even with little training, ITP can be as productive as conventional PE in terms of the total time required to produce the final translation. Moreover, translation editors working with ITP require fewer key strokes to arrive at the final version of their translation. |
Laura K. Sasse; Matthias Gamer; Christian Büchel; Stefanie Brassen Selective control of attention supports the positivity effect in aging Journal Article In: PLoS ONE, vol. 9, no. 8, pp. e104180, 2014. @article{Sasse2014, There is emerging evidence for a positivity effect in healthy aging, which describes an age-specific increased focus on positive compared to negative information. Life-span researchers have attributed this effect to the selective allocation of cognitive resources in the service of prioritized emotional goals. We explored the basic principles of this assumption by assessing selective attention and memory for visual stimuli, differing in emotional content and self-relevance, in young and old participants. To specifically address the impact of cognitive control, voluntary attentional selection during the presentation of multiple-item displays was analyzed and linked to participants' general ability of cognitive control. Results revealed a positivity effect in older adults' selective attention and memory, which was particularly pronounced for self-relevant stimuli. Focusing on positive and ignoring negative information was most evident in older participants with a generally higher ability to exert top-down control during visual search. Our findings highlight the role of controlled selectivity in the occurrence of a positivity effect in aging. Since the effect has been related to well-being in later life, we suggest that the ability to selectively allocate top-down control might represent a resilience factor for emotional health in aging. |
Michaël Sassi; Maarten Demeyer; Johan Wagemans Peripheral contour grouping and saccade targeting: The role of mirror symmetry Journal Article In: Symmetry, vol. 6, no. 1, pp. 1–22, 2014. @article{Sassi2014, Integrating shape contours in the visual periphery is vital to our ability to locate objects and thus make targeted saccadic eye movements to efficiently explore our surroundings. We tested whether global shape symmetry facilitates peripheral contour integration and saccade targeting in three experiments, in which observers responded to a successful peripheral contour detection by making a saccade towards the target shape. The target contours were horizontally (Experiment 1) or vertically (Experiments 2 and 3) mirror symmetric. Observers responded by making a horizontal (Experiments 1 and 2) or vertical (Experiment 3) eye movement. Based on an analysis of the saccadic latency and accuracy, we conclude that the figure-ground cue of global mirror symmetry in the periphery has little effect on contour integration or on the speed and precision with which saccades are targeted towards objects. The role of mirror symmetry may be more apparent under natural viewing conditions with multiple objects competing for attention, where symmetric regions in the visual field can pre-attentively signal the presence of objects, and thus attract eye movements. |
Jason Satel; Matthew D. Hilchey; Zhiguo Wang; Caroline S. Reiss; Raymond M. Klein In search of a reliable electrophysiological marker of oculomotor inhibition of return Journal Article In: Psychophysiology, vol. 51, no. 10, pp. 1037–1045, 2014. @article{Satel2014, Inhibition of return (IOR) operationalizes a behavioral phenomenon characterized by slower responding to cued, relative to uncued, targets. Two independent forms of IOR have been theorized: input-based IOR occurs when the oculomotor system is quiescent, while output-based IOR occurs when the oculomotor system is engaged. EEG studies forbidding eye movements have demonstrated that reductions of target-elicited P1 components are correlated with IOR magnitude, but when eye movements occur, P1 effects bear no relationship to behavior. We expand on this work by adapting the cueing paradigm and recording event-related potentials: IOR is caused by oculomotor responses to central arrows or peripheral onsets and measured by key presses to peripheral targets. Behavioral IOR is observed in both conditions, but P1 reductions are absent in the central arrow condition. By contrast, arrow and peripheral cues enhance Nd, especially over contralateral electrode sites. |
Daniel R. Saunders; Russell L. Woods Direct measurement of the system latency of gaze-contingent displays Journal Article In: Behavior Research Methods, vol. 46, no. 2, pp. 439–447, 2014. @article{Saunders2014, Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LED)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved. |
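Saunders and Woods measure system latency as the time between an eyetracker-blinding event and the resulting display change, both captured on 1,000 Hz video. The sketch below shows only the final arithmetic of that method, assuming two hypothetical per-frame brightness traces extracted from the video (one over the blinding LED, one over the monitor); the function and variable names are not from the paper.

# Illustrative sketch (assumptions, not the authors' code): estimate system latency
# from a 1000 fps video in which brightness jumps first when the IR "blinding"
# device fires and later when the gaze-contingent display responds.
import numpy as np

FPS = 1000.0  # high-speed camera frame rate described in the abstract

def onset_frame(trace, threshold):
    """Index of the first frame whose value exceeds threshold (simple step detector)."""
    above = np.flatnonzero(np.asarray(trace) > threshold)
    return int(above[0]) if above.size else None

def latency_ms(led_trace, display_trace, led_thresh, disp_thresh):
    """System latency = display-change onset minus LED onset, in milliseconds."""
    t_led = onset_frame(led_trace, led_thresh)
    t_disp = onset_frame(display_trace, disp_thresh)
    return (t_disp - t_led) * 1000.0 / FPS

# Hypothetical per-frame brightness traces from two regions of interest in the video.
led = np.concatenate([np.zeros(50), np.ones(100)])
display = np.concatenate([np.zeros(82), np.ones(68)])
print(latency_ms(led, display, 0.5, 0.5))  # -> 32.0 ms for this made-up trial

Repeating this over many trials gives the latency distribution the authors recommend characterizing, rather than a single value.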
Daniel J. Schad; Sarah Risse; Timothy J. Slattery; Keith Rayner Word frequency in fast priming: Evidence for immediate cognitive control of eye movements during reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 390–414, 2014. @article{Schad2014, Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm (Sereno & Rayner, 1992) was previously used to study influences of related versus unrelated primes on the target word. Here, we used this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that with average prime intervals of 32 ms readers made longer single fixation durations on the target word in the low than in the high frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after visibility of the prime and for prime durations of 32 ms yields new time constraints for cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye movement control during reading. |
Annie L. Shelton; Kim M. Cornish; Claudine Kraan; Nellie Georgiou-Karistianis; Sylvia A. Metcalfe; John L. Bradshaw; Darren R. Hocking; Alison D. Archibald; Jonathan Cohen; Julian N. Trollor; Joanne Fielding Exploring inhibitory deficits in female premutation carriers of fragile X syndrome: Through eye movements Journal Article In: Brain and Cognition, vol. 85, no. 1, pp. 201–208, 2014. @article{Shelton2014, There is evidence which demonstrates that a subset of males with a premutation CGG repeat expansion (between 55 and 200 repeats) of the fragile X mental retardation 1 gene exhibit subtle deficits of executive function that progressively deteriorate with increasing age and CGG repeat length. However, it remains unclear whether similar deficits, which may indicate the onset of more severe degeneration, are evident in female PM-carriers. In the present study we explore whether female PM-carriers exhibit deficits of executive function which parallel those of male PM-carriers. Fourteen female fragile X premutation carriers without fragile X-associated tremor/ataxia syndrome and fourteen age, sex, and IQ matched controls underwent ocular motor and neuropsychological tests of select executive processes, specifically of response inhibition and working memory. Group comparisons revealed poorer inhibitory control for female premutation carriers on ocular motor tasks, in addition to demonstrating some difficulties in behaviour self-regulation, when compared to controls. A negative correlation between CGG repeat length and antisaccade error rates for premutation carriers was also found. Our preliminary findings indicate that impaired inhibitory control may represent a phenotype characteristic which may be a sensitive risk biomarker within this female fragile X premutation population. |
Kelly Shen; Anthony R. McIntosh; Jennifer D. Ryan A working memory account of refixations in visual search Journal Article In: Journal of Vision, vol. 14, no. 14, pp. 1–11, 2014. @article{Shen2014, We tested the hypothesis that active exploration of the visual environment is mediated not only by visual attention but also by visual working memory (VWM) by examining performance in both a visual search and a change detection task. Subjects rarely fixated previously examined distracters during visual search, suggesting that they successfully retained those items. Change detection accuracy decreased with increasing set size, suggesting that subjects had a limited VWM capacity. Crucially, performance in the change detection task predicted visual search efficiency: Higher VWM capacity was associated with faster and more accurate responses as well as lower probabilities of refixation. We found no temporal delay for return saccades, suggesting that active vision is primarily mediated by VWM rather than by a separate attentional disengagement mechanism commonly associated with the inhibition-of-return (IOR) effect. Taken together with evidence that visual attention, VWM, and the oculomotor system involve overlapping neural networks, these data suggest that there exists a general capacity for cognitive processing. |
Heather Sheridan; Eyal M. Reingold Expert vs. novice differences in the detection of relevant information during a chess game: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 5, pp. 941, 2014. @article{Sheridan2014, The present study explored the ability of expert and novice chess players to rapidly distinguish between regions of a chessboard that were relevant to the best move on the board, and regions of the board that were irrelevant. Accordingly, we monitored the eye movements of expert and novice chess players, while they selected white's best move for a variety of chess problems. To manipulate relevancy, we constructed two different versions of each chess problem in the experiment, and we counterbalanced these versions across participants. These two versions of each problem were identical except that a single piece was changed from a bishop to a knight. This subtle change reversed the relevancy map of the board, such that regions that were relevant in one version of the board were now irrelevant (and vice versa). Using this paradigm, we demonstrated that both the experts and novices spent more time fixating the relevant relative to the irrelevant regions of the board. However, the experts were faster at detecting relevant information than the novices, as shown by the finding that experts (but not novices) were able to distinguish between relevant and irrelevant information during the early part of the trial. These findings further demonstrate the domain-related perceptual processing advantage of chess experts, using an experimental paradigm that allowed us to manipulate relevancy under tightly controlled conditions. |
Martha M. Shiell; François Champoux; Robert J. Zatorre Enhancement of visual motion detection thresholds in early deaf people Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e90498, 2014. @article{Shiell2014, In deaf people, the auditory cortex can reorganize to support visual motion processing. Although this cross-modal reorganization has long been thought to subserve enhanced visual abilities, previous research has been unsuccessful at identifying behavioural enhancements specific to motion processing. Recently, research with congenitally deaf cats has uncovered an enhancement for visual motion detection. Our goal was to test for a similar difference between deaf and hearing people. We tested 16 early and profoundly deaf participants and 20 hearing controls. Participants completed a visual motion detection task, in which they were asked to determine which of two sinusoidal gratings was moving. The speed of the moving grating varied according to an adaptive staircase procedure, allowing us to determine the lowest speed necessary for participants to detect motion. Consistent with previous research in deaf cats, the deaf group had lower motion detection thresholds than the hearing. This finding supports the proposal that cross-modal reorganization after sensory deprivation will occur for supramodal sensory features and preserve the output functions. |
Yoshihito Shigihara; Semir Zeki Parallel processing of face and house stimuli by V1 and specialized visual areas: A magnetoencephalographic (MEG) study Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 901, 2014. @article{Shigihara2014, We used easily distinguishable stimuli of faces and houses constituted from straight lines, with the aim of learning whether they activate V1 on the one hand, and the specialized areas that are critical for the processing of faces and houses on the other, with similar latencies. Eighteen subjects took part in the experiment, which used magnetoencephalography (MEG) coupled to analytical methods to detect the time course of the earliest responses which these stimuli provoke in these cortical areas. Both categories of stimuli activated V1 and areas of the visual cortex outside it at around 40 ms after stimulus onset, and the amplitude elicited by face stimuli was significantly larger than that elicited by house stimuli. These results suggest that "low-level" and "high-level" features of form stimuli are processed in parallel by V1 and visual areas outside it. Taken together with our previous results on the processing of simple geometric forms (Shigihara and Zeki, 2013; Shigihara and Zeki, 2014), the present ones reinforce the conclusion that parallel processing is an important component in the strategy used by the brain to process and construct forms. |
Lutz Schega; Daniel Hamacher; Sandra Erfuth; Wolfgang Behrens-Baumann; Juliane Reupsch; Michael B. Hoffmann Differential effects of head-mounted displays on visual performance Journal Article In: Ergonomics, vol. 57, no. 1, pp. 1–11, 2014. @article{Schega2014, Head-mounted displays (HMDs) virtually augment the visual world to aid visual task completion. Three types of HMDs were compared [look around (LA); optical see-through with organic light emitting diodes and virtual retinal display] to determine whether LA, leaving the observer functionally monocular, is inferior. Response times and error rates were determined for a combined visual search and Go-NoGo task. The costs of switching between displays were assessed separately. Finally, HMD effects on basic visual functions were quantified. Effects of HMDs on the visual search and Go-NoGo task were small, but display-switching costs for the Go-NoGo task were pronounced with the LA display. Basic visual functions were most affected for LA (reduced visual acuity and visual field sensitivity, inaccurate vergence movements and absent stereo-vision). LA involved comparatively high switching costs for the Go-NoGo task, which might indicate reduced processing of external control cues. Reduced basic visual functions are a likely cause of this effect. |
Joseph Schmidt; Annmarie MacNamara; Greg Hajcak Proudfit; Gregory J. Zelinsky More target features in visual working memory leads to poorer search guidance: Evidence from contralateral delay activity Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–19, 2014. @article{Schmidt2014, The visual-search literature has assumed that the top-down target representation used to guide search resides in visual working memory (VWM). We directly tested this assumption using contralateral delay activity (CDA) to estimate the VWM load imposed by the target representation. In Experiment 1, observers previewed four photorealistic objects and were cued to remember the two objects appearing to the left or right of central fixation; Experiment 2 was identical except that observers previewed two photorealistic objects and were cued to remember one. CDA was measured during a delay following preview offset but before onset of a four-object search array. One of the targets was always present, and observers were asked to make an eye movement to it and press a button. We found lower magnitude CDA on trials when the initial search saccade was directed to the target (strong guidance) compared to when it was not (weak guidance). This difference also tended to be larger shortly before search-display onset and was largely unaffected by VWM item-capacity limits or number of previews. Moreover, the difference between mean strong- and weak-guidance CDA was proportional to the increase in search time between mean strong- and weak-guidance trials (as measured by time-to-target and reaction-time difference scores). Contrary to most search models, our data suggest that maintaining more target features in VWM results in poorer search guidance to the target. We interpret these counterintuitive findings as evidence for strong search guidance using a small set of highly discriminative target features that remain after pruning from a larger set of features, with the load imposed on VWM varying with this feature-consolidation process. |
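The CDA index used by Schmidt et al. is conventionally computed as the contralateral-minus-ipsilateral ERP difference at posterior electrode pairs during the retention interval. The following is a rough, self-contained sketch of that conventional computation only, not the authors' pipeline; the channel indices, trial counts, window, and data are hypothetical.

# Illustrative sketch (assumptions, not the authors' pipeline): compute a
# contralateral delay activity (CDA) estimate from hypothetical epoched EEG data.
import numpy as np

def cda(epochs, left_chan, right_chan, cue_side, delay_idx):
    """epochs: (n_trials, n_channels, n_samples); cue_side: array of 'L'/'R' per trial."""
    left_cue = (cue_side == "L")[:, None]  # broadcast the cue side over time samples
    contra = np.where(left_cue, epochs[:, right_chan], epochs[:, left_chan])
    ipsi = np.where(left_cue, epochs[:, left_chan], epochs[:, right_chan])
    # Mean contralateral-minus-ipsilateral amplitude within the delay window
    return (contra - ipsi)[:, delay_idx].mean()

# Hypothetical data: 20 trials, 2 channels (0 = a left posterior site, 1 = its right homologue), 500 samples.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((20, 2, 500))
cue_side = rng.choice(np.array(["L", "R"]), size=20)
print(cda(epochs, left_chan=0, right_chan=1, cue_side=cue_side, delay_idx=slice(300, 500)))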
Sebastian Schneegans; John P. Spencer; Gregor Schoner; Seongmin Hwang; Andrew Hollingworth Dynamic interactions between visual working memory and saccade target selection Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–23, 2014. @article{Schneegans2014, Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task- irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. |
Dana Schneider; Zoie E. Nott; Paul E. Dux Task instructions and implicit theory of mind Journal Article In: Cognition, vol. 133, no. 1, pp. 43–47, 2014. @article{Schneider2014, It has been hypothesized that humans are able to track others' mental states efficiently and without being conscious of doing so using their implicit theory of mind (iToM) system. However, while iToM appears to operate unconsciously, recent work suggests it does draw on executive attentional resources (Schneider, Lam, Bayliss, & Dux, 2012), bringing into question whether iToM is engaged efficiently. Here, we examined other aspects relating to automatic processing: The extent to which the operation of iToM is controllable and how it is influenced by behavioral intentions. This was implemented by assessing how task instructions affect eye-movement patterns in a Sally-Anne false-belief task. One group of subjects was given no task instructions (No Instructions), another overtly judged the location of a ball a protagonist interacted with (Ball Tracking), and a third indicated the location consistent with the actor's belief about the ball's location (Belief Tracking). Despite different task goals, all groups' eye-movement patterns were consistent with belief analysis, and the No Instructions and Ball Tracking groups reported no explicit mentalizing when debriefed. These findings represent definitive evidence that humans implicitly track the belief states of others in an uncontrollable and unintentional manner. |
Dana Schneider; Virginia P. Slaughter; Stefanie I. Becker; Paul E. Dux Implicit false-belief processing in the human brain Journal Article In: NeuroImage, vol. 101, pp. 268–275, 2014. @article{Schneider2014a, Eye-movement patterns in 'Sally-Anne' tasks reflect humans' ability to implicitly process the mental states of others, particularly false-beliefs - a key theory of mind (ToM) operation. It has recently been proposed that an efficient ToM system, which operates in the absence of awareness (implicit ToM, iToM), subserves the analysis of belief-like states. This contrasts to consciously available belief processing, performed by the explicit ToM system (eToM). The frontal, temporal and parietal cortices are engaged when humans explicitly 'mentalize' about others' beliefs. However, the neural underpinnings of implicit false-belief processing and the extent to which they draw on networks involved in explicit general-belief processing are unknown. Here, participants watched 'Sally-Anne' movies while fMRI and eye-tracking measures were acquired simultaneously. Participants displayed eye-movements consistent with implicit false-belief processing. After independently localizing the brain areas involved in explicit general-belief processing, only the left anterior superior temporal sulcus and precuneus revealed greater blood-oxygen-level-dependent activity for false- relative to true-belief trials in our iToM paradigm. No such difference was found for the right temporal-parietal junction despite significant activity in this area. These findings fractionate brain regions that are associated with explicit general ToM reasoning and false-belief processing in the absence of awareness. |
Christina Schonberg; Catherine M. Sandhofer; Tawny Tsang; Scott P. Johnson Does bilingual experience affect early visual perceptual development? Journal Article In: Frontiers in Psychology, vol. 5, pp. 1429, 2014. @article{Schonberg2014a, Visual attention and perception develop rapidly during the first few months after birth, and these behaviors are critical components in the development of language and cognitive abilities. Here we ask how early bilingual experiences might lead to differences in visual attention and perception. Experiments 1-3 investigated the looking behavior of monolingual and bilingual infants when presented with social (Experiment 1), mixed (Experiment 2), or non-social (Experiment 3) stimuli. In each of these experiments, infants' dwell times (DT) and number of fixations to areas of interest (AOIs) were analyzed, giving a sense of where the infants looked. To examine how the infants looked at the stimuli in a more global sense, Experiment 4 combined and analyzed the saccade data collected in Experiments 1-3. There were no significant differences between monolingual and bilingual infants' DTs, AOI fixations, or saccade characteristics (specifically, frequency, and amplitude) in any of the experiments. These results suggest that monolingual and bilingual infants process their visual environments similarly, supporting the idea that the substantial cognitive differences between monolinguals and bilinguals in early childhood are more related to active vocabulary production than perception of the environment. |
Tom Schonberg; Akram Bakkour; Ashleigh M. Hover; Jeanette A. Mumford; Lakshya Nagar; Jacob Perez; Russell A. Poldrack Changing value through cued approach: An automatic mechanism of behavior change Journal Article In: Nature Neuroscience, vol. 17, no. 4, pp. 625–630, 2014. @article{Schonberg2014, It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex. |
Elizabeth R. Schotter; Klinton Bicknell; Ian Howard; Roger P. Levy; Keith Rayner Task effects reveal cognitive flexibility responding to frequency and predictability: Evidence from eye movements in reading and proofreading Journal Article In: Cognition, vol. 131, no. 1, pp. 1–27, 2014. @article{Schotter2014a, It is well-known that word frequency and predictability affect processing time. These effects change magnitude across tasks, but studies testing this use tasks with different response types (e.g., lexical decision, naming, and fixation time during reading; Schilling, Rayner, & Chumbley, 1998), preventing direct comparison. Recently, Kaakinen and Hyönä (2010) overcame this problem, comparing fixation times in reading for comprehension and proofreading, showing that the frequency effect was larger in proofreading than in reading. This result could be explained by readers exhibiting substantial cognitive flexibility, and qualitatively changing how they process words in the proofreading task in a way that magnifies effects of word frequency. Alternatively, readers may not change word processing so dramatically, and instead may perform more careful identification generally, increasing the magnitude of many word processing effects (e.g., both frequency and predictability). We tested these possibilities with two experiments: subjects read for comprehension and then proofread for spelling errors (letter transpositions) that produce nonwords (e.g., trcak for track as in Kaakinen & Hyönä) or that produce real but unintended words (e.g., trial for trail) to compare how the task changes these effects. Replicating Kaakinen and Hyönä, frequency effects increased during proofreading. However, predictability effects only increased when integration with the sentence context was necessary to detect errors (i.e., when spelling errors produced words that were inappropriate in the sentence; trial for trail). The results suggest that readers adopt sophisticated word processing strategies to accommodate task demands. |
Elizabeth R. Schotter; Annie Jia; Victor S. Ferreira; Keith Rayner Preview benefit in speaking occurs regardless of preview timing Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 3, pp. 755–762, 2014. @article{Schotter2014b, Speakers access information from objects they will name but have not looked at yet, indexed by preview benefit: faster processing of the target when a preview object previously occupying its location was related rather than unrelated to the target. This suggests that speakers distribute attention over multiple objects, but it does not reveal the time course of the processing of a current and a to-be-named object. Is the preview benefit a consequence of attention shifting to the next-to-be-named object shortly before the eyes move to that location, or does the benefit reflect a more unconstrained deployment of attention to upcoming objects? Using the multiple-object naming paradigm with a gaze-contingent display change manipulation, we addressed this issue by manipulating the latency of the onset of the preview (SOA) and whether the preview represented the same concept as (but a different visual token of) the target or an unrelated concept. The results revealed that the preview benefit was robust, regardless of the latency of the preview onset or the latency of the saccade to the target (the lag between preview offset and fixation on the target). Together, these data suggest that preview benefit is not restricted to the time during an attention shift preceding an eye movement, and that speakers are able to take advantage of information from nonfoveal objects whenever such objects are visually available. |
Elizabeth R. Schotter; Randy Tran; Keith Rayner Don't believe what you read (Only Once): Comprehension is supported by regressions during reading Journal Article In: Psychological Science, vol. 25, no. 6, pp. 1218–1226, 2014. @article{Schotter2014, Recent Web apps have spurred excitement around the prospect of achieving speed reading by eliminating eye movements (i.e., with rapid serial visual presentation, or RSVP, in which words are presented briefly one at a time and sequentially). Our experiment using a novel trailing-mask paradigm contradicts these claims. Subjects read normally or while the display of text was manipulated such that each word was masked once the reader's eyes moved past it. This manipulation created a scenario similar to RSVP: The reader could read each word only once; regressions (i.e., rereadings of words), which are a natural part of the reading process, were functionally eliminated. Crucially, the inability to regress affected comprehension negatively. Furthermore, this effect was not confined to ambiguous sentences. These data suggest that regressions contribute to the ability to understand what one has read and call into question the viability of speed-reading apps that eliminate eye movements (e.g., those that use RSVP). |
Daniel Schreij; Sander A. Los; Jan Theeuwes; James T. Enns; Christian N. L. Olivers The interaction between stimulus-driven and goal-driven orienting as revealed by eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 378–390, 2014. @article{Schreij2014, It is generally agreed that attention can be captured in a stimulus-driven or in a goal-driven fashion. In studies that investigated both types of capture, the effects on mean manual response time (reaction time [RT]) are generally additive, suggesting two independent underlying processes. However, potential interactions between the two types of capture may fail to be expressed in manual RT, as it likely reflects multiple processing steps. Here we measured saccadic eye movements along with manual responses. Participants searched a target display for a red letter. To assess contingent capture, this display was preceded by an irrelevant red cue. To assess stimulus-driven capture, the target display could be accompanied by the simultaneous onset of an irrelevant new object. At the level of eye movements, the results showed strong interactions between cue validity and onset presence on the spatiotemporal trajectories of the saccades. However, at the level of manual responses, these effects cancelled out, leading to additive effects on mean RT. We conclude that both types of capture influence a shared spatial orienting mechanism and we provide a descriptive computational model of their dynamics. |
Keith Rayner; Elizabeth R. Schotter Semantic preview benefit in reading English: The effect of initial letter capitalization Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1617–1628, 2014. @article{Rayner2014a, A major controversy in reading research is whether semantic information is obtained from the word to the right of the currently fixated word (word n + 1). Although most evidence has been negative in English, semantic preview benefit has been observed for readers of Chinese and German. In the present experiment, we investigated whether the discrepancy between English and German may be attributable to a difference in visual properties of the orthography: the first letter of a noun is always capitalized in German, but is only occasionally capitalized in English. This visually salient property may draw greater attention to the word during parafoveal preview and thus increase preview benefit generally (and lead to a greater opportunity for semantic preview benefit). We used English target nouns that can either be capitalized (e.g., We went to the critically acclaimed Ballet of Paris while on vacation.) or not (e.g., We went to the critically acclaimed ballet that was showing in Paris.) and manipulated the capitalization of the preview accordingly, to determine whether capitalization modulates preview benefit in English. The gaze-contingent boundary paradigm was used with identical, semantically related, and unrelated previews. Consistent with our hypothesis, we found numerically larger preview benefits when the preview/target was capitalized than when it was lowercase. Crucially, semantic preview benefit was not observed when the preview/target word was not capitalized, but was observed when the preview/target word was capitalized. |
Keith Rayner; Jinmian Yang; Susanne Schuett; Timothy J. Slattery The effect of foveal and parafoveal masks on the eye movements of older and younger readers Journal Article In: Psychology and Aging, vol. 29, no. 2, pp. 205–212, 2014. @article{Rayner2014b, In the present study, we examined foveal and parafoveal processing in older compared with younger readers by using gaze-contingent paradigms with 4 conditions. Older and younger readers read sentences in which the text was either a) presented normally, b) the foveal word was masked as soon as it was fixated, c) all of the words to the left of the fixated word were masked, or d) all of the words to the right of the fixated word were masked. Although older and younger readers both found reading when the fixated word was masked quite difficult, the foveal mask increased sentence reading time more than 3-fold (3.4) for the older readers (in comparison with the control condition in which the sentence was presented normally) compared with the younger readers who took 1.3 times longer to read sentences in the foveal mask condition (in comparison with the control condition). The left and right parafoveal masks did not disrupt reading as severely as the foveal mask, though the right mask was more disruptive than the left mask. Also, there was some indication that the younger readers found the right mask condition relatively more disruptive than the left mask condition. |
Scott A. Reed; Paul Dassonville Adaptation to leftward-shifting prisms enhances local processing in healthy individuals Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 418–427, 2014. @article{Reed2014, In healthy individuals, adaptation to left-shifting prisms has been shown to simulate the symptoms of hemispatial neglect, including a reduction in global processing that approximates the local bias observed in neglect patients. The current study tested whether leftward prism adaptation can more specifically enhance local processing abilities. In three experiments, the impact of local and global processing was assessed through tasks that measure susceptibility to illusions that are known to be driven by local or global contextual effects. Susceptibility to the rod-and-frame illusion - an illusion disproportionately driven by both local and global effects depending on frame size - was measured before and after adaptation to left- and right-shifting prisms. A significant increase in rod-and-frame susceptibility was found for the left-shifting prism group, suggesting that adaptation caused an increase in local processing effects. The results of a second experiment confirmed that leftward prism adaptation enhances local processing, as assessed with susceptibility to the simultaneous-tilt illusion. A final experiment employed a more specific measure of the global effect typically associated with the rod-and-frame illusion, and found that although the global effect was somewhat diminished after leftward prism adaptation, the trend failed to reach significance (p=.078). Rightward prism adaptation had no significant effects on performance in any of the experiments. Combined, these findings indicate that leftward prism adaptation in healthy individuals can simulate the local processing bias of neglect patients primarily through an increased sensitivity to local visual cues, and confirm that prism adaptation not only modulates lateral shifts of attention, but also prompts shifts from one level of processing to another. |
Eyal M. Reingold Eye tracking research and technology: Towards objective measurement of data quality Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 635–652, 2014. @article{Reingold2014, Two methods for objectively measuring eye tracking data quality are explored. The first method works by tricking the eye tracker to detect an abrupt change in the gaze position of an artificial eye that in actuality does not move. Such a device, referred to as an artificial saccade generator, is shown to be extremely useful for measuring the temporal accuracy and precision of eye tracking systems and for validating the latency to display change in gaze contingent display paradigms. The second method involves an artificial pupil that is mounted on a computer controlled moving platform. This device is designed to be able to provide the eye tracker with motion sequences that closely resemble biological eye movements. The main advantage of using artificial motion for testing eye tracking data quality is the fact that the spatiotemporal signal is fully specified in a manner independent of the eye tracker that is being evaluated and that nearly identical motion sequence can be reproduced multiple times with great precision. The results of the present study demonstrate that the equipment described has the potential to become an important tool in the comprehensive evaluation of data quality. |
Eyal M. Reingold; Mackenzie G. Glaholt Cognitive control of fixation duration in visual search: The role of extrafoveal processing Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 610–634, 2014. @article{Reingold2014a, Participants' eye movements were monitored in two visual search experiments that manipulated target-distractor similarity (high vs. low) as well as the availability of distractors for extrafoveal processing (Free-Viewing vs. No-Preview). The influence of the target-distractor similarity by preview manipulation on the distributions of first fixation and second fixation duration was examined by using a survival analysis technique which provided precise estimates of the timing of the first discernible influence of target-distractor similarity on fixation duration. We found a significant influence of target-distractor similarity on first fixation duration in normal visual search (Free-Viewing) as early as 26–28 ms from the start of fixation. In contrast, the influence of target-distractor similarity occurred much later (199–233 ms) in the No-Preview condition. The present study also documented robust and fast-acting extrafoveal and foveal preview effects. Implications for models of eye-movement control and visual search are discussed. |
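A minimal sketch of the divergence-point (survival) analysis referenced in this abstract: a survival curve gives, for each time t, the proportion of fixations still ongoing at t, and the earliest time at which two conditions' curves separate reliably estimates when a manipulation first influences fixation duration. The simulated inputs, bootstrap confidence-interval criterion, and 40-ms persistence rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def survival_curve(durations, t_max=600):
    """Proportion of fixations still ongoing at each millisecond t = 0..t_max."""
    t = np.arange(t_max + 1)
    durations = np.asarray(durations)
    return (durations[None, :] > t[:, None]).mean(axis=1)

def divergence_point(cond_a, cond_b, n_boot=1000, t_max=600, run=40, seed=0):
    """Earliest time at which the two survival curves differ reliably for
    `run` consecutive milliseconds (simplified bootstrap CI criterion)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty((n_boot, t_max + 1))
    for i in range(n_boot):
        a = rng.choice(cond_a, size=len(cond_a), replace=True)
        b = rng.choice(cond_b, size=len(cond_b), replace=True)
        diffs[i] = survival_curve(a, t_max) - survival_curve(b, t_max)
    lo, hi = np.percentile(diffs, [2.5, 97.5], axis=0)
    diverged = (lo > 0) | (hi < 0)              # CI excludes zero
    for t in range(t_max + 1 - run):
        if diverged[t:t + run].all():
            return t
    return None

# Simulated fixation durations (ms): the "difficult" condition is shifted
# toward longer durations, so its survival curve stays higher for longer.
easy = np.random.default_rng(1).gamma(shape=8.0, scale=25.0, size=500)
hard = np.random.default_rng(2).gamma(shape=8.0, scale=28.0, size=500)
print("Estimated divergence point (ms):", divergence_point(easy, hard))
```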
Gabriel Reyes; Jérôme Sackur Introspection during visual search Journal Article In: Consciousness and Cognition, vol. 29, pp. 212–229, 2014. @article{Reyes2014, Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless "pop-out" search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks. |
Theo Rhodes; Christopher T. Kello; Bryan Kerster Intrinsic and extrinsic contributions to heavy tails in visual foraging Journal Article In: Visual Cognition, vol. 22, no. 6, pp. 809–842, 2014. @article{Rhodes2014, Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive to eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed. |
Alby Richard; Jan Churan; Veronica Whitford; Gillian A. O'Driscoll; Debra Titone; Christopher C. Pack Perisaccadic perception of visual space in people with schizophrenia Journal Article In: Journal of Neuroscience, vol. 34, no. 14, pp. 4760–4765, 2014. @article{Richard2014, Corollary discharge signals are found in the nervous systems of many animals, where they serve a large variety of functions related to the integration of sensory and motor signals. In humans, an important corollary discharge signal is generated by oculomotor structures and communicated to sensory systems in concert with the execution of each saccade. This signal is thought to serve a number of purposes related to the maintenance of accurate visual perception. The properties of the oculomotor corollary discharge can be probed by asking subjects to localize stimuli that are flashed briefly around the time of a saccade. The results of such experiments typically reveal large errors in localization. Here, we have exploited these well-known psychophysical effects to assess the potential dysfunction of corollary discharge signals in people with schizophrenia. In a standard perisaccadic localization task, we found that, compared with controls, patients with schizophrenia exhibited larger errors in localizing visual stimuli. The pattern of errors could be modeled as an overdamped corollary discharge signal that encodes instantaneous eye position. The dynamics of this signal predicted symptom severity among patients, suggesting a possible mechanistic basis for widely observed behavioral manifestations of schizophrenia. |
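The perisaccadic mislocalization pattern described in this abstract can be illustrated with a simplified forward model in which the corollary-discharge (CD) estimate of eye position is an overdamped, lagging copy of the true eye position; a flash at screen position x is then perceived at x - eye(t) + CD(t), so localization errors peak around the time of the saccade. The dynamics, gains, and time constants below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

dt = 0.001                                    # 1-ms time step
t = np.arange(-0.2, 0.4, dt)                  # time relative to saccade onset (s)
amp = 10.0                                    # saccade amplitude (deg)

# True eye position: a fast saccade approximated by a smooth step (~50 ms).
eye = amp / (1.0 + np.exp(-(t - 0.025) / 0.008))

# CD estimate: overdamped second-order system driven by the true eye position,
# x'' = wn^2 * (eye - x) - 2 * zeta * wn * x', with damping ratio zeta > 1.
wn, zeta = 40.0, 2.0
cd = np.zeros_like(eye)
vel = 0.0
for i in range(1, len(t)):
    acc = wn**2 * (eye[i - 1] - cd[i - 1]) - 2.0 * zeta * wn * vel
    vel += acc * dt
    cd[i] = cd[i - 1] + vel * dt

# Predicted localization error for a flash at time t_flash is cd - eye:
# near zero well before and after the saccade, largest around it.
error = cd - eye
for t_flash in (-0.05, 0.0, 0.05, 0.1, 0.2):
    i = int(np.argmin(np.abs(t - t_flash)))
    print(f"flash at {1000 * t_flash:+4.0f} ms: predicted error {error[i]:+.2f} deg")
```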
Fabio Richlan; Benjamin Gagl; Stefan Hawelka; Mario Braun; Matthias Schurz; Martin Kronbichler; Florian Hutzler In: Cerebral Cortex, vol. 24, no. 10, pp. 2647–2656, 2014. @article{Richlan2014, The present study investigated the feasibility of using self-paced eye movements during reading (measured by an eye tracker) as markers for calculating hemodynamic brain responses measured by functional magnetic resonance imaging (fMRI). Specifically, we were interested in whether the fixation-related fMRI analysis approach was sensitive enough to detect activation differences between reading material (words and pseudowords) and nonreading material (line and unfamiliar Hebrew strings). Reliable reading-related activation was identified in left hemisphere superior temporal, middle temporal, and occipito-temporal regions including the visual word form area (VWFA). The results of the present study are encouraging insofar as fixation-related analysis could be used in future fMRI studies to clarify some of the inconsistent findings in the literature regarding the VWFA. Our study is the first step in investigating specific visual word recognition processes during self-paced natural sentence reading via simultaneous eye tracking and fMRI, thus aiming at an ecologically valid measurement of reading processes. We provided the proof of concept and methodological framework for the analysis of fixation-related fMRI activation in the domain of reading research. |
Katrin Riese; Mareike Bayer; Gerhard Lauer; Annekathrin Schacht In the eye of the recipient: Pupillary responses to suspense in literary classics Journal Article In: Scientific Study of Literature, vol. 4, no. 2, pp. 211–232, 2014. @article{Riese2014, Plot suspense is one of the most important components of narrative fiction that motivate recipients to follow fictional characters through their worlds. The present study investigates the dynamic development of narrative suspense in excerpts of literary classics from the 19th century in a multi-methodological approach. For two texts, differing in suspense as judged by a large independent sample, we collected (a) data from questionnaires, indicating different affective and cognitive dimensions of receptive engagement, (b) continuous ratings of suspense during text reception from both experts and lay recipients, and (c) registration of pupil diameter as a physiological indicator of changes in emotional arousal and attention during reception. Data analyses confirmed differences between the two texts at different dimensions of receptive engagement and, importantly, revealed significant correlations of pupil diameter and the course of suspense over time. Our findings demonstrate that changes of the pupil diameter provide a reliable 'online' indicator of suspense. |
Mark W. Schurgin; Jonathan I. Flombaum How undistorted spatial memories can produce distorted responses Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 5, pp. 1371–1380, 2014. @article{Schurgin2014, Reproducing the location of an object from the contents of spatial working memory requires the translation of a noisy representation into an action at a single location-for instance, a mouse click or a mark with a writing utensil. In many studies, these kinds of actions result in biased responses that suggest distortions in spatial working memory. We sought to investigate the possibility of one mechanism by which distortions could arise, involving an interaction between undistorted memories and nonuniformities in attention. Specifically, the resolution of attention is finer below than above fixation, which led us to predict that bias could arise if participants tend to respond in locations below as opposed to above fixation. In Experiment 1 we found such a bias to respond below the true position of an object. Experiment 2 demonstrated with eye-tracking that fixations during response were unbiased and centered on the remembered object's true position. Experiment 3 further evidenced a dependency on attention relative to fixation, by shifting the effect horizontally when participants were required to tilt their heads. Together, these results highlight the complex pathway involved in translating probabilistic memories into discrete actions, and they present a new attentional mechanism by which undistorted spatial memories can lead to distorted reproduction responses. |
Mark W. Schurgin; J. Nelson; S. Iida; Hideki Ohira; J. Y. Chiao; Steven L. Franconeri Eye movements during emotion recognition in faces Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–16, 2014. @article{Schurgin2014a, When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face. |
Alexander C. Schütz Interindividual differences in preferred directions of perceptual and motor decisions Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–17, 2014. @article{Schuetz2014, Both the perceptual system and the motor system can be faced with ambiguous information and then have to choose between different alternatives. Often these alternatives involve decisions about directions, and anisotropies have been reported for different tasks. Here we measured interindividual differences and temporal stability of directional preferences in eye movement, motion perception, and thumb movement tasks. In all tasks, stimuli were created such that observers had to decide between two opposite directions in each trial and preferences were measured at 12 axes around the circle. There were clear directional preferences in all utilized tasks. The strongest effects were present in tasks that involved motion, like the smooth pursuit eye movement, apparent motion, and structure-from-motion tasks. The weakest effects were present in the saccadic eye movement task. Observers with strong directional preferences in the eye movement tasks showed shorter latency costs for target-conflict trials compared to single-target trials, suggesting that directional preferences might be advantageous for solving the target conflict. Although there were consistent preferences across observers in most of the tasks, there was also considerable variability in preferred directions between observers. The magnitude of preferences and the preferred directions were correlated only between a few tasks. While the magnitude of preferences varied substantially over time, the direction of these preferences was stable over several weeks. These results indicate that individually stable directional preferences exist in a range of perceptual and motor tasks. |
Alexander C. Schütz; Dirk Kerzel; David Souto Saccadic adaptation induced by a perceptual task Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–19, 2014. @article{Schuetz2014a, The human motor system and muscles are subject to fluctuations in the short and long term. Motor adaptation is classically thought of as a low-level process that compensates for the error between predicted and executed movements in order to maintain movement accuracy. Contrary to a low-level account, accurate movements might be only a means to support high-level behavioral and perceptual goals. To isolate the influence of high-level goals in adaptation of saccadic eye movements, we manipulated perceptual task requirements in the absence of low-level errors. Observers had to discriminate one character within a peripheral array of characters. Between trials, the location of this character within the array was changed. This manipulation led to an immediate strategic change and a slower, gradual adaptation of saccade amplitude and direction. These changes had a similar magnitude to classical saccade adaptation and transferred at least partially to reactive saccades without a perceptual task. These results suggest that a perceptual task can modify oculomotor commands by generating a top-down error signal in saccade maps just like a bottom-up visual position error. Hence saccade adaptation not only maintains saccadic targeting accuracy, but also optimizes gaze behavior for the behavioral goal, showing that perception shapes even low-level oculomotor mechanisms. |
D. Samuel Schwarzkopf; Elaine J. Anderson; Benjamin Haas; Sarah J. White; Geraint Rees Larger extrastriate population receptive fields in autism spectrum disorders Journal Article In: Journal of Neuroscience, vol. 34, no. 7, pp. 2713–2724, 2014. @article{Schwarzkopf2014, Previous behavioral research suggests enhanced local visual processing in individuals with autism spectrum disorders (ASDs). Here we used functional MRI and population receptive field (pRF) analysis to test whether the response selectivity of human visual cortex is atypical in individuals with high-functioning ASDs compared with neurotypical, demographically matched controls. For each voxel, we fitted a pRF model to fMRI signals measured while participants viewed flickering bar stimuli traversing the visual field. In most extrastriate regions, perifoveal pRFs were larger in the ASD group than in controls. We observed no differences in V1 or V3A. Differences in the hemodynamic response function, eye movements, or increased measurement noise could not account for these results; individuals with ASDs showed stronger, more reliable responses to visual stimulation. Interestingly, pRF sizes also correlated with individual differences in autistic traits but there were no correlations with behavioral measures of visual processing. Our findings thus suggest that visual cortex in ASDs is not characterized by sharper spatial selectivity. Instead, we speculate that visual cortical function in ASDs may be characterized by extrastriate cortical hyperexcitability or differential attentional deployment. |
Caspar M. Schwiedrzik; Christian C. Ruff; Andreea Lazar; Frauke C. Leitner; Wolf Singer; Lucia Melloni Untangling perceptual memory: Hysteresis and adaptation map into separate cortical networks Journal Article In: Cerebral Cortex, vol. 24, no. 5, pp. 1152–1164, 2014. @article{Schwiedrzik2014, Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain "decide" what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: An attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although affecting our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: Hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function). |
Aroline E. Seibert Hanson; Matthew T. Carlson The roles of first language and proficiency in L2 processing of Spanish clitics: Global effects Journal Article In: Language Learning, vol. 64, pp. 310–342, 2014. @article{SeibertHanson2014, We assessed the roles of first language (L1) and second language (L2) proficiency in the processing of preverbal clitics in L2 Spanish by considering the predictions of four processing theories—the Input Processing Theory, the Unified Competition Model, the Amalgamation Model, and the Associative-Cognitive CREED. We compared the performance of L1 English (typologically different from Spanish) to L1 Romanian (typologically similar to Spanish) speakers from various L2 Spanish proficiency levels on an auditory sentence-processing task. We found main effects of proficiency, condition, and L1, and an interaction between proficiency and condition. Although we did not find an interaction between L1 and condition, the L1 Romanians showed an overall advantage that may be attributable to structure-specific experience in the L1, raising new questions about how crosslinguistic differences influence the processing strategies learners apply to their L2. |
Mehrdad Seirafi; Peter De Weerd; Beatrice De Gelder Suppression of face perception during saccadic eye movements Journal Article In: Journal of Ophthalmology, vol. 2014, pp. 1–7, 2014. @article{Seirafi2014, Lack of awareness of a stimulus briefly presented during a saccadic eye movement is known as saccadic omission. Studying the reduced visibility of visual stimuli around the time of a saccade (known as saccadic suppression) is a key step in investigating saccadic omission. To date, almost all studies have focused on the reduced visibility of simple stimuli such as flashes and bars; the extension of these results from simple stimuli to more complex objects has been neglected. In two experimental tasks, we measured the subjective and objective awareness of briefly presented face stimuli during saccadic eye movements. In the first task, we measured the subjective awareness of the visual stimuli and showed that in most trials there is no conscious awareness of the faces. In the second task, we measured objective sensitivity in a two-alternative forced choice (2AFC) face detection task, which demonstrated chance-level performance. Here, we provide the first evidence of complete suppression of complex visual stimuli during saccadic eye movements. |
Yamila Sevilla; Mora Maldonado; Diego E. Shalom Pupillary dynamics reveal computational cost in sentence planning Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 6, pp. 1041–1052, 2014. @article{Sevilla2014, This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed higher increase in pupil size for the production of passive and object dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. Differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production. |
Aasef G. Shaikh; Fatema F. Ghasia Gaze holding after anterior-inferior temporal lobectomy Journal Article In: Neurological Sciences, vol. 35, no. 11, pp. 1749–1756, 2014. @article{Shaikh2014, Eye position-sensitive neurons are found in the parieto-occipital and anterior-inferior temporal cortex. The putative role of these neurons is to facilitate the transformation of reference frames from retina-fixed to world-fixed coordinates and thereby assure precise action. We assessed the nature of the ocular motor disorder in a subject who had undergone selective resection of the right anterior-inferior temporal cortex for the treatment of intractable epilepsy caused by cortical dysplasia. Gaze was stable when the subject was viewing straight ahead, but centrally directed drifts in eye position were seen during eccentric horizontal gaze holding. Eye-in-orbit position determined drift velocity and its direction. Conjugate, sinusoidal vertical oscillations were also present. The horizontal drifts and vertical oscillations became prominent and disconjugate in the absence of visual cues. The gaze-holding deficit was consistent with impaired neural integration, but in the absence of cerebellar and visual deficits. We speculate that the brainstem neural integrator might receive cortical feedback regarding world-fixed coordinates, and that the visual system might calibrate this process. Hence, a lesion of the anterior-inferior temporal lobe impairs the function of the neural integrator, and the lack of visual cues impairs it further, worsening the gaze-holding deficits. |
Stephanie A. H. Jones; Christopher D. Cowper-Smith; David A. Westwood Directional interactions between current and prior saccades Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 872, 2014. @article{Jones2014, One way to explore how prior sensory and motor events impact eye movements is to ask someone to look to targets located about a central point, returning gaze to the central point after each eye movement. Concerned about the contribution of this return to center movement, Anderson et al. (2008) used a sequential saccade paradigm in which participants made a continuous series of saccades to peripheral targets that appeared to the left or right of the currently fixated location in a random sequence (the next eye movement began from the last target location). Examining the effects of previous saccades (n-x) on current saccade latency (n), they found that saccadic reaction times (RT) were reduced when the direction of the current saccade matched that of a preceding saccade (e.g., two left saccades), even when the two saccades in question were separated by multiple saccades in any direction. We examined if this pattern extends to conditions in which targets appear inside continuously marked locations that provide stable visual features (i.e., target "placeholders") and when saccades are prompted by central arrows. Participants completed 3 conditions: peripheral targets (PT; continuous, sequential saccades to peripherally presented targets) without placeholders; PT with placeholders; and centrally presented arrows (CA; left or right pointing arrows at the currently fixated location instructing participants to saccade to the left or right). We found reduced saccadic RT when the immediately preceding saccade (n-1) was in the same (vs. opposite) direction in the PT without placeholders and CA conditions. This effect varied when considering the effect of the previous 2-5 (n-x) saccades on current saccade latency (n). The effects of previous eye movements on current saccade latency may be determined by multiple, time-varying mechanisms related to sensory (i.e., retinotopic location), motor (i.e., saccade direction), and environmental (i.e., persistent visual objects) factors. |
D. Jonikaitis; Artem V. Belopolsky Target-distractor competition in the oculomotor system is spatiotopic Journal Article In: Journal of Neuroscience, vol. 34, no. 19, pp. 6687–6691, 2014. @article{Jonikaitis2014, In natural scenes, multiple visual stimuli compete for selection; however, each saccade displaces the stimulus representations in retinotopically organized visual and oculomotor maps. In the present study, we used saccade curvature to investigate whether oculomotor competition across eye movements is represented in retinotopic or spatiotopic coordinates. Participants performed a sequence of saccades, and we induced oculomotor competition by briefly presenting a task-irrelevant distractor at different times during the saccade sequence. Despite the intervening saccade, the second saccade curved away from the spatial representation of a distractor that was presented before the first saccade. Furthermore, the degree of saccade curvature increased with the salience of the distractor presented before the first saccade. The results suggest that spatiotopic representations of target-distractor competition are crucial for successful interaction with objects of interest despite intervening eye movements. |
Timothy R. Jordan; Abubaker A. A. Almabruk; Eman A. Gadalla; Victoria A. McGowan; Sarah J. White; Lily Abedipour; Kevin B. Paterson Reading direction and the central perceptual span: Evidence from Arabic and English Journal Article In: Psychonomic Bulletin & Review, vol. 21, no. 2, pp. 505–511, 2014. @article{Jordan2014, In English and other alphabetic languages read from left to right, useful information acquired during each fixational pause is generally reported to extend much further to the right of each fixation than to the left. However, the asymmetry of the perceptual span for alphabetic languages read in the opposite direction (i.e., from right to left) has received very little attention in empirical research. Accordingly, we investigated the perceptual span for Arabic, which is one of the world's most widely read languages and is read from right to left, using a gaze-contingent window paradigm in which a region of text was displayed normally around each point of fixation, while text outside this region was obscured. Skilled Arabic readers who were bilingual in Arabic and English read Arabic and English sentences while a window of normal text extended symmetrically 0.5° to the left and right of fixation or asymmetrically, by increasing this window to 1.5° or 2.5° to either the left or the right. When English was read, performance across window conditions was superior when windows extended rightward. However, when Arabic was read, performance was superior when windows extended leftward and was essentially the reverse of that observed for English. These findings show for the first time that a leftward asymmetry in the central perceptual span occurs when Arabic is read and, for the first time in over 30 years, provide a new indication that the perceptual span for alphabetic languages is modified by the overall direction of reading. |
Timothy R. Jordan; Victoria A. McGowan; Kevin B. Paterson Reading with filtered fixations: Adult age differences in the effectiveness of low-level properties of text within central vision Journal Article In: Psychology and Aging, vol. 29, no. 2, pp. 229–235, 2014. @article{Jordan2014a, When reading, low-level visual properties of text are acquired from central vision during brief fixational pauses, but the effectiveness of these properties may differ in older age. To investigate, a filtering technique displayed the low, medium, or high spatial frequencies of text falling within central vision as young (18-28 years) and older (65+ years) adults read. Reading times for normal text did not differ across age groups, but striking differences in the effectiveness of spatial frequencies were observed. Consequently, even when young and older adults read equally well, the effectiveness of spatial frequencies in central vision differs markedly in older age. |
Holly S. S. L. Joseph; Elizabeth Wonnacott; Paul Forbes; Kate Nation Becoming a written word: Eye movements reveal order of acquisition effects following incidental exposure to new words during silent reading Journal Article In: Cognition, vol. 133, no. 1, pp. 238–248, 2014. @article{Joseph2014, We know that from mid-childhood onwards most new words are learned implicitly via reading; however, most word learning studies have taught novel items explicitly. We examined incidental word learning during reading by focusing on the well-documented finding that words which are acquired early in life are processed more quickly than those acquired later. Novel words were embedded in meaningful sentences and were presented to adult readers early (day 1) or later (day 2) during a five-day exposure phase. At test adults read the novel words in semantically neutral sentences. Participants' eye movements were monitored throughout exposure and test. Adults also completed a surprise memory test in which they had to match each novel word with its definition. Results showed a decrease in reading times for all novel words over exposure, and significantly longer total reading times at test for early than late novel words. Early-presented novel words were also remembered better in the offline test. Our results show that order of presentation influences processing time early in the course of acquiring a new word, consistent with partial and incremental growth in knowledge occurring as a function of an individual's experience with each word. |
Johanna K. Kaakinen; Jukka Hyönä Task relevance induces momentary changes in the functional visual field during reading Journal Article In: Psychological Science, vol. 25, no. 2, pp. 626–632, 2014. @article{Kaakinen2014, In the research reported here, we examined whether task demands can induce momentary tunnel vision during reading. More specifically, we examined whether the size of the functional visual field depends on task relevance. Forty participants read an expository text with a specific task in mind while their eye movements were recorded. A display-change paradigm with random-letter strings as preview masks was used to study the size of the functional visual field within sentences that contained task-relevant and task-irrelevant information. The results showed that orthographic parafoveal-on-foveal effects and preview benefits were observed for words within task-irrelevant but not task-relevant sentences. The results indicate that the size of the functional visual field is flexible and depends on the momentary processing demands of a reading task. The higher cognitive processing requirements experienced when reading task-relevant text rather than task-irrelevant text induce momentary tunnel vision, which narrows the functional visual field. |
Johanna K. Kaakinen; Henri Olkoniemi; Taina Kinnari; Jukka Hyönä Processing of written irony: An eye movement study Journal Article In: Discourse Processes, vol. 51, no. 4, pp. 287–311, 2014. @article{Kaakinen2014a, We examined processing of written irony by recording readers' eye movements while they read target phrases embedded either in ironic or non-ironic story context. After reading each story, participants responded to a text memory question and an inference question tapping into the understanding of the meaning of the target phrase. The results of Experiment 1 (N = 52) showed that readers were more likely to reread ironic than non-ironic target sentences during first-pass reading as well as during later look-backs. Experiment 2 (N = 60) examined individual differences related to working memory capacity (WMC), Sarcasm Self-Report Scale (SSS), and need for cognition (NFC) in the processing of irony. The results of Experiment 2 suggest that WMC, but not SSS or NFC, plays a role in how readers resolve the meaning of ironic utterances. High WMC was related to increased probability of initiating first-pass rereadings in ironic compared with literal sentences. The results of these two experiments suggest that the processing of (unconventional) irony does require extra processing effort and that the effects are localized in the ironic utterances. |
R. M. Kalwani; Siddhartha Joshi; Joshua I. Gold Phasic activation of individual neurons in the locus ceruleus/subceruleus complex of monkeys reflects rewarded decisions to go but not stop Journal Article In: Journal of Neuroscience, vol. 34, no. 41, pp. 13656–13669, 2014. @article{Kalwani2014, Neurons in the brainstem nucleus locus ceruleus (LC) often exhibit phasic activation in the context of simple sensory-motor tasks. The functional role of this activation, which leads to the release of norepinephrine throughout the brain, is not yet understood in part because the conditions under which it occurs remain in question. Early studies focused on the relationship of LC phasic activation to salient sensory events, whereas more recent work has emphasized its timing relative to goal-directed behavioral responses, possibly representing the end of a sensory-motor decision process. To better understand the relationship between LC phasic activation and sensory, motor, and decision processing, we recorded spiking activity of neurons in the LC+ (LC and the adjacent, norepinephrine-containing subceruleus nucleus) of monkeys performing a countermanding task. The task required the monkeys to occasionally withhold planned, saccadic eye movements to a visual target. We found that many well isolated LC+ units responded to both the onset of the visual cue instructing the monkey to initiate the saccade and again after saccade onset, even when it was initiated erroneously in the presence of a stop signal. Many of these neurons did not respond to saccades made outside of the task context. In contrast, neither the appearance of the stop signal nor the successful withholding of the saccade elicited an LC+ response. Therefore, LC+ phasic activation encodes sensory and motor events related to decisions to execute, but not withhold, movements, implying a functional role in goal-directed actions, but not necessarily more covert forms of processing. |
Marc R. Kamke; Alexander E. Ryan; Martin V. Sale; Megan E. J. Campbell; Stephan Riek; Timothy J. Carroll; Jason B. Mattingley Visual spatial attention has opposite effects on bidirectional plasticity in the human motor cortex Journal Article In: Journal of Neuroscience, vol. 34, no. 4, pp. 1475–1480, 2014. @article{Kamke2014, Long-term potentiation (LTP) and long-term depression (LTD) are key mechanisms of synaptic plasticity that are thought to act in concert to shape neural connections. Here we investigated the influence of visual spatial attention on LTP-like and LTD-like plasticity in the human motor cortex. Plasticity was induced using paired associative stimulation (PAS), which involves repeated pairing of peripheral nerve stimulation and transcranial magnetic stimulation to alter functional responses in the thumb area of the primary motor cortex. PAS-induced changes in cortical excitability were assessed using motor-evoked potentials. During plasticity induction, participants directed their attention to one of two visual stimulus streams located adjacent to each hand. When participants attended to visual stimuli located near the left thumb, which was targeted by PAS, LTP-like increases in excitability were significantly enhanced, and LTD-like decreases in excitability reduced, relative to when they attended instead to stimuli located near the right thumb. These differential effects on (bidirectional) LTP-like and LTD-like plasticity suggest that voluntary visual attention can exert an important influence on the functional organization of the motor cortex. Specifically, attention acts to both enhance the strengthening and suppress the weakening of neural connections representing events that fall within the focus of attention. |
Kei Kanari; Hirohiko Kaneko Standard deviation of luminance distribution affects lightness and pupillary response Journal Article In: Journal of the Optical Society of America A, vol. 31, no. 12, pp. 2795–2805, 2014. @article{Kanari2014, We examined whether the standard deviation (SD) of luminance distribution serves as information of illumination. We measured the lightness of a patch presented in the center of a scrambled-dot pattern while manipulating the SD of the luminance distribution. Results showed that lightness decreased as the SD of the surround stimulus increased. We also measured pupil diameter while viewing a similar stimulus. The pupil diameter decreased as the SD of luminance distribution of the stimuli increased. We confirmed that these results were not obtained because of the increase of the highest luminance in the stimulus. Furthermore, results of field measurements revealed a correlation between the SD of luminance distribution and illuminance in natural scenes. These results indicated that the visual system refers to the SD of the luminance distribution in the visual stimulus to estimate the scene illumination. |
Min Suk Kang; Geoffrey F. Woodman The neurophysiological index of visual working memory maintenance is not due to load dependent eye movements Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 63–72, 2014. @article{Kang2014, The Contralateral Delayed Activity (CDA) is a slow negative potential found during a variety of tasks, providing an important measure of the representation of information in visual working memory. However, it is studied using stimulus arrays in which the to-be-remembered objects are shown in the periphery of the left or the right visual field. Our goal was to determine whether fixational eye movements in the direction of the memoranda might underlie the CDA. We found that subjects' gaze was shifted toward the visual field of the memoranda during the retention interval, with its magnitude increasing with the set size. However, the CDA was clearly observed even when the subjects' gaze shifts were absent. In addition, the magnitude of the subjects' gaze shifts was unrelated to their visual working memory capacity measured with behavioral data, unlike the CDA. Finally, the onset latency of the set size dependent eye movements followed the onset of the set size dependent CDA. Thus, our findings clearly show that the CDA does not represent a simple inability to maintain fixation during visual working memory maintenance, but that this neural index of representation in working memory appears to induce eye movements toward the locations of the objects being remembered. |
André Krügel; Ralf Engbert A model of saccadic landing positions in reading under the influence of sensory noise Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 334–353, 2014. @article{Kruegel2014, During reading, saccadic eye movements are produced to move the high-acuity foveal region of the eye to words of interest for efficient word processing. Distributions of saccadic landing positions peak close to a word's centre but are relatively broad compared to simple oculomotor tasks. Moreover, landing-position distributions are modulated both by the distance of the launch site and by saccade type (e.g., one-step saccade, word skipping, refixation). Here we present a mathematical model for the computation of a saccade intended for a given target word. Two fundamental assumptions relate to (1) the sensory computation of the word centre from inter-word spaces and (2) the integration of sensory information and a priori knowledge using Bayesian estimation. Our model was developed for data from a large corpus of eye movements from normal reading. We demonstrate that the model is able to account simultaneously for a systematic shift of saccadic mean landing position with increasing launch-site distance and for qualitative differences between one-step saccades (i.e., from a given word to the next word) and word-skipping saccades. |
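As a rough illustration of the Bayesian estimation step described in this abstract, the sketch below combines a Gaussian prior over saccade amplitudes with a noisy sensory estimate of the target word's centre and takes the posterior mean as the intended landing position; because sensory noise is assumed to grow with launch-site distance, mean landing positions shift systematically with launch site. The prior, noise parameters, and units are illustrative assumptions, not the authors' fitted model.

```python
def planned_landing(target_center, launch_site,
                    prior_mean_amp=7.0, prior_sd=3.0, noise_per_letter=0.15):
    """Posterior-mean landing position (all quantities in letter positions).

    A Gaussian prior over saccade amplitude is combined with a Gaussian sensory
    estimate of the required amplitude; sensory noise grows with eccentricity."""
    amplitude = target_center - launch_site               # required amplitude
    sensory_sd = noise_per_letter * abs(amplitude) + 0.5  # noisier when farther away
    w_prior = 1.0 / prior_sd**2                           # precision weights
    w_sens = 1.0 / sensory_sd**2
    posterior_amp = (w_prior * prior_mean_amp + w_sens * amplitude) / (w_prior + w_sens)
    return launch_site + posterior_amp

# Landing positions are pulled toward the prior amplitude, so the predicted
# landing position shifts toward the launch site as launch distance increases
# (slight overshoot of near targets, undershoot of far ones).
for launch in (-2.0, -5.0, -9.0):   # letters to the left of the word centre at 0
    print(f"launch site {launch:+.0f}: predicted landing {planned_landing(0.0, launch):+.2f}")
```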