All EyeLink Eye Tracker Publications
Listed below by year are all 14,000 peer-reviewed EyeLink research publications up until 2025 (including some early 2026 publications). You can search the publication library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2013
Wael F. Asaad; Navaneethan Santhanam; Steven McClellan; David J. Freedman High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB Journal Article In: Journal of Neurophysiology, vol. 109, no. 1, pp. 249–260, 2013. Abstract: Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting.
Sven Mucke; Niall C. Strang; Senay Aydin; Edward A. H. Mallen; Dirk Seidel; Velitchko Manahilov Spatial frequency selectivity of visual suppression during convergence eye movements Journal Article In: Vision Research, vol. 89, pp. 96–101, 2013. Abstract: Visual suppression of low-spatial frequency information during eye movements is believed to contribute to a stable perception of our visual environment. While visual perception has been studied extensively during saccades, vergence has been somewhat neglected. Here, we show that convergence eye movements reduce contrast sensitivity to low spatial frequency information around the onset of the eye movements, but do not affect sensitivity to higher spatial frequencies. This suggests that visual suppression elicited by convergence eye movements may have the same temporal and spatial characteristics as saccadic suppression.
Ian C. Fiebelkorn; Adam C. Snyder; Manuel R. Mercier; John S. Butler; S. Molholm; John J. Foxe Cortical cross-frequency coupling predicts perceptual outcomes Journal Article In: NeuroImage, vol. 69, pp. 126–137, 2013. Abstract: Functional networks are comprised of neuronal ensembles bound through synchronization across multiple intrinsic oscillatory frequencies. Various coupled interactions between brain oscillators have been described (e.g., phase-amplitude coupling), but with little evidence that these interactions actually influence perceptual sensitivity. Here, electroencephalographic (EEG) recordings were made during a sustained-attention task to demonstrate that cross-frequency coupling has significant consequences for perceptual outcomes (i.e., whether participants detect a near-threshold visual target). The data reveal that phase-detection relationships at higher frequencies are dependent on the phase of lower frequencies, such that higher frequencies alternate between periods when their phase is either strongly or weakly predictive of visual-target detection. Moreover, the specific higher frequencies and scalp topographies linked to visual-target detection also alternate as a function of lower-frequency phase. Cross-frequency coupling between lower (i.e., delta and theta) and higher frequencies (e.g., low- and high-beta) thus results in dramatic fluctuations of visual-target detection.
Alex L. White; Martin Rolfs; Marisa Carrasco Adaptive deployment of spatial and feature-based attention before saccades Journal Article In: Vision Research, vol. 85, pp. 26–35, 2013. Abstract: What you see depends not only on where you are looking but also on where you will look next. The pre-saccadic attention shift is an automatic enhancement of visual sensitivity at the target of the next saccade. We investigated whether and how perceptual factors independent of the oculomotor plan modulate pre-saccadic attention within and across trials. Observers made saccades to one (the target) of six patches of moving dots and discriminated a brief luminance pulse (the probe) that appeared at an unpredictable location. Sensitivity to the probe was always higher at the target's location (spatial attention), and this attention effect was stronger if the previous probe appeared at the previous target's location. Furthermore, sensitivity was higher for probes moving in directions similar to the target's direction (feature-based attention), but only when the previous probe moved in the same direction as the previous target. Therefore, implicit cognitive processes permeate pre-saccadic attention, so that, contingent on recent experience, it flexibly distributes resources to potentially relevant locations and features.
Dana L. Chesney; Nicole M. McNeil; James R. Brockmole; Ken Kelley An eye for relations: Eye-tracking indicates long-term negative effects of operational thinking on understanding of math equivalence Journal Article In: Memory & Cognition, vol. 41, no. 7, pp. 1079–1095, 2013. Abstract: Prior knowledge in the domain of mathematics can sometimes interfere with learning and performance in that domain. One of the best examples of this phenomenon is in students' difficulties solving equations with operations on both sides of the equal sign. Elementary school children in the U.S. typically acquire incorrect, operational schemata rather than correct, relational schemata for interpreting equations. Researchers have argued that these operational schemata are never unlearned and can continue to affect performance for years to come, even after relational schemata are learned. In the present study, we investigated whether and how operational schemata negatively affect undergraduates' performance on equations. We monitored the eye movements of 64 undergraduate students while they solved a set of equations that are typically used to assess children's adherence to operational schemata (e.g., 3 + 4 + 5 = 3 + __). Participants did not perform at ceiling on these equations, particularly when under time pressure. Converging evidence from performance and eye movements showed that operational schemata are sometimes activated instead of relational schemata. Eye movement patterns reflective of the activation of relational schemata were specifically lacking when participants solved equations by adding up all the numbers or adding the numbers before the equal sign, but not when they used other types of incorrect strategies. These findings demonstrate that the negative effects of acquiring operational schemata extend far beyond elementary school.
Sanjay G. Manohar; Masud Husain Attention as foraging for information and value Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 711, 2013. Abstract: What is the purpose of attention? One avenue of research has led to the proposal that attention might be crucial for gathering information about the environment, while other lines of study have demonstrated how attention may play a role in guiding behavior to rewarded options. Many experiments that study attention require participants to make a decision based on information acquired discretely at one point in time. In real-world situations, however, we are usually not presented with information about which option to select in such a manner. Rather we must initially search for information, weighing up reward values of options before we commit to a decision. Here, we propose that attention plays a role in both foraging for information and foraging for value. When foraging for information, attention is guided toward the unknown. When foraging for reward, attention is guided toward high reward values, allowing decision-making to proceed by accept-or-reject decisions on the currently attended option. According to this account, attention can be regarded as a low-cost alternative to moving around and physically interacting with the environment ("teleforaging") before a decision is made to interact physically with the world. To track the time course of attention, we asked participants to seek out and acquire information about two gambles by directing their gaze, before choosing one of them. Participants often made multiple refixations on items before making a decision. Their eye movements revealed that early in the trial, attention was guided toward information, i.e., toward locations that reduced uncertainty about value. In contrast, late in the trial, attention was guided by expected value of the options. At the end of the decision period, participants were generally attending to the item they eventually chose. We suggest that attentional foraging shifts from an uncertainty-driven to a reward-driven mode during the evolution of a decision, permitting decisions to be made by an engage-or-search strategy.
Stefan Van der Stigchel; Tanja C. W. Nijboer How global is the global effect? The spatial characteristics of saccade averaging Journal Article In: Vision Research, vol. 84, pp. 6–15, 2013. Abstract: When a target and a distractor are presented in close proximity, an eye movement will generally land in between these two elements. This is known as the 'global effect' and has been claimed to be a reflection of the averaged saccade programs towards both locations. The aim of the present study was to systematically investigate whether there is only a limited area in the saccade map in which saccade averaging occurs. To this end, we examined various distances between target and distractor in two experiments and investigated whether the majority of eye movements landed in between the target and the distractor. Results indicated that the endpoint distribution was unimodal for distances up to 35° (in polar coordinates), with saccades generally landing in between the target and the distractor. When the distance was higher than 45°, the saccade endpoint distribution was predominantly bimodal, with saccades landing either on the target or on the distractor. The decrease in saccade averaging was linear until almost no averaging saccades were observed for the longest distances. As saccades landing in between target and distractor reflect a weak, or absent, top-down signal, the present study indicated that top-down information is unable to strongly influence the oculomotor system when target and distractor are presented in close proximity. In this situation, the resulting eye movement is determined by the weighted average of saccade vectors present in a restricted region in the motor map.
Jesse A. Harris; Charles Clifton; Lyn Frazier Processing and domain selection: Quantificational variability effects Journal Article In: Language and Cognitive Processes, vol. 28, no. 10, pp. 1519–1544, 2013. Abstract: Three studies investigated how readers interpret sentences with variable quantificational domains, for example, The army was mostly in the capital, where mostly may quantify over individuals or parts (Most of the army was in the capital) or over times (The army was in the capital most of the time). It is proposed that a general conceptual economy principle, No Extra Times, discourages the postulation of potentially unnecessary times, and thus favours the interpretation quantifying over parts. Disambiguating an ambiguously quantified sentence to a quantification over times interpretation was rated as less natural than disambiguating it to a quantification over parts interpretation (Experiment 1). In an interpretation questionnaire, sentences with similar quantificational variability were constructed so that both interpretations of the sentence would require postulating multiple times; this resulted in the elimination of the preference for a quantification over parts interpretation, suggesting the parts preference observed in Experiment 1 is not reducible to a lexical bias of the adverb mostly (Experiment 2). An eye movement recording study showed that, in the absence of prior evidence for multiple times, readers exhibit greater difficulty when reading material that forces a quantification over times interpretation than when reading material that allows a quantification over parts interpretation (Experiment 3). These experiments contribute to understanding readers' default assumptions about the temporal properties of sentences, which is essential for understanding the selection of a domain for adverbial quantifiers and, more generally, for understanding how situational constraints influence sentence processing.
Melaina T. Vinski; Scott Watter Being a grump only makes things worse: A transactional account of acute stress on mind wandering Journal Article In: Frontiers in Psychology, vol. 4, pp. 730, 2013. Abstract: The current work investigates the influence of acute stress on mind wandering. Participants completed the Positive and Negative Affect Schedule as a measure of baseline negative mood, and were randomly assigned to either the high-stress or low-stress version of the Trier Social Stress Test. Participants then completed the Sustained Attention to Response Task as a measure of mind-wandering behavior. In Experiment 1, participants reporting a high degree of negative mood that were exposed to the high-stress condition showed more variable response times, made more errors, and were more likely to report thinking about the stressor than participants reporting a low level of negative mood. These effects diminished throughout task performance, suggesting that acute stress induces a temporary mind-wandering state in participants with a negative mood. The temporary affect-dependent deficits observed in Experiment 1 were replicated in Experiment 2, with the high negative mood participants demonstrating limited resource availability (indicated by pupil diameter) immediately following stress induction. These experiments provide novel evidence to suggest that acute psychosocial stress briefly suppresses the availability of cognitive resources and promotes an internally oriented focus of attention in participants with a negative mood.
Arielle Borovsky; Erin Burns; Jeffrey L. Elman; Julia L. Evans Lexical activation during sentence comprehension in adolescents with history of specific language impairment Journal Article In: Journal of Communication Disorders, vol. 46, no. 5-6, pp. 413–427, 2013. Abstract: One remarkable characteristic of speech comprehension in typically developing (TD) children and adults is the speed with which the listener can integrate information across multiple lexical items to anticipate upcoming referents. Although children with Specific Language Impairment (SLI) show lexical deficits (Sheng & McGregor, 2010) and slower speed of processing (Leonard et al., 2007), relatively little is known about how these deficits manifest in real-time sentence comprehension. In this study, we examine lexical activation in the comprehension of simple transitive sentences in adolescents with a history of SLI and age-matched, TD peers. Participants listened to sentences that consisted of the form, Article-Agent-Action-Article-Theme, (e.g., The pirate chases the ship) while viewing pictures of four objects that varied in their relationship to the Agent and Action of the sentence (e.g., Target, Agent-Related, Action-Related, and Unrelated). Adolescents with SLI were as fast as their TD peers to fixate on the sentence's final item (the Target) but differed in their post-action onset visual fixations to the Action-Related item. Additional exploratory analyses of the spatial distribution of their visual fixations revealed that the SLI group had a qualitatively different pattern of fixations to object images than did the control group. The findings indicate that adolescents with SLI integrate lexical information across words to anticipate likely or expected meanings with the same relative fluency and speed as do their TD peers. However, the failure of the SLI group to show increased fixations to Action-Related items after the onset of the action suggests lexical integration deficits that result in failure to consider alternate sentence interpretations. Learning outcomes: As a result of this paper, the reader will be able to describe several benefits of using eye-tracking methods to study populations with language disorders. They should also recognize several potential explanations for lexical deficits in SLI, including possible reduced speed of processing, and degraded lexical representations. Finally, they should recall the main outcomes of this study, including that adolescents with SLI show different timing and location of eye-fixations while interpreting sentences than their age-matched peers.
Heather Sheridan; Keith Rayner; Eyal M. Reingold Unsegmented text delays word identification: Evidence from a survival analysis of fixation durations Journal Article In: Visual Cognition, vol. 21, no. 1, pp. 38–60, 2013. Abstract: The present study employed distributional analyses of fixation times to examine the impact of removing spaces between words during reading. Specifically, we presented high and low frequency target words in a normal text condition that contained spaces (e.g., "John decided to sell the table in the garage sale") and in an unsegmented text condition that contained random numbers instead of spaces (e.g., "John4decided8to5sell9the7table2in3the9garage6sale"). The unsegmented text condition produced larger word frequency effects relative to the normal text condition for the gaze duration and total time measures (for similar findings, see Rayner, Fischer, & Pollatsek, 1998), which indicates that removing spaces can impact the word identification stage of reading. To further examine the effect of spacing on word identification, we used distributional analyses of first-fixation durations to contrast the time course of word frequency effects in the normal versus the unsegmented text conditions. In replication of prior findings (Reingold, Reichle, Glaholt, & Sheridan, 2012; Staub, White, Drieghe, Hollway, & Rayner, 2010), ex-Gaussian fitting revealed that the word frequency variable impacted both the shift and the skew of the distributions, and this pattern of results occurred for both the normal and unsegmented text conditions. In addition, a survival analysis technique revealed a later time course of word frequency effects in the unsegmented relative to the normal condition, such that the earliest discernible influence of word frequency was 112 ms from the start of fixation in the normal text condition, and 152 ms in the unsegmented text condition. This delay in the temporal onset of word frequency effects in the unsegmented text condition strongly suggests that removing spaces delays the word identification stage of reading. Possible underlying mechanisms are discussed, including lateral masking and word segmentation.
Steven G. Luke; Kiel Christianson The influence of frequency across the time course of morphological processing: Evidence from the transposed-letter effect Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 7, pp. 781–799, 2013. Abstract: The role that morphology plays in lexical access has been the subject of much debate, as has the influence of word frequency on morphological processing. The effect of frequency on morphological processing across the time course of lexical access was investigated using the transposed-letter effect. The results of two experiments (one masked-priming experiment and one eye-tracking experiment) outline a process in which morphological structure can be detected quickly and independently of frequency. The present study is also the first to show that transpositions that cross morpheme boundaries can be as disruptive as letter substitutions in inflected words, replicating earlier results with derived and compound words.
Jane Ashby; Heather Dix; Morgan Bontrager Phonemic awareness contributes to text reading fluency: Evidence from eye movements Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 157–170, 2013. Abstract: Although phonemic awareness is a known predictor of early decoding and word recognition, less is known about relationships between phonemic awareness and text reading fluency. This longitudinal study is the first to investigate this relationship by measuring eye movements during picture matching tasks and during silent sentence reading. Time spent looking at the correct target during phonemic awareness and receptive spelling tasks gauged the efficiency of phonological and orthographic processes. Children's eye movements during sentence reading provided a direct measure of silent reading fluency for comprehended text. Results indicate that children who processed the phonemic awareness targets more slowly in Grade 2 tended to be slower readers in Grade 3. Processing difficulty during a receptive spelling task was related to reading fluency within Grade 2. Findings suggest that inefficient phonemic processing contributes to poor silent reading fluency after second grade.
Aline Godfroid; Frank Boers; Alex Housen An eye for words: Gauging the role of attention in incidental L2 vocabulary acquisition by means of eye-tracking Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 3, pp. 483–517, 2013. Abstract: This eye-tracking study tests the hypothesis that more attention leads to more learning, following claims that attention to new language elements in the input results in their initial representation in long-term memory (i.e., intake; Robinson, 2003; Schmidt, 1990, 2001). Twenty-eight advanced learners of English read English texts that contained 12 targets for incidental word learning. The target was a known word (control condition), a matched pseudoword, or that pseudoword preceded or followed by the known word (with the latter being a cue to the pseudoword's meaning). Participants' eye-fixation durations on the targets during reading served as a measure of the amount of attention paid (see Rayner, 2009). Results indicate that participants spent more time processing the unknown pseudowords than their matched controls. The longer participants looked at a pseudoword during reading, the more likely they were to recognize that word in an unannounced vocabulary posttest. Finally, the known, appositive cues were fixated longer when they followed the pseudowords than when they preceded them; however, their presence did not lead to higher retention of the pseudowords. We discuss how eye-tracking may add to existing methodologies for studying attention and noticing (Schmidt, 1990) in SLA.
Clare M. Press; James M. Kilner The time course of eye movements during action observation reflects sequence learning Journal Article In: NeuroReport, vol. 24, no. 14, pp. 822–826, 2013. Abstract: When we observe object-directed actions such as grasping, we make predictive eye movements. However, eye movements are reactive when observing similar actions without objects. This reactivity may reflect a lack of attribution of intention to observed actors when they perform actions without 'goals'. Alternatively, it may simply signal that there is no cue present that has been predictive of the subsequent trajectory in the observer's experience. To test this hypothesis, the present study investigated how the time course of eye movements changes as a function of visual experience of predictable, but arbitrary, actions without objects. Participants observed a point-light display of a model performing sequential finger actions in a serial reaction time task. Eye movements became less reactive across blocks. In addition, participants who exhibited more predictive eye movements subsequently demonstrated greater learning when required either to execute, or to recognize, the sequence. No measures were influenced by whether participants had been instructed that the observed movements were human or lever generated. The present data indicate that eye movements when observing actions without objects reflect the extent to which the trajectory can be predicted through experience. The findings are discussed with reference to the implications for the mechanisms supporting perception of actions both with and without objects as well as those mediating inanimate object processing.
Stefan Van der Stigchel; Tanja C. W. Nijboer; Janet H. Bultitude; Robert D. Rafal Delayed oculomotor inhibition in patients with lesions to the human frontal oculomotor cortex: Evidence from a study on saccade averaging Journal Article In: Brain and Cognition, vol. 82, no. 2, pp. 192–200, 2013. Abstract: The frontal oculomotor cortex is known to play an important role in oculomotor selection. The aim of the current study was to examine whether previously observed findings concerning the role of the frontal oculomotor cortex in the speed of saccade initiation and oculomotor inhibition might be related to a common underlying role of these areas in oculomotor selection. To this end, six patients with lesions to the frontal oculomotor cortex performed a double stimulus paradigm in which two elements were presented simultaneously in close proximity. Patients performed a block in which no specific task instruction was given and a block in which an instruction was provided about which of the two elements was the target. The rationale behind this manipulation was that the introduction of a specific task instruction would require a stronger involvement of top-down factors. In contrast to the block without a specific task instruction, saccade latencies to the contralesional visual field were longer than those to the ipsilesional visual field when a task instruction was given. This effect was strongest for saccades that landed away from the target and the distractor, reflecting trials in which strong oculomotor inhibition was applied. The observed deficits can be explained in terms of a slowing of the inhibitory signals associated with the rejection of a distractor. Given the known role of the Frontal Eye Fields and the location of the lesions, we attribute these findings to the Frontal Eye Fields, revealing their important role in the voluntary control of eye movements.
Steven G. Luke; John M. Henderson Oculomotor and cognitive control of eye movements in reading: Evidence from mindless reading Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 6, pp. 1230–1242, 2013. Abstract: In the present study, we investigated the influence of cognitive factors on eye-movement behaviors in reading. Participants performed two tasks: a normal-reading task, as well as a mindless-reading task in which letters were replaced with unreadable block shapes. This mindless-reading task served as an oculomotor control condition, simulating the visual aspects of reading but removing higher-level linguistic processing. Fixation durations, word skipping, and some regressions were influenced by cognitive factors, whereas eye movements within words appeared to be less open to cognitive control. Implications for models of eye-movement control in reading are discussed.
Stefan Van der Stigchel; Robert D. Rafal; Janet H. Bultitude Temporal dynamics of error correction in a double step task in patients with a lesion to the lateral intra-parietal cortex Journal Article In: Neuropsychologia, vol. 51, no. 14, pp. 2988–2994, 2013. Abstract: Five patients with lesions involving intra-parietal cortex (IPCx) were tested in a rapid version of the double step paradigm to investigate the role of the IPCx in the rapid, online, updating of a saccade program. Saccades were executed to a single target in either the contra- or the ipsilesional visual field. In two thirds of the trials, a step change in target position required that the saccade shifted to a new location within the same field but in the contra- or the ipsilesional direction, allowing us to investigate whether patients are able to update their saccade program given new exogenous information about the required endpoint of the saccade. This set-up resulted in three types of initial saccades: saccades to the target on no-step trials, uncorrected saccades to the original target location on step trials and corrected saccades to the new target location on step trials. Furthermore, if the updating of the original eye movement program failed, patients performed a second saccade to the new target location that required a rapid error correction. The analysis of the double-step task on a group level indicated that latencies for all trial types were longer when saccades were directed to the contralesional versus the ipsilesional field. Furthermore, longer latencies were required for patients to initiate a corrective second saccade after making an uncorrected first saccade in their contralesional compared to ipsilesional field. There were no differences in the ultimate landing positions of the eye movements for such corrected saccades. These results reveal that deficits in updating of saccade programs only seem to be present if the updating must occur after the gaze has shifted to a new location, pointing to a role of intra-parietal cortex in the processes involved in updating information when the current reference frame has to be updated. In conclusion, the paradigm deployed in the current study allows for a refinement of the role of the intra-parietal cortex in the updating of saccade programs.
Roberta Daini; Andrea Albonico; Manuela Malaspina; Marialuisa Martelli; Silvia Primativo; Lisa S. Arduino Dissociation in optokinetic stimulation sensitivity between omission and substitution reading errors in neglect dyslexia Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 581, 2013. Abstract: Although omission and substitution errors in neglect dyslexia (ND) patients have always been considered as different manifestations of the same acquired reading disorder, recently, we proposed a new dual mechanism model. While omissions are related to the exploratory disorder which characterizes unilateral spatial neglect (USN), substitutions are due to a perceptual integration mechanism. A consequence of this hypothesis is that specific training for omission-type ND patients would aim at restoring the oculo-motor scanning and should not improve reading in substitution-type ND. With this aim we administered an optokinetic stimulation (OKS) to two brain-damaged patients with both USN and ND, MA and EP, who showed ND mainly characterized by omissions and substitutions, respectively. MA also showed an impairment in oculo-motor behavior with a non-reading task, while EP did not. The two patients presented a dissociation with respect to their sensitivity to OKS, so that, as expected, MA was positively affected, while EP was not. Our results confirm a dissociation between the two mechanisms underlying omission and substitution reading errors in ND patients. Moreover, they suggest that such a dissociation could possibly be extended to the effectiveness of rehabilitative procedures, and that patients who mainly omit contralesional-sided letters would benefit from OKS.
Evgenia Kanonidou; Irene Gottlob; Frank A. Proudlock The effect of font size on reading performance in strabismic amblyopia: An eye movement investigation Journal Article In: Investigative Ophthalmology & Visual Science, vol. 55, no. 1, pp. 451–459, 2013. Abstract: PURPOSE: We investigated the effect of font size on reading speed and ocular motor performance in strabismic amblyopes during text reading under monocular and binocular viewing conditions. METHODS: Eye movements were recorded at 250 Hz using a head-mounted infrared video eye tracker in 15 strabismic amblyopes and 18 age-matched controls while silently reading paragraphs of text at font sizes equivalent to 1.0 to 0.2 logMAR acuity. Reading under monocular viewing with amblyopic eye/nondominant eye and nonamblyopic/dominant eye was compared to binocular viewing. Mean reading speed; number, amplitude, and direction of saccades; and fixation duration were calculated for each font size and viewing condition. RESULTS: Reading speed was significantly slower in amblyopes compared to controls for all font sizes during monocular reading with the amblyopic eye (P = 0.004), but only for smaller font sizes for reading with the nonamblyopic eye (P = 0.045) and binocularly (P = 0.038). The most significant ocular motor change was that strabismic amblyopes made more saccades per line than controls irrespective of font size and viewing conditions (P < 0.05 for all). There was no significant difference in saccadic amplitudes, and fixation duration was only significantly longer in strabismic amblyopes when reading smaller fonts with the amblyopic eye viewing. CONCLUSIONS: Ocular motor deficits exist in strabismic amblyopes during reading even when reading speeds are normal and when visual acuity is not a limiting factor; that is, when reading larger font sizes with nonamblyopic eye viewing and binocular viewing. This suggests that these abnormalities are not related to crowding.
Heather Sheridan; Eyal M. Reingold The mechanisms and boundary conditions of the Einstellung Effect in chess: Evidence from eye movements Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e75796, 2013. @article{sr13,In a wide range of problem-solving settings, the presence of a familiar solution can block the discovery of better solutions (i.e., the Einstellung effect). To investigate this effect, we monitored the eye movements of expert and novice chess players while they solved chess problems that contained a familiar move (i.e., the Einstellung move), as well as an optimal move that was located in a different region of the board. When the Einstellung move was an advantageous (but suboptimal) move, both the expert and novice chess players who chose the Einstellung move continued to look at this move throughout the trial, whereas the subset of expert players who chose the optimal move were able to gradually disengage their attention from the Einstellung move. However, when the Einstellung move was a blunder, all of the experts and the majority of the novices were able to avoid selecting the Einstellung move, and both the experts and novices gradually disengaged their attention from the Einstellung move. These findings shed light on the boundary conditions of the Einstellung effect, and provide convergent evidence for Bilalić, McLeod, & Gobet (2008)'s conclusion that the Einstellung effect operates by biasing attention towards problem features that are associated with the familiar solution rather than the optimal solution. |
Yan Jing Wu; Filipe Cristino; Charles Leek; Guillaume Thierry Non-selective lexical access in bilinguals is spontaneous and independent of input monitoring: Evidence from eye tracking Journal Article In: Cognition, vol. 129, no. 2, pp. 418–425, 2013. @article{Wu2013b,Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing. |
Feng Du; Yue Qi; Xingshan Li; Kan Zhang Dual processes of oculomotor capture by abrupt onset: Rapid involuntary capture and sluggish voluntary prioritization Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e80678, 2013. @article{Du2013,The present study showed that there are two distinctive processes underlying oculomotor capture by abrupt onset. When a visual mask between the cue and the target eliminates the unique luminance transient of an onset, the onset still attracts attention in a top-down fashion. This memory-based prioritization of onset is voluntarily controlled by the knowledge of target location. But when there is no visual mask between the cue and the target, the onset captures attention mainly in a bottom-up manner. This transient-driven capture of onset is involuntary because it occurs even when the onset is completely irrelevant to the target location. In addition, the present study demonstrated distinctive temporal characteristics for these two processes. The involuntary capture driven by luminance transients is rapid and brief, whereas the memory- based voluntary prioritization of onset is more sluggish and long-lived. |
Aline Godfroid; Maren S. Uggen Attention to irregular verbs by beginning learners of German: An eye-movement study Journal Article In: Studies in Second Language Acquisition, vol. 35, no. 2, pp. 291–322, 2013. @article{Godfroid2013a,This study focuses on beginning second language learners' attention to irregular verb morphology, an area of grammar that many adults find difficult to acquire (e.g., DeKeyser, 2005; Larsen-Freeman, 2010). We measured beginning learners' eye movements during sentence processing to investigate whether or not they actually attend to irregular verb features and, if so, whether the amount of attention that they pay to these features predicts their acquisition. On the assumption that attention facilitates learning (e.g., Gass, 1997; Robinson, 2003; Schmidt, 2001), we expected more attention (i.e., longer fixations or more frequent comparisons between verb forms) to lead to more learning of the irregular verbs. Forty beginning learners of German read 12 German sentence pairs with stem-changing verbs and 12 German sentence pairs with regular verbs while an EyeLink 1000 recorded their eye movements. The stem-changing verbs consisted of six a → ä changing verbs and six e → i(e) changing verbs. Each verb appeared in a baseline sentence in the first-person singular, which has no stem change, and a critical sentence in the second- or third-person singular, which have a stem change for the irregular but not the regular verbs, on the same screen. Productive pre- and posttests measured the effects of exposure on learning. Results indicate that learners looked longer overall at stem-changing verbs than regular verbs, revealing a late effect of verb irregularity on reading times. Longer total times had a modest, favorable effect on the subsequent production of the stem vowel. Finally, the production of only the a → ä verbs—not the e → i(e) verbs—benefited from direct visual comparisons during reading, possibly because of the umlaut in the former. We interpret the results with reference to recent theory and research on attention, noticing, and language learning and provide a more nuanced and empirically based understanding of the noticing construct. |
Johann S. C. Kim; Gerhard Vossel; Matthias Gamer Effects of emotional context on memory for details: The role of attention Journal Article In: PLoS ONE, vol. 8, no. 10, pp. e77405, 2013. @article{Kim2013,It was repeatedly demonstrated that a negative emotional context enhances memory for central details while impairing memory for peripheral information. This trade-off effect is assumed to result from attentional processes: a negative context seems to narrow attention to central information at the expense of more peripheral details, thus causing the differential effects in memory. However, this explanation has rarely been tested and previous findings were partly inconclusive. For the present experiment 13 negative and 13 neutral naturalistic, thematically driven picture stories were constructed to test the trade-off effect in an ecologically more valid setting as compared to previous studies. During an incidental encoding phase, eye movements were recorded as an index of overt attention. In a subsequent recognition phase, memory for central and peripheral details occurring in the picture stories was tested. Explicit affective ratings and autonomic responses validated the induction of emotion during encoding. Consistent with the emotional trade-off effect on memory, encoding context differentially affected recognition of central and peripheral details. However, contrary to the common assumption, the emotional trade-off effect on memory was not mediated by attentional processes. By contrast, results suggest that the relevance of attentional processing for later recognition memory depends on the centrality of information and the emotional context but not their interaction. Thus, central information was remembered well even when fixated very briefly whereas memory for peripheral information depended more on overt attention at encoding. Moreover, the influence of overt attention on memory for central and peripheral details seems to be much lower for an arousing as compared to a neutral context. |
Heather Sheridan; Eyal M. Reingold A further examination of the lexical-processing stages hypothesized by the E-Z Reader model Journal Article In: Attention, Perception, & Psychophysics, vol. 75, no. 3, pp. 407–414, 2013. @article{sr13b,Participants' eye movements were monitored while they read sentences in which high- and low-frequency target words were presented normally (i.e., the normal condition) or with either reduced stimulus quality (i.e., the faint condition) or alternating lower- and uppercase letters (i.e., the case-alternated condition). Both the stimulus quality and case alternation manipulations interacted with word frequency for the gaze duration measure, such that the magnitude of word frequency effects was increased relative to the normal condition. However, stimulus quality (but not case alternation) interacted with word frequency for the early fixation time measures (i.e., first fixation, single fixation), whereas case alternation (but not stimulus quality) interacted with word frequency for the later fixation time measures (i.e., total time, go-past time). We interpret this pattern of results as evidence that stimulus quality influences an earlier stage of lexical processing than does case alternation, and we discuss the implications of our results for models of eye movement control during reading. |
Janet H. Bultitude; Stefan Van der Stigchel; Tanja C. W. Nijboer Prism adaptation alters spatial remapping in healthy individuals: Evidence from double-step saccades Journal Article In: Cortex, vol. 49, no. 3, pp. 759–770, 2013. @article{Bultitude2013,The visual system is able to represent and integrate large amounts of information as we move our gaze across a scene. This process, called spatial remapping, enables the construction of a stable representation of our visual environment despite constantly changing retinal images. Converging evidence implicates the parietal lobes in this process, with the right hemisphere having a dominant role. Indeed, lesions to the right parietal lobe (e.g., leading to hemispatial neglect) frequently result in deficits in spatial remapping. Research has demonstrated that recalibrating visual, proprioceptive and motor reference frames using prism adaptation ameliorates neglect symptoms and induces neglect-like performance in healthy people - one example of the capacity for rapid neural plasticity in response to new sensory demands. Because of the influence of prism adaptation on parietal functions, the present research investigates whether prism adaptation alters spatial remapping in healthy individuals. To this end, twenty-eight undergraduates completed blocks of a double-step saccade (DSS) task after sham adaptation and adaptation to leftward- or rightward-shifting prisms. The results were consistent with an impairment in spatial remapping for left visual field targets following adaptation to leftward-shifting prisms. These results suggest that temporarily realigning spatial representations using sensory-motor adaptation alters right-hemisphere remapping processes in healthy individuals. The implications for the possible mechanisms of the amelioration of hemispatial neglect after prism adaptation are discussed. |
James P. Herman; C. Phillip Cloud; Josh Wallman End-point variability is not noise in saccade adaptation Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e59731, 2013. @article{Herman2013,When each of many saccades is made to overshoot its target, amplitude gradually decreases in a form of motor learning called saccade adaptation. Overshoot is induced experimentally by a secondary, backwards intrasaccadic target step (ISS) triggered by the primary saccade. Surprisingly, however, no study has compared the effectiveness of different sizes of ISS in driving adaptation by systematically varying ISS amplitude across different sessions. Additionally, very few studies have examined the feasibility of adaptation with relatively small ISSs. In order to best understand saccade adaptation at a fundamental level, we addressed these two points in an experiment using a range of small, fixed ISS values (from 0° to 1° after a 10° primary target step). We found that significant adaptation occurred across subjects with an ISS as small as 0.25°. Interestingly, though only adaptation in response to 0.25° ISSs appeared to be complete (the magnitude of change in saccade amplitude was comparable to the size of the ISS), further analysis revealed that a comparable proportion of the ISS was compensated for across conditions. Finally, we found that ISS size alone was sufficient to explain the magnitude of adaptation we observed; additional factors did not significantly improve explanatory power. Overall, our findings suggest that current assumptions regarding the computation of saccadic error may need to be revisited. |
Shuichiro Taya; David Windridge; Magda Osman Trained eyes: Experience promotes adaptive gaze control in dynamic and uncertain visual environments Journal Article In: PLoS ONE, vol. 8, no. 8, pp. e71371, 2013. @article{Taya2013,Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye-movements) made around 'events', which are critical for the scene context (i.e., hit and bounce), were analysed. Overall, we found that experience improved anticipatory eye-movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. |
Eckart Zimmermann The reference frames in saccade adaptation Journal Article In: Journal of Neurophysiology, vol. 109, pp. 1815, 2013. @article{Zimmermann2013a,Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level. |
Ruth Filik; Hartmut Leuthold The role of character-based knowledge in online narrative comprehension: Evidence from eye movements and ERPs Journal Article In: Brain Research, vol. 1506, pp. 94–104, 2013. @article{Filik2013,Little is known about the on-line evaluation of information relating to well-known story characters during text comprehension. For example, it is not clear in how much detail readers represent character-based information, and the time course over which this information is utilized during on-line language comprehension. We describe an event-related potential (ERP) study (Experiment 1) and an eye-tracking study (Experiment 2) investigating whether, and when, readers utilize their prior knowledge of a character in processing event information. Participants read materials in which an event was described that either did or did not fit with the character's typical behavior. ERPs elicited by the critical word revealed an N400 effect when the action described did not fit with the character's typical behavior. Results from early eye movement measures supported these findings, and later measures suggested that such violations were more easily accommodated for well-known fictional characters than real-world characters. |
Steven G. Luke; Antje Nuthmann; John M. Henderson Eye movement control in scene viewing and reading: Evidence from the stimulus onset delay paradigm Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 1, pp. 10–15, 2013. @article{Luke2013b,The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased linearly with delay duration, and the effect was equivalent for both tasks. Although fixations were longer in scene viewing, the effects of onset delay were highly consistent across tasks. These results suggest that reading and scene viewing share a common mechanism for saccade planning and control. |
Tim J. Preston; Fei Guo; Koel Das; Barry Giesbrecht; Miguel P. Eckstein Neural representations of contextual guidance in visual search of real-world scenes Journal Article In: Journal of Neuroscience, vol. 33, no. 18, pp. 7846–7855, 2013. @article{Preston2013,Exploiting scene context and object–object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes. |
Keith Rayner; Jinmian Yang; Susanne Schuett; Timothy J. Slattery Eye movements of older and younger readers when reading unspaced text Journal Article In: Experimental Psychology, vol. 60, no. 5, pp. 354–361, 2013. @article{rycl11,Older and younger readers read normal and unspaced text as their eye movements were monitored. A high or low frequency word was embedded in each sentence. Global analyses yielded large effects of spacing with unspaced text leading to much longer reading times for both groups, but the older readers had much more difficulty with unspaced text than younger readers. Local analyses of the target word revealed large main effects due to age, spacing, and frequency. In general, the older readers had more difficulty with the unspaced text than younger readers and some reasons why they did so are suggested. |
Renée M. Visser; H. Steven Scholte; Tinka Beemsterboer; Merel Kindt Neural pattern similarity predicts long-term fear memory Journal Article In: Nature Neuroscience, vol. 16, no. 4, pp. 388–390, 2013. @article{Visser2013,Although certain changes in the brain may reflect fear learning, there are no known markers that indicate whether an aversive experience will develop into fear memory. We examined the moment-to-moment dynamics of human fear learning by applying multi-voxel pattern analysis to single-trial blood oxygen level–dependent magnetic resonance imaging data. We found that the long-term behavioral expression of fear memory could be predicted from neural patterns at the time of learning. |
Katherine S. White; Eiling Yee; Sheila E. Blumstein; James L. Morgan Adults show less sensitivity to phonetic detail in unfamiliar words, too Journal Article In: Journal of Memory and Language, vol. 68, no. 4, pp. 362–378, 2013. @article{White2013a,Young word learners fail to discriminate phonetic contrasts in certain situations, an observation that has been used to support arguments that the nature of lexical representation and lexical processing changes over development. An alternative possibility, however, is that these failures arise naturally as a result of how word familiarity affects lexical processing. In the present work, we explored the effects of word familiarity on adults' use of phonetic detail. Participants' eye movements were monitored as they heard single-segment onset mispronunciations of words drawn from a newly learned artificial lexicon. In Experiment 1, single-feature onset mispronunciations were presented; in Experiment 2, participants heard two-feature onset mispronunciations. Word familiarity was manipulated in both experiments by presenting words with various frequencies during training. Both word familiarity and degree of mismatch affected adults' use of phonetic detail: in their looking behavior, participants did not reliably differentiate single-feature mispronunciations and correct pronunciations of low frequency words. For higher frequency words, participants differentiated both 1- and 2-feature mispronunciations from correct pronunciations. However, responses were graded such that 2-feature mispronunciations had a greater effect on looking behavior. These experiments demonstrate that the use of phonetic detail in adults, as in young children, is affected by word familiarity. Parallels between the two populations suggest continuity in the architecture underlying lexical representation and processing throughout development. |
Steven G. Luke; Joseph Schmidt; John M. Henderson Temporal oculomotor inhibition of return and spatial facilitation of return in a visual encoding task Journal Article In: Frontiers in Psychology, vol. 4, pp. 400, 2013. @article{Luke2013c,Oculomotor inhibition of return (O-IOR) is an increase in saccade latency prior to an eye movement to a recently fixated location compared to other locations. It has been proposed that this temporal O-IOR may have spatial consequences, facilitating foraging by inhibiting return to previously attended regions. In order to test this possibility, participants viewed arrays of objects and of words while their eye movements were recorded. Temporal O-IOR was observed, with equivalent effects for object and word arrays, indicating that temporal O-IOR is an oculomotor phenomenon independent of array content. There was no evidence for spatial inhibition of return (IOR). Instead, spatial facilitation of return was observed: participants were significantly more likely than chance to make return saccades and to re-fixate just-visited locations. Further, the likelihood of making a return saccade to an object or word was contingent on the amount of time spent viewing that object or word before leaving it. This suggests that, unlike temporal O-IOR, return probability is influenced by cognitive processing. Taken together, these results are inconsistent with the hypothesis that IOR functions as a foraging facilitator. The results also provide strong evidence for a different oculomotor bias that could serve as a foraging facilitator: saccadic momentum, a tendency to repeat the most recently executed saccade program. We suggest that models of visual attention could incorporate saccadic momentum in place of IOR. |
Hans P. Op de Beeck; Ben Vermaercke; Daniel G. Woolley; Nicole Wenderoth Combinatorial brain decoding of people's whereabouts during visuospatial navigation Journal Article In: Frontiers in Neuroscience, vol. 7, pp. 78, 2013. @article{OpdeBeeck2013,Complex behavior typically relies upon many different processes which are related to activity in multiple brain regions. In contrast, neuroimaging analyses typically focus upon isolated processes. Here we present a new approach, combinatorial brain decoding, in which we decode complex behavior by combining the information which we can retrieve from the neural signals about the many different sub-processes. The case in point is visuospatial navigation. We explore the extent to which the route travelled by human subjects (N = 3) in a complex virtual maze can be decoded from activity patterns as measured with functional magnetic resonance imaging. Preliminary analyses suggest that it is difficult to directly decode spatial position from regions known to contain an explicit cognitive map of the environment, such as the hippocampus. Instead, we were able to indirectly derive spatial position from the pattern of activity in visual and motor cortex. The non-spatial representations in these regions reflect processes which are inherent to navigation, such as which stimuli are perceived at which point in time and which motor movement is executed when (e.g., turning left at a crossroad). Highly successful decoding of routes followed through the maze was possible by combining information about multiple aspects of navigation events across time and across multiple cortical regions. This "proof of principle" study highlights how visuospatial navigation is related to the combined activity of multiple brain regions, and establishes combinatorial brain decoding as a means to study complex mental events that involve a dynamic interplay of many cognitive processes. |
Jayalakshmi Viswanathan; Jason J. S. Barton The global effect for antisaccades Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 247–259, 2013. @article{Viswanathan2013,In the global effect, prosaccades are deviated to a position intermediate between two targets or between a distractor and a target, which may reflect spatial averaging in a map encoded by the superior colliculus. Antisaccades differ from prosaccades in that they dissociate the locations of the stimulus and goal and generate weaker collicular activity. We used these antisaccade properties to determine whether the global effect was generated in stimulus or goal computations, and whether the global effect would be larger for antisaccades, as predicted by collicular averaging. In the first two experiments, human subjects performed antisaccades while distractors were placed in the vicinity of either the stimulus or the saccadic goal. Global effects occurred only for goal-related and not for stimulus-related distractors, indicating that this effect emerges from interactions with motor representations. In the last experiment, subjects performed prosaccades and antisaccades with and without goal-related distractors. When the results were adjusted for differences in response latency, the global effect for rapid responses was three to four times larger for antisaccades than for prosaccades. Finally, we compared our findings with predictions from collicular models, to quantitatively test the spatial averaging hypothesis: we found that our results were consistent with the predictions of a collicular model. We conclude that the antisaccade global effect shows properties compatible with spatial averaging in collicular maps and likely originates in layers with neural activity related to goal rather than stimulus representations. |
S. E. Bosch; Sebastiaan F. W. Neggers; Stefan Van der Stigchel The role of the frontal eye fields in oculomotor competition: Image-guided TMS enhances contralateral target selection Journal Article In: Cerebral Cortex, vol. 23, no. 4, pp. 824–832, 2013. @article{Bosch2013,In order to execute a correct eye movement to a target in a search display, a saccade program toward the target element must be activated, while saccade programs toward distracting elements must be inhibited. The aim of the present study was to elucidate the role of the frontal eye fields (FEFs) in oculomotor competition. Functional magnetic resonance imaging-guided single-pulse transcranial magnetic stimulation (TMS) was administered over either the left FEF, the right FEF, or the vertex (control site) at 3 time intervals after target presentation, while subjects performed an oculomotor capture task. When TMS was applied over the FEF contralateral to the visual field where a target was presented, there was less interference of an ipsilateral distractor compared with FEF stimulation ipsilateral to the target's visual field or TMS over vertex. Furthermore, TMS over the FEFs decreased latencies of saccades to the contralateral visual field, irrespective of whether the saccade was directed to the target or to the distractor. These findings show that single-pulse TMS over the FEFs enhances the selection of a target in the contralateral visual field and decreases saccade latencies to the contralateral visual field. |
William J. Harrison; Jason B. Mattingley; Roger W. Remington Eye movement targets are released from visual crowding Journal Article In: Journal of Neuroscience, vol. 33, no. 7, pp. 2927–2933, 2013. @article{Harrison2013,Our ability to recognize objects in peripheral vision is impaired when other objects are nearby (Bouma, 1970). This phenomenon, known as crowding, is often linked to interactions in early visual processing that depend primarily on the retinal position of visual stimuli (Pelli, 2008; Pelli and Tillman, 2008). Here we tested a new account that suggests crowding is influenced by spatial information derived from an extraretinal signal involved in eye movement preparation. We had human observers execute eye movements to crowded targets and measured their ability to identify those targets just before the eyes began to move. Beginning ∼50 ms before a saccade toward a crowded object, we found that not only was there a dramatic reduction in the magnitude of crowding, but the spatial area within which crowding occurred was almost halved. These changes in crowding occurred despite no change in the retinal position of target or flanking stimuli. Contrary to the notion that crowding depends on retinal signals alone, our findings reveal an important role for eye movement signals. Eye movement preparation effectively enhances object discrimination in peripheral vision at the goal of the intended saccade. These presaccadic changes may enable enhanced recognition of visual objects in the periphery during active search of visually cluttered environments. |
Evelyne Lagrou; Robert J. Hartsuiker; Wouter Duyck Interlingual lexical competition in a spoken sentence context: Evidence from the visual world paradigm Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 5, pp. 963–972, 2013. @article{Lagrou2013,We used the visual world paradigm to examine interlingual lexical competition when Dutch-English bilinguals listened to low-constraining sentences in their nonnative (L2; Experiment 1) and native (L1; Experiment 2) languages. Additionally, we investigated the influence of the degree of cross-lingual phonological similarity. When listening in L2, participants fixated more on competitor pictures of which the onset of the name was phonologically related to the onset of the name of the target in the nontarget language (e.g., fles, "bottle", given target flower) than on phonologically unrelated distractor pictures. Even when they listened in L1, this effect was also observed when the onsets of the names of the target picture (in L1) and the competitor picture (in L2) were phonologically very similar. These findings provide evidence for interlingual competition during the comprehension of spoken sentences, both in L2 and in L1. |
Joshua Levy; Tom Foulsham; Alan Kingstone Monsters are people too Journal Article In: Biology Letters, vol. 9, pp. 1–4, 2013. @article{Levy2013,Animals, including dogs, dolphins, monkeys and man, follow gaze. What mediates this bias towards the eyes? One hypothesis is that primates possess a distinct neural module that is uniquely tuned for the eyes of others. An alternative explanation is that configural face processing drives fixations to the middle of peoples' faces, which is where the eyes happen to be located. We distinguish between these two accounts. Observers were presented with images of people, non-human creatures with eyes in the middle of their faces ('humanoids') or creatures with eyes positioned elsewhere ('monsters'). There was a profound and significant bias towards looking early and often at the eyes of humans and humanoids and also, critically, at the eyes of monsters. These findings demonstrate that the eyes, and not the middle of the head, are being targeted by the oculomotor system. |
Antimo Buonocore; Robert D. McIntosh Attention modulates saccadic inhibition magnitude Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1051–1059, 2013. @article{Buonocore2013,Visual transient events during ongoing eye movement tasks inhibit saccades within a precise temporal window, spanning from around 60-120 ms after the event, having maximum effect at around 90 ms. It is not yet clear to what extent this saccadic inhibition phenomenon can be modulated by attention. We studied the saccadic inhibition induced by a bright flash above or below fixation, during the preparation of a saccade to a lateralized target, under two attentional manipulations. Experiment 1 demonstrated that exogenous precueing of a distractor's location reduced saccadic inhibition, consistent with inhibition of return. Experiment 2 manipulated the relative likelihood that a distractor would be presented above or below fixation. Saccadic inhibition magnitude was relatively reduced for distractors at the more likely location, implying that observers can endogenously suppress interference from specific locations within an oculomotor map. We discuss the implications of these results for models of saccade target selection in the superior colliculus. |
Yu-Cin Jian; Ming-Lei Chen; Hwa-Wei Ko Context effects in processing of Chinese academic words: An eye-tracking investigation Journal Article In: Reading Research Quarterly, vol. 48, no. 4, pp. 403–413, 2013. @article{Jian2013,This study investigated context effects of online processing of Chinese academic words during text reading. Undergraduate participants were asked to read Chinese texts that were familiar or unfamiliar (containing physics terminology) to them. Physics texts were selected first, and then we replaced the physics terminology with familiar words; other common words remained the same in both text versions. Our results indicate that readers experienced longer rereading times and total fixation durations for the same common words in the physics texts than for the corresponding texts. Shorter gaze durations were observed for the replaced words than the physics terminology; however, the duration of participants' first fixations on these two word types did not differ from each other. Furthermore, although the participants performed similar reading paths after encountering the target words of the physics terminology and replaced words, their processing duration of the current sentences was very different. They reread the physics terminology more times and spent more reading time on the current sentences containing the physics terminology, searching for more information to aid comprehension. This study showed that adult readers seemed to successfully access each Chinese character's meaning but initially failed to access the meaning of the physics terminology. This could be attributable to the nature of the formation of Chinese words; however, the use of contextual information to comprehend unfamiliar words is a universal phenomenon. |
Maciej Kosilo; Sophie M. Wuerger; Matt Craddock; Ben J. Jennings; Amelia R. Hunt; Jasna Martinovic Low-level and high-level modulations of fixational saccades and high frequency oscillatory brain activity in a visual object classification task Journal Article In: Frontiers in Psychology, vol. 4, pp. 948, 2013. @article{Kosilo2013,Until recently induced gamma-band activity (GBA) was considered a neural marker of cortical object representation. However, induced GBA in the electroencephalogram (EEG) is susceptible to artifacts caused by miniature fixational saccades. Recent studies have demonstrated that fixational saccades also reflect high-level representational processes. Do high-level as opposed to low-level factors influence fixational saccades? What is the effect of these factors on artifact-free GBA? To investigate this, we conducted separate eye tracking and EEG experiments using identical designs. Participants classified line drawings as objects or non-objects. To introduce low-level differences, contours were defined along different directions in cardinal color space: S-cone-isolating, intermediate isoluminant, or a full-color stimulus, the latter containing an additional achromatic component. Prior to the classification task, object discrimination thresholds were measured and stimuli were scaled to matching suprathreshold levels for each participant. In both experiments, behavioral performance was best for full-color stimuli and worst for S-cone isolating stimuli. Saccade rates 200-700 ms after stimulus onset were modulated independently by low and high-level factors, being higher for full-color stimuli than for S-cone isolating stimuli and higher for objects. Low-amplitude evoked GBA and total GBA were observed in very few conditions, showing that paradigms with isoluminant stimuli may not be ideal for eliciting such responses. 
We conclude that cortical loops involved in the processing of objects are preferentially excited by stimuli that contain achromatic information. Their activation can lead to relatively early exploratory eye movements even for foveally-presented stimuli. |
Kimberly S. Chiew; Todd S. Braver Temporal dynamics of motivation-cognitive control interactions revealed by high-resolution pupillometry Journal Article In: Frontiers in Psychology, vol. 4, pp. 15, 2013. @article{Chiew2013,Motivational manipulations, such as the presence of performance-contingent reward incentives, can have substantial influences on cognitive control. Previous evidence suggests that reward incentives may enhance cognitive performance specifically through increased preparatory, or proactive, control processes. The present study examined reward influences on cognitive control dynamics in the AX-Continuous Performance Task (AX-CPT), using high-resolution pupillometry. In the AX-CPT, contextual cues must be actively maintained over a delay in order to appropriately respond to ambiguous target probes. A key feature of the task is that it permits dissociable characterization of preparatory, proactive control processes (i.e., utilization of context) and reactive control processes (i.e., target-evoked interference resolution). Task performance profiles suggested that reward incentives enhanced proactive control (context utilization). Critically, pupil dilation was also increased on reward incentive trials during context maintenance periods, suggesting trial-specific shifts in proactive control, particularly when context cues indicated the need to overcome the dominant target response bias. Reward incentives had both transient (i.e., trial-by-trial) and sustained (i.e., block-based) effects on pupil dilation, which may reflect distinct underlying processes. The transient pupillary effects were present even when comparing against trials matched in task performance, suggesting a unique motivational influence of reward incentives. These results suggest that pupillometry may be a useful technique for investigating reward motivational signals and their dynamic influence on cognitive control. |
Roger P. Levy; Frank Keller Expectation and locality effects in German verb-final structures Journal Article In: Journal of Memory and Language, vol. 68, no. 2, pp. 199–222, 2013. @article{Levy2013a,Probabilistic expectations and memory limitations are central factors governing the real-time comprehension of natural language, but how the two factors interact remains poorly understood. One respect in which the two factors have come into theoretical conflict is the documentation of both locality effects, in which having more dependents preceding a governing verb increases processing difficulty at the verb, and anti-locality effects, in which having more preceding dependents facilitates processing at the verb. However, no controlled study has previously demonstrated both locality and anti-locality effects in the same type of dependency relation within the same language. Additionally, many previous demonstrations of anti-locality effects have been potentially confounded with lexical identity, plausibility, and sentence position. Here, we provide new evidence of both locality and anti-locality effects in the same type of dependency relation in a single language (verb-final constructions in German) while controlling for lexical identity, plausibility, and sentence position. In main clauses, we find clear anti-locality effects, with the presence of a preceding dative argument facilitating processing at the final verb; in subject-extracted relative clauses with identical linear ordering of verbal dependents, we find both anti-locality and locality effects, with processing facilitated when the verb is preceded by a dative argument alone, but hindered when the verb is preceded by both the dative argument and an adjunct. These results indicate that both expectations and memory limitations need to be accounted for in any complete theory of online syntactic comprehension. |
J. R. Lukos; J. Snider; M. E. Hernandez; E. Tunik; S. Hillyard; Howard Poizner Parkinson's disease patients show impaired corrective grasp control and eye-hand coupling when reaching to grasp virtual objects Journal Article In: Neuroscience, vol. 254, pp. 205–221, 2013. @article{Lukos2013,The effect of Parkinson's disease (PD) on hand-eye coordination and corrective response control during reach-to-grasp tasks remains unclear. Moderately impaired PD patients (n = 9) and age-matched controls (n = 12) reached to and grasped a virtual rectangular object, with haptic feedback provided to the thumb and index fingertip by two 3-degree-of-freedom manipulanda. The object rotated unexpectedly on a minority of trials, requiring subjects to adjust their grasp aperture. On half the trials, visual feedback of finger positions disappeared during the initial phase of the reach, when feedforward mechanisms are known to guide movement. PD patients were tested without (OFF) and with (ON) medication to investigate the effects of dopamine depletion and repletion on eye-hand coordination and online corrective response control. We quantified eye-hand coordination by monitoring hand kinematics and eye position during the reach. We hypothesized that if the basal ganglia are important for eye-hand coordination and online corrections to object perturbations, then PD patients tested OFF medication would show reduced eye-hand spans and impoverished arm-hand coordination responses to the perturbation, which would be further exacerbated when visual feedback of the hand was removed. Strikingly, PD patients tracked their hands with their gaze, and their movements became destabilized when having to make online corrective responses to object perturbations, exhibiting pauses and changes in movement direction. These impairments largely remained even when tested in the ON state, despite significant improvement on the Unified Parkinson's Disease Rating Scale. 
Our findings suggest that basal ganglia-cortical loops are essential for mediating eye-hand coordination and adaptive online responses for reach-to-grasp movements, and that restoration of tonic levels of dopamine may not be adequate to remediate this coordinative nature of basal ganglia-modulated function. |
Linsey Roijendijk; Jason Farquhar; Marcel A. J. Gerven; Ole Jensen; Stan Gielen In: PLoS ONE, vol. 8, no. 12, pp. e80489, 2013. @article{rfvjg13,OBJECTIVE: Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. APPROACH: We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block wise and subjects were aware of the difficulty level of each block. MAIN RESULTS: Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. SIGNIFICANCE: Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity in contrast to results of previous research. |
Viral Sheth; Irene Gottlob; Sarim Mohammad; Rebecca J. McLean; Gail D. E. Maconachie; Anil Kumar; Christopher Degg; Frank A. Proudlock Diagnostic potential of iris cross-sectional imaging in albinism using optical coherence tomography Journal Article In: Ophthalmology, vol. 120, no. 10, pp. 2082–2090, 2013. @article{Sheth2013,Purpose: To characterize in vivo anatomic abnormalities of the iris in albinism compared with healthy controls using anterior segment optical coherence tomography (AS-OCT) and to explore the diagnostic potential of this technique for albinism. We also investigated the relationship between iris abnormalities and other phenotypical features of albinism. Design: Prospective cross-sectional study. Participants: A total of 55 individuals with albinism and 45 healthy controls. Methods: We acquired 4.37×4.37-mm volumetric scans (743 A-scans, 50 B-scans) of the nasal and temporal iris in both eyes using AS-OCT (3-μm axial resolution). Iris layers were segmented and thicknesses were measured using ImageJ software. Iris transillumination was graded using Summers and colleagues' classification. Retinal OCT, eye movement recordings, best-corrected visual acuity (BCVA), visual evoked potential (VEP), and grading of skin and hair pigmentation were used to quantify other phenotypical features associated with albinism. Main Outcome Measures: Iris AS-OCT measurements included (1) total iris thickness, (2) stroma/anterior border (SAB) layer thickness, and (3) posterior epithelial layer (PEL) thickness. Correlation with other phenotypical measurements, including (1) iris transillumination grading, (2) retinal layer measurements at the fovea, (3) nystagmus intensity, (4) BCVA, (5) VEP asymmetry, (6) skin pigmentation, and (7) hair pigmentation (of head hair, lashes, and brows). 
Results: The mean iris thickness was 10.7% thicker in controls (379.3±44.0 μm) compared with the albinism group (342.5±52.6 μm; P < 0.001), SAB layers were 5.8% thicker in controls (315.1±43.8 μm) compared with the albinism group (297.7±50.0 μm; P=0.044), and PEL was 44.0% thicker in controls (64.1±11.7 μm) compared with the albinism group (44.5±13.9 μm; P < 0.0001). The most ciliary quartile of the PEL yielded a sensitivity of 85% and specificity of 78% for detecting albinism. Phenotypic features of albinism, such as skin and hair pigmentation, BCVA, and nystagmus intensity, were significantly correlated to AS-OCT iris thickness measurements. Conclusions: We have characterized in vivo abnormalities of the iris associated with albinism for the first time and show that PEL thickness is particularly affected. We demonstrate that PEL thickness has diagnostic potential for detecting iris abnormalities in albinism. Anterior segment OCT iris measurements are significantly correlated to BCVA and nystagmus intensity in contrast to iris transillumination grading measurements that were not. |
C. Cavina-Pratesi; Constanze Hesse Why do the eyes prefer the index finger? Simultaneous recording of eye and hand movements during precision grasping Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–15, 2013. @article{CavinaPratesi2013,Previous research investigating eye movements when grasping objects with precision grip has shown that we tend to fixate close to the contact position of the index finger on the object. It has been hypothesized that this behavior is related to the fact that the index finger usually describes a more variable trajectory than the thumb and therefore requires a higher amount of visual monitoring. We wished to directly test this prediction by creating a grasping task in which either the index finger or the thumb described a more variable trajectory. Experiment 1 showed that the trajectory variability of the digits can be manipulated by altering the direction from which the hand approaches the object. If the start position is located in front of the object (hand-before), the index finger produces a more variable trajectory. In contrast, when the hand approaches the object from a starting position located behind it (hand-behind), the thumb produces a more variable movement path. In Experiment 2, we tested whether the fixation pattern during grasping is altered in conditions in which the trajectory variability of the two digits is reversed. Results suggest that regardless of the trajectory variability, the gaze was always directed toward the contact position of the index finger. Notably, we observed that regardless of our starting position manipulation, the index finger was the first digit to make contact with the object. Hence, we argue that time to contact (and not movement variability) is the crucial parameter which determines where we look during grasping. |
Li Zhang; Jie Ren; Liang Xu; Xue Jun Qiu; Jost B. Jonas Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis Journal Article In: British Journal of Ophthalmology, vol. 97, no. 7, pp. 941–942, 2013. @article{Zhang2013a,With the growth in three-dimensional viewing of movies, we assessed whether visual fatigue or alertness differed between three-dimensional (3D) viewing versus two-dimensional (2D) viewing of movies. We used a camera-based analysis of eye movements to measure blinking, fixation and saccades as surrogates of visual fatigue. |
Patrick J. Mineault; Theodoros P. Zanos; Christopher C. Pack Local field potentials reflect multiple spatial scales in V4 Journal Article In: Frontiers in Computational Neuroscience, vol. 7, pp. 21, 2013. @article{Mineault2013,Local field potentials (LFP) reflect the properties of neuronal circuits or columns recorded in a volume around a microelectrode (Buzsáki et al., 2012). The extent of this integration volume has been a subject of some debate, with estimates ranging from a few hundred microns (Katzner et al., 2009; Xing et al., 2009) to several millimeters (Kreiman et al., 2006). We estimated receptive fields (RFs) of multi-unit activity (MUA) and LFPs at an intermediate level of visual processing, in area V4 of two macaques. The spatial structure of LFP receptive fields varied greatly as a function of time lag following stimulus onset, with the retinotopy of LFPs matching that of MUAs at a restricted set of time lags. A model-based analysis of the LFPs allowed us to recover two distinct stimulus-triggered components: an MUA-like retinotopic component that originated in a small volume around the microelectrodes (~350 μm), and a second component that was shared across the entire V4 region; this second component had tuning properties unrelated to those of the MUAs. Our results suggest that the LFP reflects neural activity across multiple spatial scales, which both complicates its interpretation and offers new opportunities for investigating the large-scale structure of network processing. |
Shai Gabay; Yoni Pertzov; Noga Cohen; Galia Avidan; Avishai Henik Remapping of the environment without corollary discharges: Evidence from scene-based IOR Journal Article In: Journal of Vision, vol. 13, pp. 1–10, 2013. @article{Gabay2013,Previous studies suggested that in order to perceive a stable image of the visual world despite constant eye movements, an efference copy of the oculomotor command is used to remap the representation of the environment in the brain. In two experiments, an inhibitory attentional component (inhibition of return-IOR) was used to examine whether remapping can occur also in the absence of eye movements. Participants were asked to maintain fixation while an unpredictive, attention-grabbing cue appeared and was then followed by a movement of the background image which was artificial (random dots, Experiment 1) or composed of natural scenes (Experiment 2). The participants were then required to respond to a target stimulus that was presented either at the same location as the cue relative to fixation (retinotopic), or at a matching location relative to the background (scene based). In both experiments, an IOR effect was found in scene-based locations immediately after the movement of the background. We suggest that remapping of the inhibitory tagging, which might be a proxy for remapping of the visual scene, could be accomplished rapidly even without the use of an efference copy; the inhibitory tag seems to be anchored to the background image and to move together with it. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Spatial position information accumulates steadily over time Journal Article In: Journal of Neuroscience, vol. 33, no. 47, pp. 18396–18401, 2013. @article{Zimmermann2013,One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked. |
Pilyoung Kim; Joseph Arizpe; Brooke H. Rosen; Varun Razdan; Catherine T. Haring; Sarah E. Jenkins; Christen M. Deveney; Melissa A. Brotman; R. James R. Blair; Daniel S. Pine; Chris I. Baker; Ellen Leibenluft Impaired fixation to eyes during facial emotion labelling in children with bipolar disorder or severe mood dysregulation Journal Article In: Journal of Psychiatry and Neuroscience, vol. 38, no. 6, pp. 407–416, 2013. @article{Kim2013a,Background: Children with bipolar disorder (BD) or severe mood dysregulation (SMD) show behavioural and neural deficits during facial emotion processing. In those with other psychiatric disorders, such deficits have been associated with reduced attention to eye regions while looking at faces. Methods: We examined gaze fixation patterns during a facial emotion labelling task among children with pediatric BD and SMD and among healthy controls. Participants viewed facial expressions with varying emotions (anger, fear, sadness, happiness, neutral) and emotional levels (60%, 80%, 100%) and labelled emotional expressions. Results: Our study included 22 children with BD, 28 with SMD and 22 controls. Across all facial emotions, children with BD and SMD made more labelling errors than controls. Compared with controls, children with BD spent less time looking at eyes and made fewer eye fixations across emotional expressions. Gaze patterns in children with SMD tended to fall between those of children with BD and controls, although they did not differ significantly from either of these groups on most measures. Decreased fixations to eyes correlated with lower labelling accuracy in children with BD, but not in those with SMD or in controls. Limitations: Most children with BD were medicated, which precluded our ability to evaluate medication effects on gaze patterns. Conclusion: Facial emotion labelling deficits in children with BD are associated with impaired attention to eyes. 
Future research should examine whether impaired attention to eyes is associated with neural dysfunction. Eye gaze deficits in children with BD during facial emotion labelling may also have treatment implications. Finally, children with SMD exhibited decreased attention to eyes to a lesser extent than those with BD, and these equivocal findings are worthy of further study. |
Li Zhang; Ya-Qin Zhang; Jing-Shang Zhang; Liang Xu; Jost B. Jonas Visual fatigue and discomfort after stereoscopic display viewing Journal Article In: Acta Ophthalmologica, vol. 91, no. 2, pp. 149–153, 2013. @article{Zhang2013b,Purpose: Different types of stereoscopic video displays have recently been introduced. We measured and compared visual fatigue and visual discomfort induced by viewing two different stereoscopic displays that either used the pattern retarder-spatial domain technology with linearly polarized three-dimensional technology or the circularly polarized three-dimensional technology using shutter glasses. Methods: During this observational cross-over study performed on two consecutive days, a video was watched by 30 subjects (age: 20-30 years). Half of the participants watched the screen with a pattern retarder three-dimensional display on the first day and a shutter glasses three-dimensional display on the second day, and vice versa. The study participants underwent a standardized interview on visual discomfort and fatigue, and a series of functional examinations prior to, and shortly after viewing the movie. 
Additionally, a subjective score for visual fatigue was given. Results: Accommodative magnitude (right eye: p < 0.001; left eye: p = 0.01), accommodative facility (p = 0.008), near-point convergence break-up point (p = 0.007), near-point convergence recovery point (p = 0.001), negative (p = 0.03) and positive (p = 0.001) relative accommodation were significantly smaller, and the visual fatigue score was significantly higher (1.65 ± 1.18 versus 1.20 ± 1.03; p = 0.02) after viewing the shutter glasses three-dimensional display than after viewing the pattern retarder three-dimensional display. Conclusions: Stereoscopic viewing using pattern retarder (polarized) three-dimensional displays as compared with stereoscopic viewing using shutter glasses three-dimensional displays resulted in significantly less visual fatigue as assessed subjectively, parallel to significantly better values of accommodation and convergence as measured objectively. |
Manon Mulckhuyse; Geert Crombez; Stefan Van der Stigchel Conditioned fear modulates visual selection Journal Article In: Emotion, vol. 13, no. 3, pp. 529–536, 2013. @article{Mulckhuyse2013,Eye movements reflect the dynamic interplay between top-down- and bottom-up-driven processes. For example, when we voluntarily move our eyes across the visual field, salient visual stimuli in the environment may capture our attention, our eyes, or modulate the trajectory of an eye movement. Previous research has shown that the behavioral relevance of a salient stimulus modulates these processes. This study investigated whether a stimulus signaling an aversive event modulates saccadic behavior. Using a differential fear-conditioning procedure, we presented a threatening (conditional stimulus: CS+) and a nonthreatening stimulus distractor (CS-) during an oculomotor selection task. The results show that short-latency saccades deviated more strongly toward the CS+ than toward the CS- distractor, whereas long-latency saccades deviated more strongly away from the CS+ than from the CS- distractor. Moreover, the CS+ distractor captured the eyes more often than the CS- distractor. Together, these results demonstrate that conditioned fear has a direct and immediate influence on visual selection. The findings are interpreted in terms of a neurobiological model of emotional visual processing. |
Chen Song; D. Samuel Schwarzkopf; Antoine Lutti; Baojuan Li; Ryota Kanai; Geraint Rees Effective connectivity within human primary visual cortex predicts interindividual diversity in illusory perception Journal Article In: Journal of Neuroscience, vol. 33, no. 48, pp. 18781–18791, 2013. @article{ssllkr13,Visual perception depends strongly on spatial context. A classic example is the tilt illusion where the perceived orientation of a central stimulus differs from its physical orientation when surrounded by tilted spatial contexts. Here we show that such contextual modulation of orientation perception exhibits trait-like interindividual diversity that correlates with interindividual differences in effective connectivity within human primary visual cortex. We found that the degree to which spatial contexts induced illusory orientation perception, namely, the magnitude of the tilt illusion, varied across healthy human adults in a trait-like fashion independent of stimulus size or contrast. Parallel to contextual modulation of orientation perception, the presence of spatial contexts affected effective connectivity within human primary visual cortex between peripheral and foveal representations that responded to spatial context and central stimulus, respectively. Importantly, this effective connectivity from peripheral to foveal primary visual cortex correlated with interindividual differences in the magnitude of the tilt illusion. Moreover, this correlation with illusion perception was observed for effective connectivity under tilted contextual stimulation but not for that under iso-oriented contextual stimulation, suggesting that it reflected the impact of orientation-dependent intra-areal connections. Our findings revealed an interindividual correlation between intra-areal connectivity within primary visual cortex and contextual influence on orientation perception. 
This neurophysiological-perceptual link provides empirical evidence for theoretical proposals that intra-areal connections in early visual cortices are involved in contextual modulation of visual perception. |
Jill X. O'Reilly; Urs Schuffelgen; Steven F. Cuell; Timothy E. J. Behrens; Rogier B. Mars; Matthew F. S. Rushworth Dissociable effects of surprise and model update in parietal and anterior cingulate cortex Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 38, pp. E3660–E3669, 2013. @article{OReilly2013,Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (I_S) and Kullback-Leibler divergence (D_KL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/I_S and updating/D_KL. Although surprise/I_S was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (D_KL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating with an early positive effect of surprise and late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically.
The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena, such as extinction and Balint syndrome, and neglect. |
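The O'Reilly et al. abstract formalizes surprise as the Shannon information of an observation and model updating as the Kullback-Leibler divergence between posterior and prior beliefs. A minimal sketch of the two quantities for a discrete belief over saccade targets (the helper functions and the example probabilities are illustrative, not values from the paper):

```python
import math

def shannon_surprise(p_obs):
    """Shannon information of an observation: I_S = -log2 p(observation)."""
    return -math.log2(p_obs)

def kl_divergence(p_new, p_old):
    """Kullback-Leibler divergence D_KL(new || old) between two discrete
    distributions defined over the same outcomes."""
    return sum(p * math.log2(p / q) for p, q in zip(p_new, p_old) if p > 0)

# Prior belief over two saccade targets, and a posterior after an
# unexpected observation (numbers are made up for illustration).
prior = [0.9, 0.1]
posterior = [0.5, 0.5]

surprise = shannon_surprise(prior[1])   # the low-probability target appeared
update = kl_divergence(posterior, prior)
```

The point of the dissociation is visible even in this toy case: an improbable observation always yields high I_S, but D_KL is large only if the observer actually revises the predictive model rather than treating the event as noise.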
Guanghan Song; Denis Pellerin; Lionel Granjon Different types of sounds influence gaze differently in videos Journal Article In: Journal of Eye Movement Research, vol. 6, no. 4, pp. 1–13, 2013. @article{Song2013,This paper presents an analysis of the effect of different types of sounds on visual gaze when a person is looking freely at videos, which would be helpful to predict eye position. In order to test the effect of sound, an audio-visual experiment was designed with two groups of participants, with audio-visual (AV) and visual (V) conditions. By using statistical tools, we analyzed the difference between eye position of participants with AV and V conditions. We observed that the effect of sound is different depending on the kind of sound, and that the classes with human voice (i.e. speech, singer, human noise and singers) have the greatest effect. Furthermore, the results of the distance between sound source and eye position of the group with the AV condition suggested that only particular types of sound attract human eye position to the sound source. Finally, an analysis of the fixation duration between AV and V conditions showed that participants with the AV condition moved their eyes more frequently than those with the V condition. |
Wesley K. Burge; Lesley A. Ross; Franklin R. Amthor; William G. Mitchell; Alexander Zotov; Kristina M. Visscher Processing speed training increases the efficiency of attentional resource allocation in young adults Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 684, 2013. @article{Burge2013,Cognitive training has been shown to improve performance on a range of tasks. However, the mechanisms underlying these improvements are still unclear. Given the wide range of transfer effects, it is likely that these effects are due to a factor common to a wide range of tasks. One such factor is a participant's efficiency in allocating limited cognitive resources. The impact of a cognitive training program, Processing Speed Training (PST), on the allocation of resources to a set of visual tasks was measured using pupillometry in 10 young adults as compared to a control group of 10 young adults (n = 20). PST is a well-studied computerized training program that involves identifying simultaneously presented central and peripheral stimuli. As training progresses, the task becomes increasingly difficult by including peripheral distracting stimuli and decreasing the duration of stimulus presentation. Analysis of baseline data confirmed that pupil diameter reflected cognitive effort. After training, participants randomized to PST used fewer attentional resources to perform complex visual tasks as compared to the control group. These pupil diameter data indicated that PST appears to increase the efficiency of attentional resource allocation. Increases in cognitive efficiency have been hypothesized to underlie improvements following experience with action video games, and improved cognitive efficiency has been hypothesized to underlie the benefits of PST in older adults. These data reveal that these training schemes may share a common underlying mechanism of increasing cognitive efficiency in younger adults. |
Hayward J. Godwin; Valerie Benson; Denis Drieghe Using interrupted visual displays to explore the capacity, time course, and format of fixation plans during visual search Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 6, pp. 1700–1712, 2013. @article{Godwin2013,We assessed fixation planning in visual search in two experiments by tracking participants' eye movements while they each searched for a simple target (a T shape) among a set of distractors (L shapes). On some trials, the search display was briefly interrupted with a blank screen and, following a randomly determined period of elapsed time, the search display was reinstated. In Experiment 1, we found that search continued during the interruption but fixation durations were increased and the accuracy of saccadic targeting was impaired. An MLM demonstrated that acuity played a role in determining whether fixated missing objects were processed during the interruption and that fixation planning was uninfluenced by the length of time available prior to the interruption. In Experiment 2, to check that fixations in the interruption periods were not random, half the distractors were blue (the target was blue as well) and half were yellow. All of the findings from Experiment 1 were replicated and the majority of fixations in the interruption period landed upon blue distractors. Results are discussed in relation to the capacity, time course, and format of fixation plans in visual search. |
Suiping Wang; Deyuan Mo; Ming Xiang; Ruiping Xu; Hsuan-Chih Chen The time course of semantic and syntactic processing in reading Chinese: Evidence from ERPs Journal Article In: Language and Cognitive Processes, vol. 28, no. 4, pp. 577–596, 2013. @article{Wang2013c,The time course of semantic and syntactic processing in reading Chinese was examined by recording event-related brain potentials (ERPs) as native Chinese speakers read individually presented sentences for comprehension and performed semantic plausibility judgments. The transitivity of the verbs in Chinese ba/bei constructions was manipulated to form three types of stimuli: Congruent sentences (CON), sentences with semantic violation (SEM), and sentences with combined semantic and syntactic violation (SEM+SYN). Compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400-P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was bigger than that in SEM. Overall, the lack of a difference between SEM and SEM+SYN in the earlier time window (i.e., N400 window) suggested that syntactic processing in Chinese does not necessarily occur earlier than semantic processing. |
Veronica Whitford; Gillian A. O'Driscoll; Christopher C. Pack; Ridha Joober; Ashok Malla; Debra Titone In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 57–75, 2013. @article{Whitford2013,Language and oculomotor disturbances are 2 of the best replicated findings in schizophrenia. However, few studies have examined skilled reading in schizophrenia (e.g., Arnott, Sali, Copland, 2011; Hayes & O'Grady, 2003; Revheim et al., 2006; E. O. Roberts et al., 2012), and none have examined the contribution of cognitive and motor processes that underlie reading performance. Thus, to evaluate the relationship of linguistic processes and oculomotor control to skilled reading in schizophrenia, 20 individuals with schizophrenia and 16 demographically matched controls were tested using a moving window paradigm (McConkie & Rayner, 1975). Linguistic skills supporting reading (phonological awareness) were assessed with the Comprehensive Test of Phonological Processing (R. K. Wagner, Torgesen, & Rashotte, 1999). Eye movements were assessed during reading tasks and during nonlinguistic tasks tapping basic oculomotor control (prosaccades, smooth pursuit) and executive functions (predictive saccades, antisaccades). Compared with controls, schizophrenia patients exhibited robust oculomotor markers of reading difficulty (e.g., reduced forward saccade amplitude) and were less affected by reductions in window size, indicative of reduced perceptual span. Reduced perceptual span in schizophrenia was associated with deficits in phonological processing and reduced saccade amplitudes. Executive functioning (antisaccade errors) was not related to perceptual span but was related to reading comprehension. These findings suggest that deficits in language, oculomotor control, and cognitive control contribute to skilled reading deficits in schizophrenia. 
Given that both language and oculomotor dysfunction precede illness onset, reading may provide a sensitive window onto cognitive dysfunction in schizophrenia vulnerability and be an important target for cognitive remediation. |
Virginie Desestret; Nathalie Streichenberger; Muriel T. N. Panouillères; Denis Pélisson; B. Plus; Charles Duyckaerts; Dennis K. Burns; Christian Scheiber; Alain Vighetto; Caroline Tilikete An elderly woman with difficulty reading and abnormal eye movements Journal Article In: Journal of Neuro-Ophthalmology, vol. 33, no. 3, pp. 296–301, 2013. @article{Desestret2013,A 73-year-old woman was evaluated in our neuro-ophthalmology clinic with a 1-year history of progressive difficulty reading. The patient's visual acuity, pupillary reactions to light and near stimulation, visual fields, and fundi were normal. Examination of her eye movements revealed a supranuclear vertical gaze abnormality, characterized by lack of upward saccades but intact downward saccades. The patient also had difficulty initiating voluntary, especially leftward, horizontal saccades on command, but reactive horizontal saccades were relatively well preserved. She was able to follow a pencil light moved by the examiner using small saccades (saccadic smooth pursuit) and her vestibulo-ocular reflex (VOR) was intact. She had apraxia of lid closure. The patient had no cognitive deficit, behavioral or social disturbance, aphasia, alexia, limb apraxia, postural ataxia, pyramidal signs, or parkinsonism. |
Stéphanie Ducrot; Joël Pynte; Alain Ghio; Bernard Lété Visual and linguistic determinants of the eyes' initial fixation position in reading development Journal Article In: Acta Psychologica, vol. 142, no. 3, pp. 287–298, 2013. @article{Ducrot2013,Two eye-movement experiments with one hundred and seven first- through fifth-grade children were conducted to examine the effects of visuomotor and linguistic factors on the recognition of words and pseudowords presented in central vision (using a variable-viewing-position technique) and in parafoveal vision (shifted to the left or right of a central fixation point). For all groups of children, we found a strong effect of stimulus location, in both central and parafoveal vision. This effect corresponds to the children's apparent tendency, for peripherally located targets, to reach a position located halfway between the middle and the left edge of the stimulus (preferred viewing location, PVL), whether saccading to the right or left. For centrally presented targets, refixation probability and lexical-decision time were the lowest near the word's center, suggesting an optimal viewing position (OVP). The viewing-position effects found here were modulated (1) by print exposure, both in central and parafoveal vision; and (2) by the intrinsic qualities of the stimulus (lexicality and word frequency) for targets in central vision but not for parafoveally presented targets. |
Joo-Hyun Song; Patrick Bédard Allocation of attention for dissociated visual and motor goals Journal Article In: Experimental Brain Research, vol. 226, no. 2, pp. 209–219, 2013. @article{Song2013a,In daily life, selecting an object visually is closely intertwined with processing that object as a potential goal for action. Since visual and motor goals are typically identical, it remains unknown whether attention is primarily allocated to a visual target, a motor goal, or both. Here, we dissociated visual and motor goals using a visuomotor adaptation paradigm, in which participants reached toward a visual target using a computer mouse or a stylus pen, while the direction of the cursor was rotated 45° counter-clockwise from the direction of the hand movement. Thus, as visuomotor adaptation was accomplished, the visual target was dissociated from the movement goal. Then, we measured the locus of attention using an attention-demanding rapid serial visual presentation (RSVP) task, in which participants detected a pre-defined visual stimulus among the successive visual stimuli presented on either the visual target, the motor goal, or a neutral control location. We demonstrated that before visuomotor adaptation, participants performed better when the RSVP stream was presented at the visual target than at other locations. However, once visual and motor goals were dissociated following visuomotor adaptation, performance at the visual and motor goals was equated and better than performance at the control location. Therefore, we concluded that attentional resources are allocated both to visual target and motor goals during goal-directed reaching movements. |
Hayward J. Godwin; Stuart Hyde; Dominic Taunton; James Calver; James I. R. Blake; Simon P. Liversedge The influence of expertise on maritime driving behaviour Journal Article In: Applied Cognitive Psychology, vol. 27, no. 4, pp. 483–492, 2013. @article{Godwin2013a,We compared expert and novice behaviour in a group of participants as they engaged in a simulated maritime driving task. We varied the difficulty of the driving task by controlling the severity of the sea state in which they were driving their craft. Increases in sea severity increased the size of the upcoming waves while also increasing the length of the waves. Expert participants drove their craft at a higher speed than novices and decreased their fixation durations as wave severity increased. Furthermore, the expert participants increased the horizontal spread of their fixation positions as wave severity increased to a greater degree than novices. Conversely, novice participants showed evidence of a greater vertical spread of fixations than experts. By connecting our findings with previous research investigating eye movement behaviour and road driving, we suggest that novice or inexperienced drivers show inflexibility in adaptation to changing driving conditions. |
Richard L. Lewis; Michael Shvartsman; Satinder Singh The adaptive nature of eye movements in linguistic tasks: How payoff and architecture shape speed-accuracy trade-offs Journal Article In: Topics in Cognitive Science, vol. 5, no. 3, pp. 581–610, 2013. @article{Lewis2013,We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to—and found to accord with—eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied.
These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy, ; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. |
Sophie Marat; Anis Rahman; Denis Pellerin; Nathalie Guyader; Dominique Houzet Improving visual saliency by adding 'face feature map' and 'center bias' Journal Article In: Cognitive Computation, vol. 5, no. 1, pp. 63–75, 2013. @article{Marat2013,Faces play an important role in guiding visual attention, thus the inclusion of face detection into a classical visual attention model can improve eye movement predictions. In this study, we proposed a visual saliency model to predict eye movements during free viewing of videos. The model is inspired by the biology of the visual system, and breaks down each frame of a video database into three saliency maps, each earmarked for a particular visual feature. (i) A 'static' saliency map emphasizes regions that differ from their context in terms of luminance, orientation and spatial frequency. (ii) A 'dynamic' saliency map emphasizes moving regions with values proportional to motion amplitude. (iii) A 'face' saliency map emphasizes areas where a face is detected with a value proportional to the confidence of the detection. In parallel, a behavioral experiment was carried out to record eye movements of participants when viewing the videos. These eye movements were compared with the model's saliency maps to quantify their efficiency. We also examined the influence of center bias on the saliency maps, and incorporated it into the model in a suitable way. Finally, we proposed an efficient fusion method of all these saliency maps. Consequently, the fused master saliency map developed in this research is a good predictor of participants' eye positions. |
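The Marat et al. abstract describes combining static, dynamic, and face saliency maps, plus a center bias, into one master map. A toy sketch of one possible fusion scheme (the `fuse_saliency_maps` helper and its mean-plus-product weighting are assumptions for illustration, not the fusion rule actually proposed in the paper):

```python
import numpy as np

def fuse_saliency_maps(static, dynamic, face, center_bias=None):
    """Fuse per-feature saliency maps into a single master map.

    Each input is a 2-D array of non-negative saliency values for one
    frame. Maps are normalized to [0, 1], averaged, and a product term
    rewards locations where all features agree. An optional center-bias
    map (e.g., a centered 2-D Gaussian) multiplicatively reweights the
    result.
    """
    maps = [m / m.max() if m.max() > 0 else m for m in (static, dynamic, face)]
    fused = np.mean(maps, axis=0) + np.prod(maps, axis=0)
    if center_bias is not None:
        fused = fused * center_bias
    return fused / fused.max() if fused.max() > 0 else fused
```

The additive term keeps a location salient when any single feature (e.g., a detected face) is strong, while the multiplicative term boosts locations flagged by all features at once; most published fusion schemes are some weighted mixture of these two ideas.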
Silvia Primativo; Lisa S. Arduino; Maria De Luca; Roberta Daini; Marialuisa Martelli Neglect dyslexia: A matter of "good looking" Journal Article In: Neuropsychologia, vol. 51, no. 11, pp. 2109–2119, 2013. @article{Primativo2013,Brain-damaged patients with right-sided unilateral spatial neglect (USN) often make left-sided errors in reading single words or pseudowords (neglect dyslexia, ND). We propose that both left neglect and low fixation accuracy account for reading errors in neglect dyslexia. Eye movements were recorded in USN patients with (ND+) and without (ND-) neglect dyslexia and in a matched control group of right brain-damaged patients without neglect (USN-). Unlike ND- and controls, ND+ patients showed left lateralized omission errors and a distorted eye movement pattern in both a reading aloud task and a non-verbal saccadic task. During reading, the total number of fixations was larger in these patients independent of visual hemispace, and most fixations were inaccurate. Similarly, in the saccadic task only ND+ patients were unable to reach the moving dot. A third experiment addressed the nature of the left lateralization in reading error distribution by simulating neglect dyslexia in ND- patients. ND- and USN- patients had to perform a speeded reading-at-threshold task that did not allow for eye movements. When stimulus exploration was prevented, ND- patients, but not controls, produced a pattern of errors similar to that of ND+ with unlimited exposure time (e.g., left-sided errors). We conclude that neglect dyslexia reading errors may arise in USN patients as a consequence of an additional and independent deficit unrelated to the orthographic material. In particular, the presence of an altered oculo-motor pattern, preventing the automatic execution of the fine saccadic eye movements involved in reading, uncovers, in USN patients, the attentional bias also in reading single centrally presented words. |
Melissa L.-H. Võ; Jeremy M. Wolfe The interplay of episodic and semantic memory in guiding repeated search in scenes. Journal Article In: Cognition, vol. 126, no. 2, pp. 198–212, 2013. @article{Vo2013,It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Võ & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance-e.g., knowing that a faucet is usually located near a sink-that guidance by incidental episodic memory-having seen that faucet previously-was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: Decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general and particularly during search in naturalistic scenes. |
Oliver Bott The processing domain of aspectual interpretation Journal Article In: Studies in Linguistics and Philosophy, vol. 93, pp. 195–229, 2013. @article{Bott2013,In the semantic literature lexical aspect is often treated as a property of VPs or even of whole sentences. Does the interpretation of lexical aspect – contrary to the incrementality assumption commonly made in psycholinguistics – have to wait until the verb and all its arguments are present? To address this issue, we conducted an offline study, two self-paced reading experiments and an eyetracking experiment to investigate aspectual mismatch and aspectual coercion in German sentences while manipulating the position of the mismatching or coercing stimulus. Our findings provide evidence that mismatch detection and aspectual repair depend on a complete verb-argument structure. When the verb didn't receive all its (minimally required) arguments no mismatch or coercion effects showed up at the mismatching or coercing stimulus. Effects were delayed until a later point after all the arguments had been encountered. These findings have important consequences for semantic theory and for processing accounts of aspectual semantics. As far as semantic theory is concerned, it has to model lexical aspect as a supralexical property coming only into play at the sentence level. For theories of semantic processing the results are even more striking because they indicate that (at least some) semantic phenomena are processed on a more global level than it would be expected assuming incremental semantic interpretation. |
Carolin Dudschig; Jan L. Souman; Martin Lachmair; Irmgard Vega; Barbara Kaup Reading "sun" and looking up: The influence of language on saccadic eye movements in the vertical dimension Journal Article In: PLoS ONE, vol. 8, no. 2, pp. e56872, 2013. @article{Dudschig2013,Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even if the words do not directly transfer directional information. |
Frouke Hermens; Tandra Ghose; Johan Wagemans Advance information modulates the global effect even without instruction on where to look Journal Article In: Experimental Brain Research, vol. 226, no. 4, pp. 639–648, 2013. @article{Hermens2013,When observers are asked to make an eye movement to a visual target in the presence of a near distractor, their eyes tend to land on a position in between the target and the distractor, an effect known as the global effect. While it was initially believed that the global effect is a mandatory eye movement strategy, recent studies have shown that explicit instructions to make an eye movement to a certain part of the scene can overrule the effect. We here investigate whether such top-down influences are also found when people are not actively involved in an explicit eye movement task, but instead, make eye movements in the service of another task. Participants were presented with arrays of yellow and green discs, each containing a letter, and were asked to identify a target letter. Because the discs were presented away from fixation, participants made an eye movement to the array of discs on most of the trials. An analysis of the landing sites of these eye movements revealed that, even without an explicit instruction, observers take the advance information about the colour of the disc containing the target into account before moving their eyes. Moreover, when asking participants to maintain fixation for intervals of different durations, it was found that the implicit top-down influences operated on a very similar time-scale as previously observed for explicit eye movement instructions. |
Alison M. Trude; Annie Tremblay; Sarah Brown-Schmidt Limitations on adaptation to foreign accents Journal Article In: Journal of Memory and Language, vol. 69, no. 3, pp. 349–367, 2013. @article{Trude2013,Although foreign accents can be highly dissimilar to native speech, existing research suggests that listeners readily adapt to foreign accents after minimal exposure. However, listeners often report difficulty understanding non-native accents, and the time-course and specificity of adaptation remain unclear. Across five experiments, we examined whether listeners could use a newly learned feature of a foreign accent to eliminate lexical competitors during on-line speech perception. Participants heard the speech of a native English speaker and a native speaker of Québec French who, in English, pronounces /i/ as [ɪ] (e.g., weak as wick) before all consonants except voiced fricatives. We examined whether listeners could learn to eliminate a shifted /i/-competitor (e.g., weak) when hearing the accented talker produce an unshifted word (e.g., wheeze). In four experiments, adaptation was strikingly limited, though improvement across the course of the experiment and with stimulus variations indicates learning was possible. In a fifth experiment, adaptation was not improved when a native English talker produced the critical vowel shift, demonstrating that the limitation is not simply due to the fact that the accented talker was non-native. These findings suggest that although listeners can arrive at the correct interpretation of a foreign accent, this process can pose significant difficulty. |
Anastasia Kourkoulou; Gustav Kuhn; John M. Findlay; Susan R. Leekam Eye movement difficulties in autism spectrum disorder: Implications for implicit contextual learning Journal Article In: Autism Research, vol. 6, no. 3, pp. 177–189, 2013. @article{Kourkoulou2013,It is widely accepted that we use contextual information to guide our gaze when searching for an object. People with autism spectrum disorder (ASD) also utilise contextual information in this way; yet, their visual search in tasks of this kind is much slower compared with people without ASD. The aim of the current study was to explore the reason for this by measuring eye movements. Eye movement analyses revealed that the slowing of visual search was not caused by making a greater number of fixations. Instead, participants in the ASD group were slower to launch their first saccade, and the duration of their fixations was longer. These results indicate that slowed search in ASD in contextual learning tasks is not due to differences in the spatial allocation of attention but due to temporal delays in the initial, reflexive orienting of attention and in subsequent focused attention. These results have broader implications for understanding the unusual attention profile of individuals with ASD and how their attention may be shaped by learning. |
Louise Ann Leyland; Julie A. Kirkby; Barbara J. Juhasz; Alexander Pollatsek; Simon P. Liversedge The influence of word shading and word length on eye movements during reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 471–486, 2013. @article{Leyland2013,An interesting issue in reading is how parafoveal information affects saccadic targeting and fixation durations. We investigated the influence of shading selected regions of text on eye movements during reading of long and short words within sentences. A target word, either four or eight letters long, was presented in one of four shading conditions: the whole target word shaded; the first half shaded; second half shaded; no shading. There was no evidence of a visually mediated parafoveal-on-foveal effect. Saccadic targeting was modulated by the shading on the first half of the word, such that fixations landed closer to the beginning of the word than in the other three shading conditions. Furthermore, partial word shading, resulting in visual non-uniformity of the target word, produced longer gaze durations than the other conditions. Finally, readers spent more time re-reading target words when they were partially shaded than in the other two conditions. We suggest that our effects are due to targeting of the optimal viewing location and revisits to check words that appear visually unusual. Together, the results indicate robust effects of low-level visual characteristics of the word on oculomotor decisions of where and when to move the eyes during reading. |
Veronica Shi; Jie Cui; Xoana G. Troncoso; Stephen L. Macknik; Susana Martinez-Conde Effect of stimulus width on simultaneous contrast Journal Article In: PeerJ, vol. 1, pp. 1–13, 2013. @article{Shi2013,Perceived brightness of a stimulus depends on the background against which the stimulus is set, a phenomenon known as simultaneous contrast. For instance, the same gray stimulus can look light against a black background or dark against a white background. Here we quantified the perceptual strength of simultaneous contrast as a function of stimulus width. Previous studies have reported that wider stimuli result in weaker simultaneous contrast, whereas narrower stimuli result in stronger simultaneous contrast. However, no previous research has quantified this relationship. Our results show a logarithmic relationship between stimulus width and perceived brightness. This relationship is well matched by the normalized output of a Difference-of-Gaussians (DOG) filter applied to stimuli of varied widths. |
Steven L. Prime; Jonathan J. Marotta Gaze strategies during visually-guided versus memory-guided grasping Journal Article In: Experimental Brain Research, vol. 225, no. 2, pp. 291–305, 2013. @article{Prime2013,Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or a memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the two-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream. |
Christopher D. Fiorillo; Minryung R. Song; Sora R. Yun Multiphasic temporal dynamics in responses of midbrain dopamine neurons to appetitive and aversive stimuli Journal Article In: Journal of Neuroscience, vol. 33, no. 11, pp. 4710–4725, 2013. @article{Fiorillo2013,The transient response of dopamine neurons has been described as reward prediction error (RPE), with activation or suppression by events that are better or worse than expected, respectively. However, at least a minority of neurons are activated by aversive or high-intensity stimuli, casting doubt on the generality of RPE in describing the dopamine signal. To overcome limitations of previous studies, we studied neuronal responses to a wider variety of high-intensity and aversive stimuli, and we quantified and controlled aversiveness through a choice task in which macaques sacrificed juice to avoid aversive stimuli. Whereas most previous work has portrayed the RPE as a single impulse or "phase," here we demonstrate its multiphasic temporal dynamics. Aversive or high-intensity stimuli evoked a triphasic sequence of activation-suppression-activation extending over a period of 40-700 ms. The initial activation at short latencies (40-120 ms) reflected sensory intensity. The influence of motivational value became dominant between 150 and 250 ms, with activation in the case of appetitive stimuli, and suppression in the case of aversive and neutral stimuli. The previously unreported late activation appeared to be a modest "rebound" after strong suppression. Similarly, strong activation by reward was often followed by suppression. We suggest that these "rebounds" may result from overcompensation by homeostatic mechanisms in some cells. Our results are consistent with a realistic RPE, which evolves over time through a dynamic balance of excitation and inhibition. |
Daniel Mirman; Allison E. Britt; Qi Chen Effects of phonological and semantic deficits on facilitative and inhibitory consequences of item repetition in spoken word comprehension Journal Article In: Neuropsychologia, vol. 51, no. 10, pp. 1848–1856, 2013. @article{Mirman2013,Repeating a word can have both facilitative and inhibitory effects on subsequent processing. The present study investigated these dynamics by examining the facilitative and inhibitory consequences of different kinds of item repetition in two individuals with aphasia and a group of neurologically intact control participants. The two individuals with aphasia were matched on overall aphasia severity, but had deficits at different levels of processing: one with a phonological deficit and spared semantic processing, the other with a semantic deficit and spared phonological processing. Participants completed a spoken word-to-picture matching task in which they had to pick which of four object images matched the spoken word. The trials were grouped into pairs such that exactly two objects from the first trial in a pair were present on screen during the second trial in the pair. When the second trial's target was the same as the first trial's target, compared to control participants, both participants with aphasia exhibited equally larger repetition priming effects. When the second trial's target was one of the new items, the participant with a phonological deficit exhibited a significantly more negative effect (i.e., second trial response slower than first trial response) than the control participants and the participant with a semantic deficit. Simulations of a computational model confirmed that this pattern of results could arise from (1) normal residual activation being functionally more significant when overall lexical processing is slower and (2) residual phonological activation of the previous trial's target having a particularly strong inhibitory effect specifically when phonological processing is impaired because the task was phonologically-driven (the spoken input specified the target). These results provide new insights into perseveration errors and lexical access deficits in aphasia. |
Christopher D. Fiorillo; Sora R. Yun; Minryung R. Song Diversity and homogeneity in responses of midbrain dopamine neurons Journal Article In: Journal of Neuroscience, vol. 33, no. 11, pp. 4693–4709, 2013. @article{Fiorillo2013a,Dopamine neurons of the ventral midbrain have been found to signal a reward prediction error that can mediate positive reinforcement. Despite the demonstration of modest diversity at the cellular and molecular levels, there has been little analysis of response diversity in behaving animals. Here we examine response diversity in rhesus macaques to appetitive, aversive, and neutral stimuli having relative motivational values that were measured and controlled through a choice task. First, consistent with previous studies, we observed a continuum of response variability and an apparent absence of distinct clusters in scatter plots, suggesting a lack of statistically discrete subpopulations of neurons. Second, we found that a group of "sensitive" neurons tend to be more strongly suppressed by a variety of stimuli and to be more strongly activated by juice. Third, neurons in the "ventral tier" of substantia nigra were found to have greater suppression, and a subset of these had higher baseline firing rates and late "rebound" activation after suppression. These neurons could belong to a previously identified subgroup of dopamine neurons that express high levels of H-type cation channels but lack calbindin. Fourth, neurons further rostral exhibited greater suppression. Fifth, although we observed weak activation of some neurons by aversive stimuli, this was not associated with their aversiveness. In conclusion, we find a diversity of response properties, distributed along a continuum, within what may be a single functional population of neurons signaling reward prediction error. |
Mingli Song; Dapeng Tao; Chun Chen; Jiajun Bu; Yezhou Yang Color-to-gray based on chance of happening preservation Journal Article In: Neurocomputing, vol. 119, pp. 222–231, 2013. @article{Song2013b,It is important to convert color images into grayscale ones for both commercial and scientific applications, such as reducing publication costs and helping color-blind people capture the visual content and semantics of color images. Recently, a dozen algorithms have been developed for color-to-gray conversion. However, none of them considers the visual attention consistency between the color image and the converted grayscale one. Therefore, these methods may fail to convey important visual information from the original color image to the converted grayscale image. Inspired by the Helmholtz principle (Desolneux et al. 2008 [16]) that "we immediately perceive whatever could not happen by chance", we propose a new color-to-gray algorithm to solve this problem. In particular, we first define the Chance of Happening (CoH) to measure the attentional level of each pixel in a color image. Afterward, natural image statistics are introduced to estimate the CoH of each pixel. In order to preserve the CoH of the color image in the converted grayscale image, we finally cast color-to-gray conversion as a supervised dimension reduction problem and present locally sliced inverse regression, which can be efficiently solved by singular value decomposition. Experiments on both natural images and artificial pictures suggest (1) that the proposed approach makes the CoH of the color image and that of the converted grayscale image consistent and (2) that the approach is effective and efficient compared with representative baseline algorithms. In addition, it requires no human-computer interactions. |
Lok-Kin Yeung; Jennifer D. Ryan; Rosemary A. Cowell; Morgan D. Barense Recognition memory impairments caused by false recognition of novel objects Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 4, pp. 1384–1397, 2013. @article{Yeung2013,A fundamental assumption underlying most current theories of amnesia is that memory impairments arise because previously studied information either is lost rapidly or is made inaccessible (i.e., the old information appears to be new). Recent studies in rodents have challenged this view, suggesting instead that under conditions of high interference, recognition memory impairments following medial temporal lobe damage arise because novel information appears as though it has been previously seen. Here, we developed a new object recognition memory paradigm that distinguished whether object recognition memory impairments were driven by previously viewed objects being treated as if they were novel or by novel objects falsely recognized as though they were previously seen. In this indirect, eyetracking-based passive viewing task, older adults at risk for mild cognitive impairment showed false recognition to high-interference novel items (with a significant degree of feature overlap with previously studied items) but normal novelty responses to low-interference novel items (with a lower degree of feature overlap). The indirect nature of the task minimized the effects of response bias and other memory-based decision processes, suggesting that these factors cannot solely account for false recognition. These findings support the counterintuitive notion that recognition memory impairments in this memory-impaired population are not characterized by forgetting but rather are driven by the failure to differentiate perceptually similar objects, leading to the false recognition of novel objects as having been seen before. |
Pingping Liu; Xingshan Li Optimal viewing position effects in the processing of isolated Chinese words Journal Article In: Vision Research, vol. 81, pp. 45–57, 2013. @article{Liu2013a,Previous studies have found that words are identified most quickly when the eyes fixate near the word center (the Optimal Viewing Position, OVP) in alphabetic languages. Two experiments were performed to determine the presence of OVP effects during the processing of isolated Chinese words. Participants' eye movements were recorded while they performed a lexical decision task. The results suggest that Chinese readers exhibit OVP effects and that the OVP tends to be the first character for 2-character words. For 3- and 4-character words, the OVP effects appear as a U-shaped curve with a minimum towards the second character. As fixations deviate from the OVP, word processing times increase at a rate of 30–70 ms per character, and fixation duration is strongly influenced by the initial viewing position. Moreover, the present study did not observe an I-OVP effect for first fixation durations nor a fixation-duration trade-off in two-fixation cases during the processing of isolated Chinese words. |
Josephine Hartwig; Katharina M. Schnitzspahn; Matthias Kliegel; Boris M. Velichkovsky; Jens R. Helmert I see you remembering: What eye movements can reveal about process characteristics of prospective memory Journal Article In: International Journal of Psychophysiology, vol. 88, no. 2, pp. 193–199, 2013. @article{Hartwig2013,Prospective memory performance describes the delayed execution of an intended action. As this requires a mixture of memory and attentional control functions, current research aims at delineating the specific processes associated with solving a prospective memory task. Therefore, the current study measured, analysed and compared eye movements of participants who performed a prospective memory, a free viewing, and a visual search task. By keeping the prospective memory cue as well as the task context constant, we aimed at putting the processes of solving prospective memory tasks into context. The results show that, when a prospective memory task is missed, the continuous gaze behaviour is rather similar to the gaze behaviour during free viewing. When the prospective memory task is successfully solved, on the other hand, average gaze behaviour falls between free viewing and visual search. Furthermore, individual differences in eye movements were found between low and high performers. Our data suggest that a prospective memory task can be solved in different ways, and therefore different processes can be observed. |
Melanie R. Burke; P. Bramley; Claudia C. Gonzalez; D. J. McKeefry The contribution of the right supra-marginal gyrus to sequence learning in eye movements Journal Article In: Neuropsychologia, vol. 51, no. 14, pp. 3048–3056, 2013. @article{Burke2013,We investigated the role of the human right Supra-Marginal Gyrus (SMG) in the generation of learned eye movement sequences. Using MRI-guided transcranial magnetic stimulation (TMS) we disrupted neural activity in the SMG whilst human observers performed saccadic eye movements to multiple presentations of either predictable or random target sequences. For the predictable sequences we observed shorter saccadic latencies from the second presentation of the sequence. However, these anticipatory improvements in performance were significantly reduced when TMS was delivered to the right SMG during the inter-trial retention periods. No deficits were induced when TMS was delivered concurrently with the onset of the target visual stimuli. For the random version of the task, neither delivery of TMS to the SMG during the inter-trial period nor during the presentation of the target visual stimuli produced any deficit in performance that was significantly different from the no-TMS or control conditions. These findings demonstrate that neural activity within the right SMG is causally linked to the ability to perform short latency predictive saccades resulting from sequence learning. We conclude that neural activity in rSMG constitutes an instruction set with spatial and temporal directives that are retained and subsequently released for predictive motor planning and responses. |
Kirsten A. Dalrymple; Alexander K. Gray; Brielle L. Perler; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone Eyeing the eyes in social scenes: Evidence for top-down control of stimulus selection in simultanagnosia Journal Article In: Cognitive Neuropsychology, vol. 30, no. 1, pp. 25–40, 2013. @article{Dalrymple2013,Simultanagnosia is a disorder of visual attention resulting from bilateral parieto-occipital lesions. Healthy individuals look at eyes to infer people's attentional states, but simultanagnosics allocate abnormally few fixations to eyes in scenes. It is unclear why simultanagnosics fail to fixate eyes, but it might reflect that they are (a) unable to locate and fixate them, or (b) do not prioritize attentional states. We compared eye movements of simultanagnosic G.B. to those of healthy subjects viewing scenes normally or through a restricted window of vision. They described scenes and explicitly inferred attentional states of people in scenes. G.B. and subjects viewing scenes through a restricted window made few fixations on eyes when describing scenes, yet increased fixations on eyes when inferring attention. Thus G.B. understands that eyes are important for inferring attentional states and can exert top-down control to seek out and process the gaze of others when attentional states are of interest. |
Alexander C. Schutz; Felix Lossin; Dirk Kerzel Temporal stimulus properties that attract gaze to the periphery and repel gaze from fixation Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–17, 2013. @article{Schutz2013,Humans use saccadic eye movements to fixate different parts of their visual environment. While stimulus features that determine the location of the next fixation in static images have been extensively studied, temporal stimulus features that determine the timing of gaze shifts have received less attention. It is also unclear if stimulus features at the present gaze location can trigger gaze shifts to another location. To investigate these questions, we asked observers to switch their gaze between two blobs. In three different conditions, either the fixated blob, the peripheral blob, or both blobs were flickering. A time-frequency analysis of the flickering noise values, time locked to the gaze shifts, revealed significant phase locking in a time window 300 to 100 ms before the gaze shift at temporal frequencies below 20 Hz. The average phase angles at these time-frequency points indicated that observers' gaze was repelled by decreasing contrast of the fixated blob and attracted by increasing contrast of the peripheral blob. These results show that temporal properties of both fixated and peripheral stimuli are capable of triggering gaze shifts. |
Marco Marelli; Simona Amenta; Elena Angela Morone; Davide Crepaldi Meaning is in the beholder's eye: Morpho-semantic effects in masked priming Journal Article In: Psychonomic Bulletin & Review, vol. 20, no. 3, pp. 534–541, 2013. @article{Marelli2013,A substantial body of literature indicates that, at least at some level of processing, complex words are broken down into their morphemes solely on the basis of their orthographic form (e.g., Rastle, Davis, & New, Psychonomic Bulletin and Review 11:1090-1098, 2004). Recent evidence has shown that this process might not be obligatory, as indicated by the fact that morpho-orthographic effects were not found in a cross-case same-different task-that is, when lexical access was not necessarily required (Duñabeitia, Kinoshita, Carreiras, & Norris, Language and Cognitive Processes 26:509-529, 2011). In this study, we employed a task that required understanding a series of words and, thus, implied lexical access. Masked primes were shown very briefly right before the appearance of the target word; prime-target pairs entertained a morpho-semantic (dealer-DEAL), a morpho-orthographic (corner-CORN), or a purely orthographic (brothel-BROTH) relationship. Eye fixation times clearly indicated facilitation for transparent pairs, but not for opaque pairs (or for orthographic pairs, which were used as a baseline). Conversely, the usual morpho-orthographic pattern was found in a control experiment, employing a lexical decision task. These results indicate that the access to a morpho-orthographic level of representation is not always necessary for lexical identification, which challenges models of visual word identification that cannot account for task-induced effects. |
Katherine Guérard; Jean Saint-Aubin; Marilyne Maltais The role of verbal memory in regressions during reading Journal Article In: Memory & Cognition, vol. 41, no. 1, pp. 122–136, 2013. @article{Guerard2013,During reading, a number of eye movements are made backward, on words that have already been read. Recent evidence suggests that such eye movements, called regressions, are guided by memory. Several studies point to the role of spatial memory, but evidence for the role of verbal memory is more limited. In the present study, we examined the factors that modulate the role of verbal memory in regressions. Participants were required to make regressions on target words located in sentences displayed on one or two lines. Verbal interference was shown to affect regressions, but only when participants executed a regression on a word located in the first part of the sentence, irrespective of the number of lines on which the sentence was displayed. Experiments 2 and 3 showed that the effect of verbal interference on words located in the first part of the sentence disappeared when participants initiated the regression from the middle of the sentence. Our results suggest that verbal memory is recruited to guide regressions, but only for words read a longer time ago. |
Alistair J. Harvey; Wendy Kneller; Alison C. Campbell The effects of alcohol intoxication on attention and memory for visual scenes. Journal Article In: Memory, vol. 21, no. 8, pp. 969–980, 2013. @article{Harvey2013,This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes. |
Romy Müller; Jens R. Helmert; Sebastian Pannasch; Boris M. Velichkovsky Gaze transfer in remote cooperation: Is it always helpful to see what your partner is attending to? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 7, pp. 1302–1316, 2013. @article{Mueller2013,Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether and to what extent the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we let pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better when using either gaze or mouse transfer compared to speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced in delayed responses to the cursor. Also, participants tried to resolve ambiguities by engaging in more verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support. |
Ruyuan Zhang; Oh-Sang Kwon; Duje Tadin Illusory movement of stationary stimuli in the visual periphery: Evidence for a strong centrifugal prior in motion processing Journal Article In: Journal of Neuroscience, vol. 33, no. 10, pp. 4415–4423, 2013. @article{Zhang2013c,Visual input is remarkably diverse. Certain sensory inputs are more probable than others, mirroring statistical regularities of the visual environment. The visual system exploits many of these regularities, resulting, on average, in better inferences about visual stimuli. However, by incorporating prior knowledge into perceptual decisions, visual processing can also result in perceptions that do not match sensory inputs. Such perceptual biases can often reveal unique insights into underlying mechanisms and computations. For example, a prior assumption that objects move slowly can explain a wide range of motion phenomena. The prior on slow speed is usually rationalized by its match with visual input, which typically includes stationary or slow moving objects. However, this only holds for foveal and parafoveal stimulation. The visual periphery tends to be exposed to faster motions, which are biased toward centrifugal directions. Thus, if prior assumptions derive from experience, peripheral motion processing should be biased toward centrifugal speeds. Here, in experiments with human participants, we support this hypothesis and report a novel visual illusion where stationary objects in the visual periphery are perceived as moving centrifugally, while objects moving as fast as 7°/s toward the fovea are perceived as stationary. These behavioral results were quantitatively explained by a Bayesian observer that has a strong centrifugal prior. This prior is consistent with both the prevalence of centrifugal motions in the visual periphery and a centrifugal bias of direction tuning in cortical area MT, supporting the notion that visual processing mirrors its input statistics. |
Joost C. Dessing; Michael Vesia; J. Douglas Crawford The role of areas MT+/V5 and SPOC in spatial and temporal control of manual interception: An rTMS study Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 15, 2013. @article{Dessing2013,Manual interception, such as catching or hitting an approaching ball, requires the hand to contact a moving object at the right location and at the right time. Many studies have examined the neural mechanisms underlying the spatial aspects of goal-directed reaching, but the neural basis of the spatial and temporal aspects of manual interception are largely unknown. Here, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the role of the human middle temporal visual motion area (MT+/V5) and superior parieto-occipital cortex (SPOC) in the spatial and temporal control of manual interception. Participants were required to reach-to-intercept a downward moving visual target that followed an unpredictably curved trajectory, presented on a screen in the vertical plane. We found that rTMS to MT+/V5 influenced interceptive timing and positioning, whereas rTMS to SPOC only tended to increase the spatial variance in reach end points for selected target trajectories. These findings are consistent with theories arguing that distinct neural mechanisms contribute to spatial, temporal, and spatiotemporal control of manual interception. |
Alistair J. Harvey; Wendy Kneller; Alison C. Campbell The elusive effects of alcohol intoxication on visual attention and eyewitness memory Journal Article In: Applied Cognitive Psychology, vol. 27, pp. 617–624, 2013. @article{Harvey2013a,Alcohol is a contributing factor in many crimes, yet little is known of its effects on eyewitness memory and face identification. Some authors suggest that intoxication impairs attention and memory, particularly for peripheral scene information, but the data supporting this claim are limited. The present study therefore sought to determine whether (i) intoxicated participants spend less time fixating on peripheral regions of crime images than sober counterparts, (ii) less information is recognised from image regions receiving fewer gaze fixations and (iii) intoxicated participants are less able to identify the perpetrator of a crime than sober participants. Contrary to expectations, participants' ability to explore and subsequently recognise the contents of the stimulus scenes was unaffected by alcohol, suggesting that the relationship between intoxication, attention and eyewitness memory requires closer scrutiny. |
Michael Dambacher; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner Evidence for direct control of eye movements during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1468–1484, 2013. @article{Dambacher2013,It is well established that fixation durations during reading vary with processing difficulty, but there are different views on how oculomotor control, visual perception, shifts of attention, and lexical (and higher cognitive) processing are coordinated. Evidence for a one-to-one translation of input delay into saccadic latency would provide a much needed constraint for current theoretical proposals. Here, we tested predictions of such a direct-control perspective using the stimulus-onset delay (SOD) paradigm. Words in sentences were initially masked and, on fixation, were individually unmasked with a delay (0-, 33-, 66-, 99-ms SODs). In Experiment 1, SODs were constant for all words in a sentence; in Experiment 2, SODs were manipulated on target words, while nontargets were unmasked without delay. In accordance with predictions of direct control, nonzero SODs entailed equivalent increases in fixation durations in both experiments. Yet, a population of short fixations pointed to rapid saccades as a consequence of low-level information at nonoptimal viewing positions rather than of lexical processing. Implications of these results for theoretical accounts of oculomotor control are discussed. |
