All EyeLink Publications
All 11,000+ peer-reviewed EyeLink research publications up to 2022 (with some from early 2023) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Lise Van der Haegen; Marc Brysbaert
The mechanisms underlying the interhemispheric integration of information in foveal word recognition: Evidence for transcortical inhibition Journal Article
In: Brain and Language, vol. 118, no. 3, pp. 81–89, 2011.
Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex in two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: A bilateral projection of visual input in foveal vision, or interhemispheric inhibition and integration as proposed by the SERIOL model of visual word recognition. Experiment 1 made use of pairs of words and nonwords with a length of four letters each. Participants had to name the word and ignore the nonword. The visual field in which the word was presented and the distance between the word and the nonword were manipulated. The results showed that the typical right visual field advantage was observed only when the word and the nonword were clearly separated. When the distance between them became smaller, the right visual field advantage turned into a left visual field advantage, in line with the interhemispheric inhibition mechanism postulated by the SERIOL model. Experiment 2, using 5-letter stimuli, confirmed that this result was not due to the eccentricity of the word relative to the fixation location but to the distance between the word and the nonword.
Lise Van der Haegen; Qing Cai; Ruth Seurinck; Marc Brysbaert
Further fMRI validation of the visual half field technique as an indicator of language laterality: A large-group analysis Journal Article
In: Neuropsychologia, vol. 49, no. 10, pp. 2879–2888, 2011.
The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to replace the invasive Wada-test. We evaluated the usefulness of behavioral visual half field (VHF) tasks for screening a large sample of healthy left-handers. Laterality indices (LIs) calculated on the basis of the latencies in a word and picture naming VHF task were compared to the brain activity measured in a silent word generation task in fMRI (pars opercularis/BA44 and pars triangularis/BA45). Results confirmed the usefulness of the VHF-tasks as a screening device. None of the left-handed participants with clear right visual field (RVF) advantages in the picture and word naming task showed right hemisphere dominance in the scanner. In contrast, 16/20 participants with a left visual field (LVF) advantage in both word and picture naming turned out to have atypical right brain dominance. Results were less clear for participants who failed to show clear VHF asymmetries (below 20 ms RVF advantage and below 60 ms LVF advantage) or who had inconsistent asymmetries in picture and word naming. These results indicate that the behavioral tasks can mainly provide useful information about the direction of speech dominance when both VHF differences clearly point in the same direction.
Stefan Van der Stigchel; Jelmer P. De Vries; R. Bethlehem; Jan Theeuwes
A global effect of capture saccades Journal Article
In: Experimental Brain Research, vol. 210, no. 1, pp. 57–65, 2011.
When two target elements are presented in close proximity, the endpoint of a saccade is generally positioned at an intermediate location ('global effect'). Here, we investigated whether the global effect also occurs for eye movements executed to distracting elements. To this end, we adapted the oculomotor capture paradigm such that on a subset of trials, two distractors were presented. When the two distractors were closely aligned, erroneous eye movements were initiated to a location in between the two distractors. Even though to a lesser extent, this effect was also present when the two distractors were presented further apart. In a second experiment, we investigated the global effect for eye movements in the presence of two targets. A strong global effect was observed when two targets were presented closely aligned, while this effect was absent when the targets were further apart. This study shows that there is a global effect when saccades are captured by distractors. This 'capture global' effect is different from the traditional global effect that occurs when two targets are presented because the global effect of capture saccades also occurs for remote elements. The spatial dynamics of this global effect will be explained in terms of the population coding theory.
Julie A. Van Dyke; Brian McElree
Cue-dependent interference in comprehension Journal Article
In: Journal of Memory and Language, vol. 65, no. 3, pp. 247–263, 2011.
The role of interference as a primary determinant of forgetting in memory has long been accepted; however, its role as a contributor to poor comprehension is just beginning to be understood. The current paper reports two studies, in which speed-accuracy tradeoff and eye-tracking methodologies were used with the same materials to provide converging evidence for the role of syntactic and semantic cues as mediators of both proactive (PI) and retroactive interference (RI) during comprehension. Consistent with previous work (e.g., Van Dyke & Lewis, 2003), we found that syntactic constraints at the retrieval site are among the cues that drive retrieval in comprehension, and that these constraints effectively limit interference from potential distractors with semantic/pragmatic properties in common with the target constituent. The data are discussed in terms of a cue-overload account, in which interference both arises from and is mediated through a direct-access retrieval mechanism that utilizes a linear, weighted cue-combinatoric scheme.
Lore Thaler; Melvyn A. Goodale
Reaction times for allocentric movements are 35 ms slower than reaction times for target-directed movements Journal Article
In: Experimental Brain Research, vol. 211, no. 2, pp. 313–328, 2011.
Many movements that people perform every day are directed at visual targets, e.g., when we press an elevator button. However, many other movements are not target-directed, but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing or copying. Here, we show a reaction time difference between these two types of movements in four separate experiments. In Exp. 1, subjects moved their eyes freely and used direct hand movements. In Exp. 2, subjects moved their eyes freely and their movements were tool-mediated (computer mouse). In Exp. 3, subjects fixated a central target and the visual field in which visual information was presented was manipulated. Experiment 4 was identical to Exp. 3 except for the fact that visual information about targets disappeared before movement onset. In all four experiments, reaction times in the allocentric task were approximately 35 ms slower than they were in the target-directed task. We suggest that this difference in reaction time between the two tasks reflects the fact that allocentric, but not target-directed, movements recruit the ventral stream, in particular lateral occipital cortex, which increases processing time. We also observed an advantage for movements made in the lower visual field as measured by movement variability, whether those movements were allocentric or target-directed. This latter result, we argue, reflects the role of the dorsal visual stream in the online control of movements in both kinds of tasks.
Mervyn G. Thomas; Moira Crosier; Susan Lindsay; Anil Kumar; Shery Thomas; Masasuke Araki; Chris J. Talbot; Rebecca J. McLean; Mylvaganam Surendran; Katie Taylor; Bart P. Leroy; Anthony T. Moore; David G. Hunter; Richard W. Hertle; Patrick Tarpey; Andrea Langmann; Susanne Lindner; Martina Brandner; Irene Gottlob
The clinical and molecular genetic features of idiopathic infantile periodic alternating nystagmus Journal Article
In: Brain, vol. 134, no. 3, pp. 892–902, 2011.
Periodic alternating nystagmus consists of involuntary oscillations of the eyes with cyclical changes of nystagmus direction. It can occur during infancy (e.g. idiopathic infantile periodic alternating nystagmus) or later in life. Acquired forms are often associated with cerebellar dysfunction arising due to instability of the optokinetic-vestibular systems. Idiopathic infantile periodic alternating nystagmus can be familial or occur in isolation; however, very little is known about the clinical characteristics, genetic aetiology and neural substrates involved. Five loci (NYS1-5) have been identified for idiopathic infantile nystagmus; three are autosomal (NYS2, NYS3 and NYS4) and two are X-chromosomal (NYS1 and NYS5). We previously identified the FRMD7 gene on chromosome Xq26 (NYS1 locus); mutations of FRMD7 are causative of idiopathic infantile nystagmus influencing neuronal outgrowth and development. It is unclear whether the periodic alternating nystagmus phenotype is linked to NYS1, NYS5 (Xp11.4-p11.3) or a separate locus. From a cohort of 31 X-linked families and 14 singletons (70 patients) with idiopathic infantile nystagmus we identified 10 families and one singleton (21 patients) with periodic alternating nystagmus of which we describe clinical phenotype, genetic aetiology and neural substrates involved. Periodic alternating nystagmus was not detected clinically but only on eye movement recordings. The cycle duration varied from 90 to 280 s. Optokinetic reflex was not detectable horizontally. Mutations of the FRMD7 gene were found in all 10 families and the singleton (including three novel mutations). Periodic alternating nystagmus was predominantly associated with missense mutations within the FERM domain. There was significant sibship clustering of the phenotype although in some families not all affected members had periodic alternating nystagmus. 
In situ hybridization studies during mid-late human embryonic stages in normal tissue showed restricted FRMD7 expression in neuronal tissue with strong hybridization signals within the afferent arms of the vestibulo-ocular reflex consisting of the otic vesicle, cranial nerve VIII and vestibular ganglia. Similarly within the afferent arm of the optokinetic reflex we showed expression in the developing neural retina and ventricular zone of the optic stalk. Strong FRMD7 expression was seen in rhombomeres 1 to 4, which give rise to the cerebellum and the common integrator site for both these reflexes (vestibular nuclei). Based on the expression and phenotypic data, we hypothesize that periodic alternating nystagmus arises from instability of the optokinetic-vestibular systems. This study shows for the first time that mutations in FRMD7 can cause idiopathic infantile periodic alternating nystagmus and may affect neuronal circuits that have been implicated in acquired forms.
Mervyn G. Thomas; Irene Gottlob; Rebecca J. McLean; Gail Maconachie; Anil Kumar; Frank A. Proudlock
Reading strategies in infantile nystagmus syndrome Journal Article
In: Investigative Ophthalmology & Visual Science, vol. 52, no. 11, pp. 8156–8165, 2011.
PURPOSE: The adaptive strategies adopted by individuals with infantile nystagmus syndrome (INS) during reading are not clearly understood. Eye movement recordings were used to identify ocular motor strategies used by patients with INS during reading. METHODS: Eye movements were recorded at 500 Hz in 25 volunteers with INS and 7 controls when reading paragraphs of text centered at horizontal gaze angles of -20°, -10°, 0°, 10°, and 20°. At each location, reading speeds were measured, along with logMAR visual acuity and nystagmus during gaze-holding. Adaptive strategies were identified from slow and quick-phase patterns in the nystagmus waveform. RESULTS: Median reading speeds were 204.3 words per minute in individuals with INS and 273.6 words per minute in controls. Adaptive strategies included (1) suppression of corrective quick phases allowing involuntary slow phases to achieve the desired goal, (2) voluntarily changing the character of the involuntary slow phases using quick phases, and (3) correction of involuntary slow phases using quick phases. Several individuals with INS read more rapidly than healthy control volunteers. CONCLUSIONS: These findings demonstrate that volunteers with INS learn to manipulate their nystagmus using a range of strategies to acquire visual information from the text. These strategies include taking advantage of the stereotypical and periodic nature of involuntary eye movements to allow the involuntary eye movements to achieve the desired goal. The versatility of these adaptations yields reading speeds in those with nystagmus that are often much better than might be expected, given the degree of foveal and ocular motor deficits.
Debra Titone; Maya R. Libben; Julie Mercier; Veronica Whitford; Irina Pivneva
Bilingual lexical access during L1 sentence reading: The effects of L2 knowledge, semantic constraint, and L1-L2 intermixing Journal Article
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 6, pp. 1412–1431, 2011.
Libben and Titone (2009) recently observed that cognate facilitation and interlingual homograph interference were attenuated by increased semantic constraint during bilingual second language (L2) reading, using eye movement measures. We now investigate whether cross-language activation also occurs during first language (L1) reading as a function of age of L2 acquisition and task demands (i.e., inclusion of L2 sentences). In Experiment 1, participants read high and low constraint English (L1) sentences containing interlingual homographs, cognates, or control words. In Experiment 2, we included French (L2) filler sentences to increase salience of the L2 during L1 reading. The results suggest that bilinguals reading in their L1 show nonselective activation to the extent that they acquired their L2 early in life. Similar to our previous work on L2 reading, high contextual constraint attenuated cross-language activation for cognates. The inclusion of French filler items promoted greater cross-language activation, especially for late stage reading measures. Thus, L1 bilingual reading is modulated by L2 knowledge, semantic constraint, and task demands.
K. Torab; T. S. Davis; D. J. Warren; Paul A. House; R. A. Normann; Bradley Greger
Multiple factors may influence the performance of a visual prosthesis based on intracortical microstimulation: Nonhuman primate behavioural experimentation Journal Article
In: Journal of Neural Engineering, vol. 8, no. 3, pp. 1–13, 2011.
We hypothesize that a visual prosthesis capable of evoking high-resolution visual perceptions can be produced using high-electrode-count arrays of penetrating microelectrodes implanted into the primary visual cortex of a blind human subject. To explore this hypothesis, and as a prelude to human psychophysical experiments, we have conducted a set of experiments in primary visual cortex (V1) of non-human primates using chronically implanted Utah Electrode Arrays (UEAs). The electrical and recording properties of implanted electrodes, the high-resolution visuotopic organization of V1, and the stimulation levels required to evoke behavioural responses were measured. The impedances of stimulated electrodes were found to drop significantly immediately following stimulation sessions, but these post-stimulation impedances returned to pre-stimulation values by the next experimental session. Two months of periodic microstimulation at currents of up to 96 µA did not impair the mapping of receptive fields from local field potentials or multi-unit activity, or impact behavioural visual thresholds of light stimuli that excited regions of V1 that were implanted with UEAs. These results demonstrate that microstimulation at the levels used did not cause functional impairment of the electrode array or the neural tissue. However, microstimulation with current levels ranging from 18 to 76 µA (46 ± 19 µA, mean ± std) was able to elicit behavioural responses on eight out of 82 systematically stimulated electrodes. We suggest that the ability of microstimulation to evoke phosphenes and elicit a subsequent behavioural response may depend on several factors: the location of the electrode tips within the cortical layers of V1, distance of the electrode tips to neuronal somata, and the inability of nonhuman primates to recognize and respond to a generalized set of evoked percepts.
Tamara L. Watson; B. Krekelberg
An equivalent noise investigation of saccadic suppression Journal Article
In: Journal of Neuroscience, vol. 31, no. 17, pp. 6535–6541, 2011.
Visual stimuli presented just before or during an eye movement are more difficult to detect than those same visual stimuli presented during fixation. This laboratory phenomenon, behavioral saccadic suppression, is thought to underlie the everyday experience of not perceiving the motion created by our own eye movements (saccadic omission). At the neural level, many cortical and subcortical areas respond differently to perisaccadic visual stimuli than to stimuli presented during fixation. Those neural response changes, however, are complex and the link to the behavioral phenomena of reduced detectability remains tentative. We used a well-established model of human visual detection performance to provide a quantitative description of behavioral saccadic suppression and thereby allow a more focused search for its neural mechanisms. We used an equivalent noise method to distinguish between three mechanisms that could underlie saccadic suppression. The first hypothesized mechanism reduces the gain of the visual system, the second increases internal noise levels in a stimulus-dependent manner, and the third increases stimulus uncertainty. All three mechanisms predict that perisaccadic stimuli should be more difficult to detect, but each mechanism predicts a unique pattern of detectability as a function of the amount of external noise. Our experimental finding was that saccades increased detection thresholds at low external noise, but had little influence on thresholds at high levels of external noise. A formal analysis of these data in the equivalent noise analysis framework showed that the most parsimonious mechanism underlying saccadic suppression is a stimulus-independent reduction in response gain.
Matthew David Weaver; Johan Lauwereyns
Attentional capture and hold: the oculomotor correlates of the change detection advantage for faces Journal Article
In: Psychological Research, vol. 75, no. 1, pp. 10–23, 2011.
The present study investigated the influence of semantic information on overt attention. Semantic influence on attentional capture and hold mechanisms was explored by measuring oculomotor correlates of the reaction time (RT) and accuracy advantage for faces in the change detection task. We also examined whether the face advantage was due to mandatory processing of faces or an idiosyncratic strategy by participants, by manipulating preknowledge of the object category in which to expect a change. An RT and accuracy advantage was found for detecting changes in faces compared to other objects of less social and biological significance, in the form of greater attentional capture and hold. The faster attentional capture by faces appeared to overcompensate for the longer hold, to produce faster and more accurate manual responses. Preknowledge did not eliminate the face advantage, suggesting that faces receive mandatory processing when competing for attention with stimuli of less sociobiological salience.
Matthew David Weaver; Johan Lauwereyns; Jan Theeuwes
The effect of semantic information on saccade trajectory deviations Journal Article
In: Vision Research, vol. 51, no. 10, pp. 1124–1128, 2011.
In recent years, many studies have explored the conditions in which irrelevant visual distractors affect saccade trajectories. These previous studies mainly focused on low-level stimulus characteristics and how they affect the magnitude of curvature. The present study explored the possible effect of high-level semantic information on saccade curvature. Semantic saliency was manipulated by presenting irrelevant peripheral taboo versus neutral cue words in a spatial cuing paradigm that allowed for the measurement of trajectory deviations. Findings showed larger saccade trajectory deviations away from taboo (versus neutral) cue words when making a saccade towards another location. This indicates that due to their high semantic saliency, more inhibition was necessary at taboo cue locations to effectively suppress them as competing saccade targets.
Alice K. Welham; Andy J. Wills
Unitization, similarity, and overt attention in categorization and exposure Journal Article
In: Memory and Cognition, vol. 39, no. 8, pp. 1518–1533, 2011.
Unitization, the creation of new stimulus features by the fusion of preexisting features, is one of the hypothesized processes of perceptual learning (Goldstone Annual Review of Psychology, 49:585-612, 1998). Some argue that unitization occurs to the extent that it is required for successful task performance (e.g., Shiffrin & Lightfoot, 1997), while others argue that unitization is largely independent of functionality (e.g., McLaren & Mackintosh Animal Learning & Behavior, 30:177-200, 2000). Across three experiments, employing supervised category learning and unsupervised exposure, we investigated three predictions of the McLaren and Mackintosh (Animal Learning & Behavior, 30:177-200, 2000) model: (1) Unitization is accompanied by an initial increase in the subjective similarity of stimuli sharing a unitized component; (2) unitization of a configuration occurs through exposure to its components, even when the task does not require it; (3) as unitization approaches completion, salience of the unitized component may be reduced. Our data supported these predictions. We also found that unitization is associated with increases in overt attention to the unitized component, as measured through eye tracking.
Jessica Werthmann; Anne Roefs; Chantal Nederkoorn; Karin Mogg; Brendan P. Bradley; Anita Jansen
Can(not) take my eyes off it: Attention bias for food in overweight participants Journal Article
In: Health Psychology, vol. 30, no. 5, pp. 561–569, 2011.
Objective: The aim of the current study was to investigate attention biases for food cues, craving, and overeating in overweight and healthy-weight participants. Specifically, it was tested whether attention allocation processes toward high-fat foods differ between overweight and normal weight individuals and whether selective attention biases for food cues are related to craving and food intake. Method: Eye movements were recorded as a direct index of attention allocation in a sample of 22 overweight/obese and 29 healthy-weight female students during a visual probe task with food pictures. In addition, self-reported craving and actual food intake during a bogus "taste-test" were assessed. Results: Overweight participants showed an approach-avoidance pattern of attention allocation toward high-fat food. Overweight participants directed their first gaze more often toward food pictures than healthy-weight individuals, but subsequently showed reduced maintenance of attention on these pictures. For overweight participants, craving was related to initial orientation toward food. Moreover, overweight participants consumed significantly more snack food than healthy-weight participants. Conclusion: Results emphasize the importance of identifying different attention bias components in overweight individuals with regard to craving and subsequent overeating.
Gregory L. West; Naseem Al-Aidroos; Josh Susskind; Jay Pratt
Emotion and action: The effect of fear on saccadic performance Journal Article
In: Experimental Brain Research, vol. 209, no. 1, pp. 153–158, 2011.
According to evolutionary accounts, emotions originated to prepare an organism for action (Darwin 1872; Frijda 1986). To investigate this putative relationship between emotion and action, we examined the effect of an emotional stimulus on oculomotor actions controlled by the superior colliculus (SC), which has connections with subcortical structures involved in the perceptual prioritization of emotion, such as the amygdala through the pulvinar. The pulvinar connects the amygdala to cells in the SC responsible for the speed of saccade execution, while not affecting the spatial component of the saccade. We tested the effect of emotion on both temporal and spatial signatures of oculomotor functioning using a gap-distractor paradigm. Changes in spatial programming were examined through saccadic curvature in response to a remote distractor stimulus, while changes in temporal execution were examined using a fixation gap manipulation. We show that following the presentation of a task-irrelevant fearful face, the temporal but not the spatial component of the saccade generation system was affected.
Sarah J. White; Tessa Warren; Erik D. Reichle
Parafoveal preview during reading: Effects of sentence position Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1221–1238, 2011.
Two experiments examined parafoveal preview for words located in the middle of sentences and at sentence boundaries. Parafoveal processing was shown to occur for words at sentence-initial, mid-sentence, and sentence-final positions. Both Experiments 1 and 2 showed reduced effects of preview on regressions out for sentence-initial words. In addition, Experiment 2 showed reduced preview effects on first-pass reading times for sentence-initial words. These effects of sentence position on preview could result from either reduced parafoveal processing for sentence-initial words or other processes specific to word reading at sentence boundaries. In addition to the effects of preview, the experiments also demonstrate variability in the effects of sentence wrap-up on different reading measures, indicating that the presence and time course of wrap-up effects may be modulated by text-specific factors. We also report simulations of Experiment 2 using version 10 of E-Z Reader (Reichle, Warren, & McConnell, 2009), designed to explore the possible mechanisms underlying parafoveal preview at sentence boundaries.
Wieske van Zoest; Amelia R. Hunt
Saccadic eye movements and perceptual judgments reveal a shared visual representation that is increasingly accurate over time Journal Article
In: Vision Research, vol. 51, no. 1, pp. 111–119, 2011.
Although there is evidence to suggest visual illusions affect perceptual judgments more than actions, many studies have failed to detect task-dependent dissociations. In two experiments we attempt to resolve the contradiction by exploring the time-course of visual illusion effects on both saccadic eye movements and perceptual judgments, using the Judd illusion. The results showed that, regardless of whether a saccadic response or a perceptual judgment was made, the illusory bias was larger when responses were based on less information, that is, when saccadic latencies were short, or display duration was brief. The time-course of the effect was similar for both the saccadic responses and perceptual judgments, suggesting that both modes may be driven by a shared visual representation. Changes in the strength of the illusion over time also highlight the importance of controlling for the latency of different response systems when evaluating possible dissociations between them.
Joris Vangeneugden; Patrick A. De Maziere; Marc M. Van Hulle; Tobias Jaeggli; Luc Van Gool; Rufin Vogels
Distinct mechanisms for coding of visual actions in macaque temporal cortex Journal Article
In: Journal of Neuroscience, vol. 31, no. 2, pp. 385–401, 2011.
Temporal cortical neurons are known to respond to visual dynamic-action displays. Many human psychophysical and functional imaging studies examining biological motion perception have used treadmill walking, in contrast to previous macaque single-cell studies. We assessed the coding of locomotion in rhesus monkey (Macaca mulatta) temporal cortex using movies of stationary walkers, varying both form and motion (i.e., different facing directions) or varying only the frame sequence (i.e., forward vs backward walking). The majority of superior temporal sulcus and inferior temporal neurons were selective for facing direction, whereas a minority distinguished forward from backward walking. Support vector machines using the temporal cortical population responses as input classified facing direction well, but forward and backward walking less so. Classification performance for the latter improved markedly when the within-action response modulation was considered, reflecting differences in momentary body poses within the locomotion sequences. Responses to static pose presentations predicted the responses during the course of the action. Analyses of the responses to walking sequences wherein the start frame was varied across trials showed that some neurons also carried a snapshot sequence signal. Such sequence information was present in neurons that responded to static snapshot presentations and in neurons that required motion. Our data suggest that actions are analyzed by temporal cortical neurons using distinct mechanisms. Most neurons predominantly signal momentary pose. In addition, temporal cortical neurons, including those responding to static pose, are sensitive to pose sequence, which can contribute to the signaling of learned action sequences.
Shravan Vasishth; Heiner Drenhaus
Locality in German Journal Article
In: Dialogue and Discourse, vol. 2, no. 1, pp. 59–82, 2011.
Three experiments (self-paced reading, eyetracking and an ERP study) show that in relative clauses, increasing the distance between the relativized noun and the relative-clause verb makes it more difficult to process the relative-clause verb (the so-called locality effect). This result is consistent with the predictions of several theories (Gibson, 2000; Lewis and Vasishth, 2005), and contradicts the recent claim (Levy, 2008) that in relative-clause structures increasing argument-verb distance makes processing easier at the verb. Levy's expectation-based account predicts that the expectation for a verb becomes sharper as distance is increased and therefore processing becomes easier at the verb. We argue that, in addition to expectation effects (which are seen in the eyetracking study in first-pass regression probability), processing load also increases with increasing distance. This contradicts Levy's claim that heightened expectation leads to lower processing cost. Dependency-resolution cost and expectation-based facilitation are jointly responsible for determining processing cost.
B. -E. Verhoef; Rufin Vogels; Peter Janssen
Synchronization between the end stages of the dorsal and the ventral visual stream Journal Article
In: Journal of Neurophysiology, vol. 105, no. 5, pp. 2030–2042, 2011.
The end stage areas of the ventral (IT) and the dorsal (AIP) visual streams encode the shape of disparity-defined three-dimensional (3D) surfaces. Recent anatomical tracer studies have found direct reciprocal connections between the 3D-shape selective areas in IT and AIP. Whether these anatomical connections are used to facilitate 3D-shape perception is still unknown. We simultaneously recorded multi-unit activity (MUA) and local field potentials in IT and AIP while monkeys discriminated between concave and convex 3D shapes and measured the degree to which the activity in IT and AIP synchronized during the task. We observed strong beta-band synchronization between IT and AIP preceding stimulus onset that decreased shortly after stimulus onset and became modulated by stereo-signal strength and stimulus contrast during the later portion of the stimulus period. The beta-coherence modulation was unrelated to task-difficulty, regionally specific, and dependent on the MUA selectivity of the pairs of sites under study. The beta-spike-field coherence in AIP predicted the upcoming choice of the monkey. Several convergent lines of evidence suggested AIP as the primary source of the AIP-IT synchronized activity. The synchronized beta activity seemed to occur during perceptual anticipation and when the system has stabilized to a particular perceptual state but not during active visual processing. Our findings demonstrate for the first time that synchronized activity exists between the end stages of the dorsal and ventral stream during 3D-shape discrimination.
Marine Vernet; Qing Yang; Zoï Kapoula
Guiding binocular saccades during reading: A TMS study of the PPC Journal Article
In: Frontiers in Human Neuroscience, vol. 5, pp. 14, 2011.
Reading is an activity based on complex sequences of binocular saccades and fixations. During saccades, the eyes do not move together perfectly: a saccade can end with a misalignment that compromises fused vision. During fixations, small disconjugate drift can partly reduce this misalignment. We hypothesized that maintaining eye alignment during reading involves active monitoring by the posterior parietal cortex (PPC); this goes against traditional views that consider only downstream binocular control. Nine young adults read a text; transcranial magnetic stimulation (TMS) was applied over the PPC every 5 ± 0.2 s. Eye movements were recorded binocularly with the EyeLink II. Stimulation had three major effects: (1) disturbance of eye alignment during fixation; (2) increased saccade disconjugacy leading to eye misalignment; and (3) decreased reduction of eye misalignment during fixation drift. The effects depended on the side of stimulation; the right PPC was more involved in maintaining alignment over the motor sequence. Thus, the PPC is actively involved in the control of binocular eye alignment during reading, allowing clear vision. Cortical activation during reading thus reflects not only linguistic processes but also motor control per se. The study may be of interest for understanding the deficits of binocular coordination encountered in several populations, e.g., children with dyslexia.
Eduardo Vidal-Abarca; Tomás Martinez; Ladislao Salmerón; Raquel Cerdán; Ramiro Gilabert; Laura Gil; Amelia Mañá; Ana C. Llorens; Ricardo Ferris
Recording online processes in task-oriented reading with Read&Answer Journal Article
In: Behavior Research Methods, vol. 43, no. 1, pp. 179–192, 2011.
We present Read&Answer, an application for studying task-oriented reading processes. The application mimics paper-and-pencil situations in which a reader interacts with one or more documents to perform a specific task, such as answering questions, writing an essay, or similar activities. Read&Answer presents documents and questions under a mask. The reader unmasks documents and questions so that only one piece of information is available at a time. In this way, the entire interaction between the reader and the documents is recorded and can be analyzed. We describe Read&Answer and present its applications for research and assessment. Finally, we report two studies that compared readers' performance on Read&Answer with students' reading times and comprehension levels on a paper-and-pencil task, and on a computer task recorded with eye tracking. Read&Answer produced similar comprehension scores, although it changed the pattern of reading times.
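The masking-and-logging idea behind such a tool is simple to sketch. The following is a minimal, hypothetical illustration (the class and method names, and the timings, are inventions for illustration, not the actual Read&Answer implementation): only one segment is visible at a time, and every unmask event is timestamped so that per-segment reading times can be reconstructed afterwards.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MaskedReader:
    """Toy Read&Answer-style logger: one segment visible at a time,
    every unmask event timestamped for later reading-time analysis."""
    segments: list
    clock: callable = time.monotonic
    log: list = field(default_factory=list)

    def unmask(self, index):
        """Reveal one segment (re-masking the previous one) and log it."""
        self.log.append((self.clock(), index))
        return self.segments[index]

    def reading_times(self, end_time):
        """Per-visit reading times, reconstructed from the event log."""
        times = [t for t, _ in self.log] + [end_time]
        return [(idx, times[i + 1] - times[i])
                for i, (_, idx) in enumerate(self.log)]

# Deterministic demo with a fake clock (times in seconds)
ticks = iter([0.0, 2.5])
reader = MaskedReader(["passage", "question"], clock=lambda: next(ticks))
reader.unmask(0)
reader.unmask(1)
print(reader.reading_times(end_time=6.0))  # [(0, 2.5), (1, 3.5)]
```

Because the clock is injected, the same class can log wall-clock times in an experiment and fixed times in a test.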
Eleonora Vig; Michael Dorr; Thomas Martinetz; Erhardt Barth
Eye movements show optimal average anticipation with natural dynamic scenes Journal Article
In: Cognitive Computation, vol. 3, no. 1, pp. 79–88, 2011.
A less studied component of gaze allocation in dynamic real-world scenes is the time lag of eye movements in responding to dynamic attention-capturing events. Despite the vast amount of research on anticipatory gaze behaviour in natural situations, such as action execution and observation, little is known about the predictive nature of eye movements when viewing different types of natural or realistic scene sequences. In the present study, we quantify the degree of anticipation during the free viewing of dynamic natural scenes. The cross-correlation analysis of image-based saliency maps with an empirical saliency measure derived from eye movement data reveals the existence of predictive mechanisms responsible for a near-zero average lag between dynamic changes of the environment and the responding eye movements. We also show that the degree of anticipation is reduced when moving away from natural scenes by introducing camera motion, jump cuts, and film-editing.
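The core of such a cross-correlation analysis can be sketched in a few lines. Below is a rough, self-contained illustration (not the authors' analysis code; the function name and the synthetic signals are invented) of estimating the lag at which two saliency time courses are maximally correlated:

```python
import numpy as np

def peak_lag(model_saliency, empirical_saliency):
    """Lag (in samples) at which the cross-correlation of two equally
    long, mean-centred time courses peaks. A positive lag means the
    empirical signal trails the model; near zero suggests anticipation."""
    a = model_saliency - np.mean(model_saliency)
    b = empirical_saliency - np.mean(empirical_saliency)
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-(len(a) - 1), len(b))
    return lags[np.argmax(xcorr)]

# Synthetic check: the "empirical" signal is the model delayed by 5 samples
rng = np.random.default_rng(0)
model = rng.standard_normal(500)
empirical = np.concatenate([np.zeros(5), model[:-5]])
lag = peak_lag(model, empirical)
print(lag)  # 5
```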
Melissa L.-H. Võ; John M. Henderson
Object-scene inconsistencies do not capture gaze: evidence from the flash-preview moving-window paradigm Journal Article
In: Attention, Perception, and Psychophysics, vol. 73, no. 6, pp. 1742–1753, 2011.
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.
An exploration of visual behaviour in eyewitness identification tests Journal Article
In: Applied Cognitive Psychology, vol. 25, no. 2, pp. 244–254, 2011.
The contribution of internal (eyes, nose and mouth) and external (hair-line, cheek and jaw-line) features across eyewitness identification tests was examined using eye tracking. In Experiment 1, participants studied faces and were tested with lineups, either simultaneous (test faces presented in an array) or sequential (test faces presented one at a time). In Experiment 2, the recognition of previously studied faces was tested in a showup (a suspect face alone was presented). Results indicated that foils were analysed for a shorter period of time in the simultaneous compared to the sequential condition, whereas a positively identified face was analysed for a comparable period of time across lineup procedures. In simultaneous lineups and showups, a greater proportion of time was spent analysing internal features of the test faces compared to sequential lineups. Different decision processes across eyewitness identification tests are inferred based on the results.
Heather Flowe; Garrison W. Cottrell
An examination of simultaneous lineup identification decision processes using eye tracking Journal Article
In: Applied Cognitive Psychology, vol. 25, pp. 443–451, 2011.
Decision processes in simultaneous lineups (an array of faces in which a 'suspect' face is displayed along with foil faces) were examined using eye tracking to capture the length and number of times that individual faces were visually analysed. The similarity of the lineup target face relative to the study face was manipulated, and face dwell times on the first visit and on return visits to the individual lineup faces were measured. On first visits, positively identified faces were examined for a longer duration compared to faces that were not identified. When no face was identified from the lineup, the suspect was visited for a longer duration compared to a foil face. On return visits, incorrectly identified faces were examined for a longer duration and visited more often compared to correctly identified faces. The results indicate that lineup decisions can be predicted by face dwell time and the number of visits made to faces.
Angélica Pérez Fornos; Jörg Sommerhalder; Marco Pelizzone
Reading with a simulated 60-channel implant Journal Article
In: Frontiers in Neuroscience, vol. 5, pp. 57, 2011.
First generation retinal prostheses containing 50-60 electrodes are currently in clinical trials. The purpose of this study was to evaluate the theoretical upper limit (best possible) reading performance attainable with a state-of-the-art 60-channel retinal implant and to find the optimum viewing conditions for the task. Four normal volunteers performed full-page text reading tasks with a low-resolution, 60-pixel viewing window that was stabilized in the central visual field. Two parameters were systematically varied: (1) spatial resolution (image magnification) and (2) the orientation of the rectangular viewing window. Performance was measured in terms of reading accuracy (% of correctly read words) and reading rates (words/min). Maximum reading performances were reached at spatial resolutions between 3.6 and 6 pixels/char. Performance declined outside this range for all subjects. In optimum viewing conditions (4.5 pixels/char), subjects achieved almost perfect reading accuracy and mean reading rates of 26 words/min for the vertical viewing window and of 34 words/min for the horizontal viewing window. These results suggest that, theoretically, some reading abilities can be restored with current state-of-the-art retinal implant prototypes if "image magnification" is within an "optimum range." Future retinal implants providing higher pixel resolutions, and thus a wider visual span, might allow faster reading rates.
Tom Foulsham; Rana Alan; Alan Kingstone
Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance Journal Article
In: Attention, Perception, and Psychophysics, vol. 73, no. 7, pp. 2008–2025, 2011.
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.
Tom Foulsham; Jason J. S. Barton; Alan Kingstone; Richard Dewhurst; Geoffrey Underwood
Modeling eye movements in visual agnosia with a saliency map approach: Bottom-up guidance or top-down strategy? Journal Article
In: Neural Networks, vol. 24, no. 6, pp. 665–677, 2011.
Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantify the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of the human visual system, play a vital role in eye movement control, whether we know what we are looking at or not.
Tom Foulsham; Robert Teszka; Alan Kingstone
Saccade control in natural images is shaped by the information visible at fixation: Evidence from asymmetric gaze-contingent windows Journal Article
In: Attention, Perception, and Psychophysics, vol. 73, no. 1, pp. 266–283, 2011.
When people view images, their saccades are predominantly horizontal and show a positively skewed distribution of amplitudes. How are these patterns affected by the information close to fixation and the features in the periphery? We recorded saccades while observers encoded a set of scenes with a gaze-contingent window at fixation: Features inside a rectangular (Experiment 1) or elliptical (Experiment 2) window were intact; peripheral background was masked completely or blurred. When the window was asymmetric, with more information preserved either horizontally or vertically, saccades tended to follow the information within the window, rather than exploring unseen regions, which runs counter to the idea that saccades function to maximize information gain on each fixation. Window shape also affected fixation and amplitude distributions, but horizontal windows had less of an impact. The findings suggest that saccades follow the features currently being processed and that normal vision samples these features from a horizontally elongated region.
Tom Foulsham; Geoffrey Underwood
If visual saliency predicts search, then why? Evidence from normal and gaze-contingent search tasks in natural scenes Journal Article
In: Cognitive Computation, vol. 3, no. 1, pp. 48–63, 2011.
The Itti and Koch (Vision Research, 40, 1489–1506, 2000) saliency map model has inspired a wealth of research testing the claim that bottom-up saliency determines the placement of eye fixations in natural scenes. Although saliency seems to correlate with (although not necessarily cause) fixation in free-viewing or encoding tasks, it has been suggested that visual saliency can be overridden in a search task, with saccades being planned on the basis of target features, rather than being captured by saliency. Here, we find that target regions of a scene that are salient according to this model are found more quickly than control regions (Experiment 1). However, this does not seem to be altered by filtering features in the periphery using a gaze-contingent display (Experiment 2), and a deeper analysis of the eye movements made suggests that the saliency effect is instead due to the meaning of the scene regions. Experiment 3 supports this interpretation, showing that scene inversion reduces the saliency effect. These results suggest that saliency effects on search may have nothing to do with bottom-up saccade guidance.
Tom Foulsham; Esther Walker; Alan Kingstone
The where, what and when of gaze allocation in the lab and the natural environment Journal Article
In: Vision Research, vol. 51, no. 17, pp. 1920–1931, 2011.
How do people distribute their visual attention in the natural environment? We and our colleagues have usually addressed this question by showing pictures, photographs or videos of natural scenes under controlled conditions and recording participants' eye movements as they view them. In the present study, we investigated whether people distribute their gaze in the same way when they are immersed and moving in the world compared to when they view video clips taken from the perspective of a walker. Participants wore a mobile eye tracker while walking to buy a coffee, a trip that required a short walk outdoors through the university campus. They subsequently watched first-person videos of the walk in the lab. Our results focused on where people directed their eyes and their head, what objects were gazed at and when attention-grabbing items were selected. Eye movements were more centralised in the real world, and locations around the horizon were selected with head movements. Other pedestrians, the path, and objects in the distance were looked at often in both the lab and the real world. However, there were some subtle differences in how and when these items were selected. For example, pedestrians close to the walker were fixated more often when viewed on video than in the real world. These results provide a crucial test of the relationship between real behaviour and eye movements measured in the lab.
Jeremy Freeman; G. J. Brouwer; David J. Heeger; Elisha P. Merriam
Orientation decoding depends on maps, not columns Journal Article
In: Journal of Neuroscience, vol. 31, no. 13, pp. 4792–4804, 2011.
The representation of orientation in primary visual cortex (V1) has been examined at a fine spatial scale corresponding to the columnar architecture. We present functional magnetic resonance imaging (fMRI) measurements providing evidence for a topographic map of orientation preference in human V1 at a much coarser scale, in register with the angular-position component of the retinotopic map of V1. This coarse-scale orientation map provides a parsimonious explanation for why multivariate pattern analysis methods succeed in decoding stimulus orientation from fMRI measurements, challenging the widely held assumption that decoding results reflect sampling of spatial irregularities in the fine-scale columnar architecture. Decoding stimulus attributes and cognitive states from fMRI measurements has proven useful for a number of applications, but our results demonstrate that the interpretation cannot assume decoding reflects or exploits columnar organization.
Jeremy Freeman; Eero P. Simoncelli
Metamers of the ventral stream Journal Article
In: Nature Neuroscience, vol. 14, no. 9, pp. 1195–1204, 2011.
The human capacity to recognize complex visual patterns emerges in a sequence of brain areas known as the ventral stream, beginning with primary visual cortex (V1). We developed a population model for mid-ventral processing, in which nonlinear combinations of V1 responses are averaged in receptive fields that grow with eccentricity. To test the model, we generated novel forms of visual metamers, stimuli that differ physically but look the same. We developed a behavioral protocol that uses metameric stimuli to estimate the receptive field sizes in which the model features are represented. Because receptive field sizes change along the ventral stream, our behavioral results can identify the visual area corresponding to the representation. Measurements in human observers implicate visual area V2, providing a new functional account of neurons in this area. The model also explains deficits of peripheral vision known as crowding, and provides a quantitative framework for assessing the capabilities and limitations of everyday vision.
Hans Peter Frey; Kerstin Wirz; Verena Willenbockel; Torsten Betz; Cornell Schreiber; Tom Troscianko; Peter König
Beyond correlation: Do color features influence attention in rainforest? Journal Article
In: Frontiers in Human Neuroscience, vol. 5, pp. 36, 2011.
Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red-green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red-green color-contrast. The effects of blue-yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red-green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red-green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion.
Jared Frey; Dario L. Ringach
Binocular eye movements evoked by self-induced motion parallax Journal Article
In: Journal of Neuroscience, vol. 31, no. 47, pp. 17069–17073, 2011.
Perception often triggers actions, but actions may sometimes be necessary to evoke percepts. This is most evident in the recovery of depth by self-induced motion parallax. Here we show that depth information derived from one's movement through a stationary environment evokes binocular eye movements consistent with the perception of three-dimensional shape. Human subjects stood in front of a display and viewed a simulated random-dot sphere presented monocularly or binocularly. Eye movements were recorded by a head-mounted eye tracker, while head movements were monitored by a motion capture system. The display was continuously updated to simulate the perspective projection of a stationary, transparent random dot sphere viewed from the subject's vantage point. Observers were asked to keep their gaze on a red target dot on the surface of the sphere as they moved relative to the display. The movement of the target dot simulated jumps in depth between the front and back surfaces of the sphere along the line of sight. We found the subjects' eyes converged and diverged concomitantly with changes in the perceived depth of the target. Surprisingly, even under binocular viewing conditions, when binocular disparity signals conflict with depth information from motion parallax, transient vergence responses were observed. These results provide the first demonstration that self-induced motion parallax is sufficient to drive vergence eye movements under both monocular and binocular viewing conditions.
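The size of the vergence response such depth jumps demand can be worked out from simple viewing geometry. A minimal sketch follows (the interocular distance and the front/back surface distances are assumed illustrative values, not parameters from the study):

```python
import math

def vergence_deg(ipd_cm, distance_cm):
    """Vergence angle (degrees) needed to binocularly fixate a point
    at distance_cm, given an interocular distance of ipd_cm."""
    return math.degrees(2 * math.atan(ipd_cm / (2 * distance_cm)))

ipd = 6.3                  # assumed interocular distance (cm)
front, back = 45.0, 55.0   # hypothetical front/back surface distances (cm)
jump = vergence_deg(ipd, front) - vergence_deg(ipd, back)
print(f"front: {vergence_deg(ipd, front):.2f} deg, "
      f"back: {vergence_deg(ipd, back):.2f} deg, "
      f"required vergence change: {jump:.2f} deg")
```

With these assumed numbers the target's jump between surfaces calls for a vergence change of roughly a degree and a half, which is well within the range that eye trackers can resolve.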
Jelmer P. De Vries; Ignace T. C. Hooge; Marco A. Wiering; Frans A. J. Verstraten
How longer saccade latencies lead to a competition for salience Journal Article
In: Psychological Science, vol. 22, no. 7, pp. 916–923, 2011.
It has been suggested that independent bottom-up and top-down processes govern saccadic selection. However, recent findings are hard to explain in such terms. We hypothesized that differences in visual-processing time can explain these findings, and we tested this using search displays containing two deviating elements, one requiring a short processing time and one requiring a long processing time. Following short saccade latencies, the deviation requiring less processing time was selected most frequently. This bias disappeared following long saccade latencies. Our results suggest that an element that attracts eye movements following short saccade latencies does so because it is the only element processed at that time. The temporal constraints of processing visual information therefore seem to be a determining factor in saccadic selection. Thus, relative saliency is a time-dependent phenomenon.
Jelmer P. De Vries; Ignace T. C. Hooge; Marco A. Wiering; Frans A. J. Verstraten
Saccadic selection and crowding in visual search: Stronger lateral masking leads to shorter search times Journal Article
In: Experimental Brain Research, vol. 211, no. 1, pp. 119–131, 2011.
We investigated the role of crowding in saccadic selection during visual search. To guide eye movements, often information from the visual periphery is used. Crowding is known to deteriorate the quality of peripheral information. In four search experiments, we studied the role of crowding, by accompanying individual search elements by flankers. Varying the difference between target and flankers allowed us to manipulate crowding strength throughout the stimulus. We found that eye movements are biased toward areas with little crowding for conditions where a target could be discriminated peripherally. Interestingly, for conditions in which the target could not be discriminated peripherally, this bias reversed to areas with strong crowding. This led to shorter search times for a target presented in areas with stronger crowding, compared to a target presented in areas with less crowding. These findings suggest a dual role for crowding in visual search. The presence of flankers similar to the target deteriorates the quality of the peripheral target signal but can also attract eye movements, as more potential targets are present over the area.
Louis F. Dell'Osso; Richard W. Hertle; R. John Leigh; Jonathan B. Jacobs; Susan King; Stacia Yaniglos
Effects of topical brinzolamide on infantile nystagmus syndrome waveforms: Eyedrops for nystagmus Journal Article
In: Journal of Neuro-Ophthalmology, vol. 31, no. 3, pp. 228–233, 2011.
BACKGROUND: Recent advances in infantile nystagmus syndrome (INS) surgery have uncovered the therapeutic importance of proprioception. In this report, we test the hypothesis that the topical carbonic anhydrase inhibitor (CAI) brinzolamide (Azopt) has beneficial effects on measures of nystagmus foveation quality in a subject with INS. METHODS: Eye movement data were taken, using a high-speed digital video recording system, before and after 3 days of the application of topical brinzolamide 3 times daily in each eye. Nystagmus waveforms were analyzed by applying the eXpanded Nystagmus Acuity Function (NAFX) at different gaze angles and determining the longest foveation domain (LFD), and compared to previously published data from the same subject after the use of a systemic CAI, contact lenses, and convergence, and to other subjects before and after eye muscle surgery for INS. RESULTS: Topical brinzolamide improved foveation by both a 51.9% increase in the peak value of the NAFX function (from 0.395 to 0.600) and a 50% broadening of the NAFX vs Gaze Angle curve (the LFD increased from 20 degrees to 30 degrees). The improvements in NAFX after topical brinzolamide were equivalent to those of systemic acetazolamide or eye muscle surgery and were intermediate between those of soft contact lenses and convergence. Topical brinzolamide and contact lenses had equivalent LFD improvements and were less effective than convergence. CONCLUSIONS: In this subject with INS, topical brinzolamide resulted in improved-foveation INS waveforms over a broadened range of gaze angles. Its therapeutic effects were equivalent to those of a systemic CAI. Although a prospective clinical trial is needed to prove efficacy or effectiveness in other subjects, an eyedrops-based therapy for INS may emerge as a viable addition to optical, surgical, behavioral, and systemic drug therapies.
Stefan Van Stigchel; Puck Imants; K. Richard Ridderinkhof
Positive affect increases cognitive control in the antisaccade task Journal Article
In: Brain and Cognition, vol. 75, no. 2, pp. 177–181, 2011.
To delineate the modulatory effects of induced positive affect on cognitive control, the current study investigated whether positive affect increases the ability to suppress a reflexive saccade in the antisaccade task. Results of the antisaccade task showed that participants made fewer erroneous prosaccades in the condition in which a positive mood was induced compared to the neutral condition (i.e., in which no emotional mood was induced). This improvement of oculomotor inhibition was restricted to saccades with an express latency. These results are in line with the idea that the enhanced performance in the positive affect condition could be caused by increased dopaminergic neurotransmission in the brain.
Loni Desanghere; J. J. Marotta
"Graspability" of objects affects gaze patterns during perception and action tasks Journal Article
In: Experimental Brain Research, vol. 212, no. 2, pp. 177–187, 2011.
When grasping an object, our gaze marks key positions to which the fingertips are directed. In contrast, eye fixations during perceptual tasks are typically concentrated on an object's centre of mass (COM). However, previous studies have typically required subjects to either grasp the object at predetermined sites or just look at computer-generated shapes "as a whole". In the current study, we investigated gaze fixations during a reaching and grasping task to symmetrical objects and compared these fixations with those made during a perceptual size estimation task using real (Experiment 1) and computer-generated objects (Experiment 2). Our results demonstrated similar gaze patterns in both perception and action to real objects. Participants first fixated a location towards the top edge of the object, consistent with index finger location during grasping, followed by a subsequent fixation towards the object's COM. In contrast, during the perceptual task to computer-generated objects, an opposite pattern in fixation locations was observed, where first fixations were closer to the COM, followed by a subsequent fixation towards the top edge. Even though differential fixation patterns were observed between studies, the area in which these fixations occurred, between the centre of the object and top edge, was the same in all tasks. These results demonstrate for the first time consistencies in fixation locations across both perception and action tasks, particularly when the same type of information (e.g. object size) is important for the completion of both tasks, with fixation locations increasing relative to the object's COM with increases in block height.
Joost C. Dessing; J. Douglas Crawford; W. Pieter Medendorp
Spatial updating across saccades during manual interception Journal Article
In: Journal of Vision, vol. 11, no. 10, pp. 1–18, 2011.
We studied the effect of intervening saccades on the manual interception of a moving target. Previous studies suggest that stationary reach goals are coded and updated across saccades in gaze-centered coordinates, but whether this generalizes to interception is unknown. Subjects (n = 9) reached to manually intercept a moving target after it was rendered invisible. Subjects either fixated throughout the trial or made a saccade before reaching (both fixation points were in the range of -10° to 10°). Consistent with previous findings and our control experiment with stationary targets, the interception errors depended on the direction of the remembered moving goal relative to the new eye position, as if the target is coded and updated across the saccade in gaze-centered coordinates. However, our results were also more variable in that the interception errors for more than half of our subjects also depended on the goal direction relative to the initial gaze direction. This suggests that the feedforward transformations for interception differ from those for stationary targets. Our analyses show that the interception errors reflect a combination of biases in the (gaze-centered) representation of target motion and in the transformation of goal information into body-centered coordinates for action.
Teresa C. Frohman; Scott L. Davis; Elliot M. Frohman
Modeling the mechanisms of Uhthoff's phenomenon in MS patients with internuclear ophthalmoparesis Journal Article
In: Annals of the New York Academy of Sciences, vol. 1233, no. 1, pp. 313–319, 2011.
Internuclear ophthalmoparesis (INO) is the most common saccadic eye movement disorder observed in patients with multiple sclerosis (MS). It is characterized by slowing of the adducting eye during horizontal saccades, and most commonly results from a demyelinating lesion in the medial longitudinal fasciculus (MLF) within the midline tegmentum of the pons (ventral to the fourth ventricle) or midbrain (ventral to the cerebral aqueduct). Recent research has demonstrated that adduction velocity in MS-related INO, as measured by infrared eye movement recording techniques, is further reduced by a systematic increase in core body temperature (utilizing tube-lined water infusion suits in conjunction with an ingestible temperature probe and transabdominal telemetry) and reversed to baseline with active cooling. These results suggest that INO may represent a model syndrome by which we can carefully study the Uhthoff's phenomenon and objectively test therapeutic agents for its prevention.
Isabella Fuchs; Ulrich Ansorge; Christoph Redies; Helmut Leder
Salience in paintings: Bottom-up influences on eye fixations Journal Article
In: Cognitive Computation, vol. 3, no. 1, pp. 25–36, 2011.
In the current study, we investigated whether visual salience attracts attention in a bottom-up manner. We presented abstract and depictive paintings as well as photographs to naïve participants in free-viewing (Experiment 1) and target-search (Experiment 2) tasks. Image salience was computed in terms of local feature contrasts in color, luminance, and orientation. Based on the theories of stimulus-driven salience effects on attention and fixations, we expected salience effects in all conditions and a characteristic short-lived temporal profile of the salience-driven effect on fixations. Our results confirmed the predictions. Results are discussed in terms of their potential implications.
Shai Gabay; Yoni Pertzov; Avishai Henik
Orienting of attention, pupil size, and the norepinephrine system Journal Article
In: Attention, Perception, and Psychophysics, vol. 73, no. 1, pp. 123–129, 2011.
This research examined a novel suggestion regarding the involvement of the locus coeruleus–norepinephrine (LC–NE) system in orienting reflexive (exogenous) attention. A common procedure for studying exogenous orienting of attention is Posner's cuing task. Importantly, one can manipulate the required level of target processing by changing task requirements, which, in turn, can elicit a different time course of inhibition of return (IOR). An easy task (responding to target location) produces earlier onset IOR, whereas a demanding task (responding to target identity) produces later onset IOR. Aston-Jones and Cohen (Annual Review of Neuroscience, 28, 403–450, 2005) presented a theory suggesting two different modes of LC activity: tonic and phasic. Accordingly, we suggest that in the more demanding task, the LC–NE system is activated in phasic mode, and in the easier task, it is activated in tonic mode. This, in turn, influences the appearance of IOR. We examined this suggestion by measuring participants' pupil size, which has been demonstrated to correlate with the LC–NE system, while they performed cuing tasks. We found a response-locked phasic dilation of the pupil in the discrimination task, as compared with the localization task, which may reflect different firing modes of the LC–NE system during the two tasks. We also demonstrated a correlation between pupil size at the time of cue presentation and magnitude of IOR.
Benjamin Gagl; Stefan Hawelka; Florian Hutzler
Systematic influence of gaze position on pupil size measurement: Analysis and correction Journal Article
In: Behavior Research Methods, vol. 43, no. 4, pp. 1171–1181, 2011.
Cognitive effort is reflected in pupil dilation, but the assessment of pupil size is potentially susceptible to changes in gaze position. This study exemplarily used sentence reading as a stand-in for paradigms that assess pupil size in tasks during which changes in gaze position are unavoidable. The influence of gaze position on pupil size was first investigated by an artificial eye model with a fixed pupil size. Despite its fixed pupil size, the systematic measurements of the artificial eye model revealed substantial gaze-position-dependent changes in the measured pupil size. We evaluated two functions and showed that they can accurately capture and correct the gaze-dependent measurement error of pupil size recorded during a sentence-reading and an effortless z-string-scanning task. Implications for previous studies are discussed, and recommendations for future studies are provided.
Xiao Gao; Quanchuan Wang; Todd Jackson; Guang Zhao; Yi Liang; Hong Chen
Biases in orienting and maintenance of attention among weight dissatisfied women: An eye-movement study Journal Article
In: Behaviour Research and Therapy, vol. 49, no. 4, pp. 252–259, 2011.
Despite evidence indicating fatness and thinness information are processed differently among weight-preoccupied and eating disordered individuals, the exact nature of these attentional biases is not clear. In this research, eye movement (EM) tracking assessed biases in specific component processes of visual attention (i.e., orientation, detection, maintenance and disengagement of gaze) in relation to body-related stimuli among 20 weight dissatisfied (WD) and 20 weight satisfied young women. Eye movements were recorded while participants completed a dot-probe task that featured fatness-neutral and thinness-neutral word pairs. Compared to controls, WD women were more likely to direct their initial gaze toward fatness words, had a shorter mean latency of first fixation on both fatness and thinness words, had longer first fixation on fatness words but shorter first fixation on thinness words, and shorter total gaze duration on thinness words. Reaction time data showed a maintenance bias towards fatness words among the WD women. In sum, results indicated WD women show initial orienting, speeded detection and initial maintenance biases towards fat body words in addition to a speeded detection-avoidance pattern of biases in relation to thin body words. Overall, results highlight the utility of EM tracking as a means of identifying subtle attentional biases among weight dissatisfied women drawn from a non-clinical setting and the need to assess attentional biases as a dynamic process.
Tyler W. Garaas; Marc Pomplun
Distorted object perception following whole-field adaptation of saccadic eye movements Journal Article
In: Journal of Vision, vol. 11, no. 1, pp. 1–11, 2011.
The adaptation of an observer's saccadic eye movements to artificial post-saccadic visual error can lead to perceptual mislocalization of individual, transient visual stimuli. In this study, we demonstrate that simultaneous saccadic adaptation to a consistent error pattern across a large number of saccade vectors is accompanied by corresponding spatial distortions in the perception of persistent objects. To induce this adaptation, we artificially introduced several post-saccadic error patterns, which led to a systematic distortion in participants' oculomotor space and a corresponding distortion in their perception of the relative dimensions of a cross-figure. The results indicate a tight coupling between the oculomotor and visual-perceptual spaces that is not limited to misperception of individual visual locations but also affects metrics in the visual-perceptual space. This coupling suggests that our visual perception is continuously recalibrated by the post-saccadic error signal.
Peggy Gerardin; Valérie Gaveau; Denis Pélisson; Claude Prablanc
Integration of visual information for saccade production Journal Article
In: Human Movement Science, vol. 30, no. 6, pp. 1009–1021, 2011.
To foveate a visual target, subjects usually execute a primary hypometric saccade (S1) bringing the target in perifoveal vision, followed by a corrective saccade (S2) or by more than one S2. It is still debated to what extent these S2 are pre-programmed or dependent only on post-saccadic retinal error. To answer this question, we used a visually-triggered saccade task in which target position and target visibility were manipulated. In one-third of the trials, the target was slightly displaced at S1 onset (so-called double step paradigm) and was maintained until the end of S1, until the start of the first S2 or until the end of the trial. Experiments took place in two visual environments: in the dark and in a dimly lit room with a visible random square background. The results showed that S2 were less accurate for shortest target durations. The duration of post-saccadic visual integration thus appears as the main factor responsible for corrective saccade accuracy. We also found that the visual context modulates primary saccade accuracy, especially for the most hypometric subjects. These findings suggest that the saccadic system is sensitive to the visual properties of the environment and uses different strategies to maintain final gaze accuracy.
Jan Drewes; Julia Trommershäuser; Karl R. Gegenfurtner
Parallel visual search and rapid animal detection in natural scenes Journal Article
In: Journal of Vision, vol. 11, no. 2, pp. 1–21, 2011.
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Recent studies found human response times to be as fast as 120 ms in a dual-presentation (2-AFC) setup (H. Kirchner & S. J. Thorpe, 2005). In most previous experiments, pairs of randomly chosen images were presented, frequently from very different contexts (e.g., a zebra in Africa vs. the New York Skyline). Here, we tested the effect of background size and contiguity on human performance by using a new, contiguous background image set. Individual images contained a single animal surrounded by a large, animal-free image area. The image could be positioned and cropped in such a manner that the animal could occur in one of eight evenly spaced positions on an imaginary circle (radius 10-deg visual angle). In the first (8-Choice) experiment, all eight positions were used, whereas in the second (2-Choice) and third (2-Image) experiments, the animals were only presented on the two positions to the left and right of the screen center. In the third experiment, additional rectangular frames were used to mimic the conditions of earlier studies. Average latencies on successful trials differed only slightly between conditions, indicating that the number of possible animal locations within the display does not affect decision latency. Detailed analysis of saccade targets revealed a preference toward both the head and the center of gravity of the target animal, affecting hit ratio, latency, and the number of saccades required to reach the target. These results illustrate that rapid animal detection operates scene-wide and is fast and efficient even when the animals are embedded in their natural backgrounds.
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold
What can eye movements tell us about Symbol Digit substitution by patients with schizophrenia? Journal Article
In: Schizophrenia Research, vol. 127, no. 1-3, pp. 137–143, 2011.
Substitution tests are sensitive to cognitive impairment and reliably discriminate patients with schizophrenia from healthy individuals better than most other neuropsychological instruments. However, due to their multifaceted nature, substitution test scores cannot pinpoint the specific cognitive deficits that lead to poor performance. The current study investigated eye movements during performance on a substitution test in order to better understand what aspect of substitution test performance underlies schizophrenia-related impairment. Twenty-five patients with schizophrenia and 25 healthy individuals performed a computerized version of the Symbol Digit Modalities Test while their eye movements were monitored. As expected, patients achieved lower overall performance scores. Moreover, analysis of participants' eye movements revealed that patients spent more time searching for the target symbol every time they visited the key area. Patients also made more visits to the key area for each response that they made. Regression analysis suggested that patients' impaired performance on substitution tasks is primarily related to a less efficient visual search and, secondarily, to impaired memory.
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold
Controlling the spotlight of attention: Visual span size and flexibility in schizophrenia Journal Article
In: Neuropsychologia, vol. 49, no. 12, pp. 3370–3376, 2011.
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed.
Ava Elahipanah; Bruce K. Christensen; Eyal M. Reingold
Attentional guidance during visual search among patients with schizophrenia Journal Article
In: Schizophrenia Research, vol. 131, no. 1-3, pp. 224–230, 2011.
The current study investigated visual guidance and saccadic selectivity during visual search among patients with schizophrenia (SCZ). Data from a previous study (Elahipanah, A., Christensen, B.K., & Reingold, E.M., 2008. Visual selective attention among persons with schizophrenia: The distractor ratio effect. Schizophr. Res. 105, 61-67.) suggested that visual guidance for the less frequent distractors in a conjunction search display (i.e., the distractor ratio effect) is intact among SCZ patients. The current study investigated the distractor ratio effect among SCZ patients when: 1) search is more demanding, and 2) search involves motion perception. In addition, eye tracking was employed to directly study saccadic selectivity for the different types of distractors. Twenty-eight SCZ patients receiving a single antipsychotic medication and 26 healthy control participants performed two conjunction search tasks: a within-dimension (i.e., colour × colour) search task; and a cross-dimension (i.e., motion × colour) search task. In each task the relative frequency of distractors was manipulated across 5 levels. Despite slower search times, patients' eye movement data indicated unimpaired visual guidance in both tasks. However, in the motion × colour conjunction search task, patients displayed disproportionate difficulty detecting the moving target when the majority of distractors were also moving. Results demonstrate that bottom-up attentional guidance is unimpaired among patients with SCZ; however, patients' impairment in motion discrimination impedes their ability to detect a moving target against noisy backgrounds.
Jessica J. Ellis; Mackenzie G. Glaholt; Eyal M. Reingold
Eye movements reveal solution knowledge prior to insight Journal Article
In: Consciousness and Cognition, vol. 20, no. 3, pp. 768–776, 2011.
In two experiments, participants solved anagram problems while their eye movements were monitored. Each problem consisted of a circular array of five letters: a scrambled four-letter solution word containing three consonants and one vowel, and an additional randomly-placed distractor consonant. Viewing times on the distractor consonant compared to the solution consonants provided an online measure of knowledge of the solution. Viewing times on the distractor consonant and the solution consonants were indistinguishable early in the trial. In contrast, several seconds prior to the response, viewing times on the distractor consonant decreased in a gradual manner compared to viewing times on the solution consonants. Importantly, this pattern was obtained across both trials in which participants reported the subjective experience of insight and trials in which they did not. These findings are consistent with the availability of partial knowledge of the solution prior to such information being accessible to subjective phenomenal awareness.
Peter J. Etchells; Christopher P. Benton; Casimir J. H. Ludwig; Iain D. Gilchrist
Testing a simplified method for measuring velocity integration in saccades using a manipulation of target contrast Journal Article
In: Frontiers in Psychology, vol. 2, pp. 115, 2011.
A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low-contrast condition than in the high-contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs.
William S. Evans; David Caplan; Gloria Waters
Effects of concurrent arithmetical and syntactic complexity on self-paced reaction times and eye fixations Journal Article
In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1203–1211, 2011.
Two dual-task experiments (replications of Experiments 1 and 2 in Fedorenko, Gibson, & Rohde, Journal of Memory and Language, 56, 246-269, 2007) were conducted to determine whether syntactic and arithmetical operations share working memory resources. Subjects read object- or subject-extracted relative clause sentences phrase by phrase in a self-paced task while simultaneously adding or subtracting numbers. Experiment 2 measured eye fixations as well as self-paced reaction times. In both experiments, there were main effects of syntax and of mathematical operation on self-paced reading times, but no interaction of the two. In the Experiment 2 eye-tracking results, there were main effects of syntax on first-pass reading time and total reading time and an interaction between syntax and math in total reading time on the noun phrase within the relative clause. The findings point to differences in the ways individuals process sentences under these dual-task conditions, as compared with viewing sentences during "normal" reading conditions, and do not support the view that arithmetical and syntactic integration operations share a working memory system.
Leandro Luigi Di Stasi; Adoración Antolí; José J. Cañas
Main sequence: An index for detecting mental workload variation in complex tasks Journal Article
In: Applied Ergonomics, vol. 42, no. 6, pp. 807–813, 2011.
The primary aim of this study was to validate the saccadic main sequence, in particular the peak velocity [PV], as an alternative psychophysiological measure of Mental Workload [MW]. Taking the Wickens' multiple resource model as the theoretical framework of reference, an experiment was conducted using the Firechief® microworld. MW was manipulated by changing the task complexity (between groups) and the amount of training (within groups). There were significant effects on PV from both factors. These results provide additional empirical support for the sensitivity of PV to discriminate MW variation on visual-dynamic complex tasks. These findings and other recent results on PV could provide important information for the development of a new vigilance screening tool for the prevention of accidents in several fields of applied ergonomics.
Leandro Luigi Di Stasi; Adoración Antolí; Miguel Gea; José J. Cañas
A neuroergonomic approach to evaluating mental workload in hypermedia interactions Journal Article
In: International Journal of Industrial Ergonomics, vol. 41, no. 3, pp. 298–304, 2011.
Neuroergonomics could provide on-line methods for measuring mental effort while the operator interacts with hypermedia. We present an experimental study in which 28 participants interacted with a modified version of an existing Spanish e-commerce website in two searching tasks (Goal oriented shopping and Experiential shopping) that demand different amounts of cognitive resources. Mental workload was evaluated multidimensionally, using subjective rating, an interaction index, and eye-related indices. Eye movements and pupil diameter were recorded. The results showed visual scanning behaviour coincided with subjective test scores and performance data in showing a higher information processing load in Goal oriented shopping. However, pupil diameter was able to detect only the variation in user activation during the interaction task, a finding that replicates previous results on the validity of pupil size as an index of arousal. We conclude that a neuroergonomics approach could be a useful method for detecting variations in operators' attentional states. Relevance to industry: These results could provide important information for the development of a new attentional screening tool for the prevention of accidents in several application domains.
Leandro Luigi Di Stasi; D. Contreras; Antonio Cándido; José J. Cañas; A. Catena
Behavioral and eye-movement measures to track improvements in driving skills of vulnerable road users: First-time motorcycle riders Journal Article
In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 14, no. 1, pp. 26–35, 2011.
Motorcyclist deaths and injuries follow the trend in sales rather than in growth in the number of motorcycles, suggesting that fatalities are related to the lack of driver experience with recently purchased motorcycles. The aim of the present investigation was to assess the effects of experience and training in hazard perception. We compared first-time riders (people who are not yet riders/drivers) before and after training in six different riding scenarios to expert motorcycle riders. Thirty-three participants took part in the experiment. Volunteers rode a moped in a fixed-base virtual environment and were presented with a number of preset risky events. We used a multidimensional methodology, including behavioral, subjective and eye-movements data. The results revealed differences between experts and first-time riders, as well as the effect of training on the novice group. As expected, training led to an improvement in the riding skills of first-time riders, reducing the number of accidents, improving their capacity to adapt their speed to the situation, reducing trajectory-corrective movements, and changing their pattern of gaze exploration. We identified several behavioral and eye-related measures that are sensitive to both long-term experience and training in motorcycle riders. These findings will be useful for the design of on-line monitoring systems to evaluate changes in risk behavior and of programs for preventing and controlling risk behavior and improving situation awareness for novice riders, with the ultimate aim of reducing road-user mortality.
Alan F. Dixson; Barnaby J. Dixson
Venus figurines of the european paleolithic: Symbols of fertility or attractiveness? Journal Article
In: Journal of Anthropology, vol. 2011, pp. 1–11, 2011.
The earliest known representations of the human female form are the European Paleolithic “Venus figurines,” ranging in age from 23,000 to 25,000 years. We asked participants to rate images of Paleolithic figurines for their attractiveness, age grouping and reproductive status. Attractiveness was positively correlated with measures of the waist-to-hip ratio (WHR) of figurines, consistent with the “sexually attractive symbolism” hypothesis. However, most figurines had high WHRs (>1.0) and received low attractiveness scores. Participants rated most figurines as representing middle-aged or young adult women, rather than being adolescent or older (postmenopausal). While some were considered to represent pregnant women, consistent with the “fertility symbol” hypothesis, most were judged as being non-pregnant. Some figurines depict obese, large-breasted women, who are in their mature reproductive years and usually regarded as being of lower attractiveness. At the time these figurines were made, Europe was in the grip of a severe ice age. Obesity and survival into middle age after multiple pregnancies may have been rare in the European Upper Paleolithic. We suggest that depictions of corpulent, middle-aged females were not “Venuses” in any conventional sense. They may, instead, have symbolized the hope for survival and longevity, within well-nourished and reproductively successful communities.
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson
Eye-tracking of men's preferences for waist-to-hip ratio and breast size of women Journal Article
In: Archives of Sexual Behavior, vol. 40, no. 1, pp. 43–50, 2011.
Studies of human physical traits and mate preferences often use questionnaires asking participants to rate the attractiveness of images. Female waist-to-hip ratio (WHR), breast size, and facial appearance have all been implicated in assessments by men of female attractiveness. However, very little is known about how men make fine-grained visual assessments of such images. We used eye-tracking techniques to measure the numbers of visual fixations, dwell times, and initial fixations made by men who viewed front-posed photographs of the same woman, computer-morphed so as to differ in her WHR (0.7 or 0.9) and breast size (small, medium, or large). Men also rated these images for attractiveness. Results showed that the initial visual fixation (occurring within 200 ms from the start of each 5 s test) involved either the breasts or the waist. Both these body areas received more first fixations than the face or the lower body (pubic area and legs). Men looked more often and for longer at the breasts, irrespective of the WHR of the images. However, men rated images with an hourglass shape and a slim waist (0.7 WHR) as most attractive, irrespective of breast size. These results provide quantitative data on eye movements that occur during male judgments of the attractiveness of female images, and indicate that assessments of the female hourglass figure probably occur very rapidly.
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson
Eye tracking of men's preferences for female breast size and areola pigmentation Journal Article
In: Archives of Sexual Behavior, vol. 40, no. 1, pp. 51–58, 2011.
Sexual selection via male mate choice has often been implicated in the evolution of permanently enlarged breasts in women. While questionnaire studies have shown that men find female breasts visually attractive, there is very little information about how they make such visual judgments. In this study, we used eye-tracking technology to test two hypotheses: (1) that larger breasts should receive the greatest number of visual fixations and longest dwell times, as well as being rated as most attractive; (2) that lightly pigmented areolae, indicative of youth and nubility, should receive most visual attention and be rated as most attractive. Results showed that men rated images with medium-sized or large breasts as significantly more attractive than small breasts. Images with dark and medium areolar pigmentation were rated as more attractive than images with light areolae. However, variations in breast size had no significant effect on eye-tracking measures (initial visual fixations, number of fixations, and dwell times). The majority of initial fixations during eye-tracking tests were on the areolae. However, areolar pigmentation did not affect measures of visual attention. While these results demonstrate that cues indicative of female sexual maturity (large breasts and dark areolae) are more attractive to men, patterns of eye movements did not differ based on breast size or areolar pigmentation. We conclude that areolar pigmentation, as well as breast size, plays a significant role in men's judgments of female attractiveness. However, fine-grained measures of men's visual attention to these morphological traits do not correlate, in a simplistic way, with their attractiveness judgments.
Isabel Dombrowe; Mieke Donk; Christian N. L. Olivers
The costs of switching attentional sets Journal Article
In: Attention, Perception, and Psychophysics, vol. 73, no. 8, pp. 2481–2488, 2011.
People prioritize those aspects of the visual environment that match their attentional set. In the present study, we investigated whether switching from one attentional set to another is associated with a cost. We asked observers to sequentially saccade toward two color-defined targets, one on the left side of the display, the other on the right, each among a set of heterogeneously colored distractors. The targets were of the same color (no attentional set switch required) or of different colors (switch of attentional sets necessary), with each color consistently tied to a side, to allow observers to maximally prepare for the switch. We found that saccades were less accurate and slower in the switch condition than in the no-switch condition. Furthermore, whenever one of the distractors had the color associated with the other attentional set, a substantial proportion of saccades did not end on the target, but on this distractor. A time course analysis revealed that this distractor preference turned into a target preference after about 250-300 ms, suggesting that this is the time required to switch attentional sets.
Mieke Donk; Wieske Zoest
No control in orientation search: The effects of instruction on oculomotor selection in visual search Journal Article
In: Vision Research, vol. 51, no. 19, pp. 2156–2166, 2011.
The present study aimed to investigate whether people can selectively use salience information in search for a target. Observers were presented with a display consisting of multiple homogeneously oriented background lines and two orientation singletons. The orientation singletons differed in salience, where salience was defined by their orientation contrast relative to the background lines. Observers had the task to make a speeded eye movement towards a target, which was either the most or the least salient element of the two orientation singletons. The specific orientation of the target was either constant or variable over a block of trials such that observers had varying knowledge concerning the target identity. The results demonstrated that instruction - whether people were instructed to move to the most or the least salient item - only minimally affected the results. Short-latency eye movements were completely salience driven; here it did not matter whether people were searching for the most or least salient element. Long-latency eye movements were marginally affected by instruction, in particular when observers knew the target identity. These results suggest that even though people use salience information in oculomotor selection, they cannot use this information in a goal-driven manner. The results are discussed in terms of current models on visual selection.
Nathan Faivre; Sid Kouider
Increased sensory evidence reverses nonconscious priming during crowding Journal Article
In: Journal of Vision, vol. 11, no. 13, pp. 1–13, 2011.
Sensory adaptation reflects the fact that the responsiveness of a perceptual system changes after the processing of a specific stimulus. Two manifestations of this property have been used in order to infer the mechanisms underlying vision: priming, in which the processing of a target is facilitated by prior exposure to a related adaptor, and habituation, in which this processing is hurt by overexposure to an adaptor. In the present study, we investigated the link between priming and habituation by measuring how sensory evidence (short vs. long adaptor exposure) and perceptual awareness (discriminable vs. undiscriminable adaptor stimulus) affects the adaptive response on a related target. Relying on gaze-contingent crowding, we manipulated independently adaptor discriminability and adaptor duration and inferred sensory adaptation from reaction times on the discrimination of a subsequent oriented target. When adaptor orientation was undiscriminable, we found that increasing its duration reversed priming into habituation. When adaptor orientation was discriminable, priming effects were larger after short exposure, but increasing adaptor duration led to a decrease of priming instead of a reverse into habituation. We discuss our results as reflecting changes in the temporal dynamics of angular orientation processing, depending on the mechanisms associated with perceptual awareness and attentional amplification.
Nathan Faivre; Sid Kouider
Multi-feature objects elicit nonconscious priming despite crowding Journal Article
In: Journal of Vision, vol. 11, no. 3, pp. 1–10, 2011.
The conscious representation we build from the visual environment appears jumbled in the periphery, reflecting a phenomenon known as crowding. Yet, it remains possible that object-level representations (i.e., resulting from the binding of the stimulus' different features) are preserved even if they are not consciously accessible. With a paradigm involving gaze-contingent substitution, which allows us to ensure the constant absence of peripheral stimulus discrimination, we show that, despite their jumbled appearance, multi-feature crowded objects, such as faces and directional symbols, are encoded in a nonconscious manner and can influence subsequent behavior. Furthermore, we show that the encoding of complex crowded contents is modulated by attention in the absence of consciousness. These results, in addition to bringing new insights concerning the fate of crowded information, illustrate the potential of the Gaze-Contingent Crowding (GCC) approach for probing nonconscious cognition.
Joost Felius; Valeria L. N. Fu; Eileen E. Birch; Richard W. Hertle; Reed M. Jost; Vidhya Subramanian
Quantifying nystagmus in infants and young children: Relation between foveation and visual acuity deficit Journal Article
In: Investigative Ophthalmology & Visual Science, vol. 52, no. 12, pp. 8724–8731, 2011.
PURPOSE. Nystagmus eye movement data from infants and young children are often not suitable for advanced quantitative analysis. A method was developed to capture useful information from noisy data and validate the technique by showing meaningful relationships with visual functioning. METHODS. Horizontal eye movements from patients (age 5 months–8 years) with idiopathic infantile nystagmus syndrome (INS) were used to develop a quantitative outcome measure that allowed for head and body movement during the recording. The validity of this outcome was assessed by evaluating its relation to visual acuity deficit in 130 subjects, its relation to actual fixation as assessed under simultaneous fundus imaging, its correlation with the established expanded nystagmus acuity function (NAFX), and its test–retest variability. RESULTS. The nystagmus optimal fixation function (NOFF) was defined as the logit transform of the fraction of data points meeting position and velocity criteria within a moving window. A decreasing exponential relationship was found between visual acuity deficit and the NOFF, yielding a 0.75 logMAR deficit for the poorest NOFF and diminishing deficits with improving foveation. As much as 96% of the points identified as foveation events fell within 0.25° of the actual target. Good correlation (r = 0.96) was found between NOFF and NAFX. Test–retest variability was 0.49 logit units. CONCLUSIONS. The NOFF is a feasible method to quantify noisy nystagmus eye movement data. Its validation makes it a promising outcome measure for the progression and treatment of nystagmus during early childhood.
Ian C. Fiebelkorn; John J. Foxe; John S. Butler; Manuel R. Mercier; Adam C. Snyder; Sophie Molholm
Ready, set, reset: Stimulus-locked Periodicity in behavioral performance demonstrates the consequences of cross-sensory phase reset Journal Article
In: Journal of Neuroscience, vol. 31, no. 27, pp. 9971–9981, 2011.
The simultaneous presentation of a stimulus in one sensory modality often enhances target detection in another sensory modality, but the neural mechanisms that govern these effects are still under investigation. Here, we test a hypothesis proposed in the neurophysiological literature: that auditory facilitation of visual-target detection operates through cross-sensory phase reset of ongoing neural oscillations (Lakatos et al., 2009). To date, measurement limitations have prevented this potentially powerful neural mechanism from being directly linked with its predicted behavioral consequences. The present experiment uses a psychophysical approach in humans to demonstrate, for the first time, stimulus-locked periodicity in visual-target detection, following a temporally informative sound. Our data further demonstrate that periodicity in behavioral performance is strongly influenced by the probability of audiovisual co-occurrence. We argue that fluctuations in visual-target detection result from cross-sensory phase reset, both at the moment it occurs and persisting for seconds thereafter. The precise frequency at which this periodicity operates remains to be determined through a method that allows for a higher sampling rate.
Katja Fiehler; Immo Schütz; Denise Y. P. Henriques
Gaze-centered spatial updating of reach targets across different memory delays Journal Article
In: Vision Research, vol. 51, no. 8, pp. 890–897, 2011.
Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we want to test whether reach targets are updated relative to gaze following different time delays. Reaching endpoints systematically varied as a function of gaze relative to target irrespective of whether the action was executed immediately or after a delay of 5 s, 8 s or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present.
Ruth Filik; Emma Barber
Inner speech during silent reading reflects the reader's regional accent Journal Article
In: PLoS ONE, vol. 6, no. 10, pp. e25782, 2011.
While reading silently, we often have the subjective experience of inner speech. However, there is currently little evidence regarding whether this inner voice resembles our own voice while we are speaking out loud. To investigate this issue, we compared reading behaviour of Northern and Southern English participants who have differing pronunciations for words like 'glass', in which the vowel duration is short in a Northern accent and long in a Southern accent. Participants' eye movements were monitored while they silently read limericks in which the end words of the first two lines (e.g., glass/class) would be pronounced differently by Northern and Southern participants. The final word of the limerick (e.g., mass/sparse) then either did or did not rhyme, depending on the reader's accent. Results showed disruption to eye movement behaviour when the final word did not rhyme, determined by the reader's accent, suggesting that inner speech resembles our own voice.
C. D. Fiorillo
Transient activation of midbrain dopamine neurons by reward risk Journal Article
In: Neuroscience, vol. 197, pp. 162–171, 2011.
Dopamine neurons of the ventral midbrain are activated transiently following stimuli that predict future reward. This response has been shown to signal the expected value of future reward, and there is strong evidence that it drives positive reinforcement of stimuli and actions associated with reward in accord with reinforcement learning models. Behavior is also influenced by reward uncertainty, or risk, but it is not known whether the transient response of dopamine neurons is sensitive to reward risk. To investigate this, monkeys were trained to associate distinct visual stimuli with certain or uncertain volumes of juice of nearly the same expected value. In a choice task, monkeys preferred the stimulus predicting an uncertain (risky) reward outcome. In a Pavlovian task, in which the neuronal responses to each stimulus could be measured in isolation, it was found that dopamine neurons were more strongly activated by the stimulus associated with reward risk. Given extensive evidence that dopamine drives reinforcement, these results strongly suggest that dopamine neurons can reinforce risk-seeking behavior (gambling), at least under certain conditions. Risk-seeking behavior has the virtue of promoting exploration and learning, and these results support the hypothesis that dopamine neurons represent the value of exploration.
Gemma Fitzsimmons; Denis Drieghe
The influence of number of syllables on word skipping during reading Journal Article
In: Psychonomic Bulletin & Review, vol. 18, no. 4, pp. 736–741, 2011.
In an eye-tracking experiment, participants read sentences containing a monosyllabic (e.g., grain) or a disyllabic (e.g., cargo) five-letter word. Monosyllabic target words were skipped more often than disyllabic target words, indicating that syllabic structure was extracted from the parafovea early enough to influence the decision of saccade target selection. Fixation times on the target word when it was fixated did not show an influence of number of syllables, demonstrating that number of syllables differentially impacts skipping rates and fixation durations during reading.
Simon Farrell; Casimir J. H. Ludwig; Lucy A. Ellis; Iain D. Gilchrist
Influence of environmental statistics on inhibition of saccadic return Journal Article
In: Proceedings of the National Academy of Sciences, vol. 107, no. 2, pp. 929–934, 2010.
Initiating an eye movement is slowed if the saccade is directed to a location that has been fixated in the recent past. We show that this inhibitory effect is modulated by the temporal statistics of the environment: If a return location is likely to become behaviorally relevant, inhibition of return is absent. By fitting an accumulator model of saccadic decision-making, we show that the inhibitory effect and the sensitivity to local statistics can be dissociated in their effects on the rate of accumulation of evidence, and the threshold controlling the amount of evidence needed to generate a saccade.
Cara R. Featherstone; Patrick Sturt
Because there was a cause for concern: An investigation into a word-specific prediction account of the implicit-causality effect Journal Article
In: Quarterly Journal of Experimental Psychology, vol. 63, no. 1, pp. 3–15, 2010.
In Koornneef and Van Berkum's (2006) eye-tracking study of implicit causality (Caramazza, Grober, Garvey, & Yates, 1977), midsentence delays were observed in the processing of sentences such as "David blamed Linda because she(bias-congruent)/he(bias-incongruent) . . . " when the pronoun following because was incongruent with the bias of the implicit-causality verb. The authors suggested that these immediate delays could be attributed to participants predicting a bias-congruent pronoun after because. According to this explanation, any other word placed after because should cause processing delays. The present investigation aimed to test this explanation by using sentences of the form "David blamed Linda because she(bias-congruent)/he(bias-incongruent)/there(bias-neutral) . . . ". Since significant immediate delays were observed in sentences containing a bias-incongruent pronoun (relative to a bias-congruent pronoun) but not in sentences containing there, the results of this study support an immediate integration effect but pose a problem to the word-specific prediction account of the implicit causality effect.
Heather J. Ferguson; Christoph Scheepers; Anthony J. Sanford
Expectations in counterfactual and theory of mind reasoning Journal Article
In: Language and Cognitive Processes, vol. 25, no. 3, pp. 297–346, 2010.
During language comprehension, information about the world is exchanged and processed. Two essential ingredients of everyday cognition that are employed during language comprehension are the ability to reason counterfactually, and the ability to understand and predict other peoples' behaviour by attributing independent mental states to them (theory of mind). We report two visual-world studies investigating the extent to which the constraints of world knowledge and prior context, as established by a counterfactual (Exp. 1) or a false belief situation (Exp. 2), influence eye-movements directed towards objects in a visual field. Proportions of anticipatory eye-movements indicated an initial visual bias towards contextually supported referents in both studies. Thus, we propose that when visual information is available to reinforce linguistic input, participants expect a context-relevant continuation. Shortly after the critical word onset, the linguistically supported referent was visually favoured, with counterfactual (but not false belief) contexts revealing a temporal delay in integrating factually inconsistent language input. Results are discussed in relation to accounts of discourse processing and the processing relationship between counterfactual and theory of mind reasoning. Finally, we compare findings across different experimental paradigms and propose a novel cluster-analytic procedure to identify time-windows of interest in visual-world data.
Ruth Filik; Linda M. Moxey
The on-line processing of written irony Journal Article
In: Cognition, vol. 116, no. 3, pp. 421–436, 2010.
We report an eye-tracking study in which we investigate the on-line processing of written irony. Specifically, participants' eye movements were recorded while they read sentences which were either intended ironically, or non-ironically, and subsequent text which contained pronominal reference to the ironic (or non-ironic) phrase. Results showed longer reading times for ironic comments compared to a non-ironic baseline, suggesting that additional processing was required in ironic compared to non-ironic conditions. Reading times for subsequent pronominal reference indicated that for ironic materials, both the ironic and literal interpretations of the text were equally accessible during on-line language comprehension. This finding is most in-line with predictions of the graded salience hypothesis, which, in conjunction with the retention hypothesis, states that readers represent both the literal and ironic interpretation of an ironic utterance.
Shai Gabay; Avishai Henik; Libe Gradstein
Ocular motor ability and covert attention in patients with Duane Retraction Syndrome Journal Article
In: Neuropsychologia, vol. 48, no. 10, pp. 3102–3109, 2010.
Is orienting of spatial attention dependent on normal functioning of the ocular motor system? We investigated the role of motor pathways in covert orienting (attentional orienting without performing eye movements) by studying three patients suffering from Duane Retraction Syndrome, a congenital impairment in executing horizontal eye movements restricted to specific gaze directions. Patients showed a typical exogenous (reflexive) attention effect when the target was presented in visual fields to which they could perform an eye movement. This effect was not present when the target was presented in the visual field to which they could not perform eye movements. These findings stress the link between eye movements and attention. Specifically, they bring out the importance of the ability to execute appropriate eye movements for attentional orienting. We suggest that the relevant information about eye movement ability is provided by feedback from lower motor structures to higher attentional areas.
Amanda L. Gamble; Ronald M. Rapee
The time-course of attention to emotional faces in social phobia Journal Article
In: Journal of Behavior Therapy and Experimental Psychiatry, vol. 41, no. 1, pp. 39–44, 2010.
This study investigated the time-course of attentional bias in socially phobic (SP) and non-phobic (NP) adults. Participants viewed angry and happy faces paired with neutral faces (i.e., face-face pairs) and angry, happy and neutral faces paired with household objects (i.e., face-object pairs) for 5000 ms. Eye movement (EM) was measured throughout to assess biases in early and sustained attention. Attentional bias occurred only for face-face pairs. SP adults were vigilant for angry faces relative to neutral faces in the first 500 ms of the 5000 ms exposure, relative to NP adults. SP adults were also vigilant for happy faces over 500 ms, although there were no group-based differences in attention to happy-neutral face pairs. There were no group differences in attention to faces throughout the remainder of the exposure. Results suggest that social phobia is characterised by early vigilance for social cues with no bias in subsequent processing.
Joy J. Geng; Nicholas E. DiQuattro
Attentional capture by a perceptually salient non-target facilitates target processing through inhibition and rapid rejection Journal Article
In: Journal of Vision, vol. 10, no. 6, pp. 1–12, 2010.
Perceptually salient distractors typically interfere with target processing in visual search situations. Here we demonstrate that a perceptually salient distractor that captures attention can nevertheless facilitate task performance if the observer knows that it cannot be the target. Eye-position data indicate that facilitation is achieved by two strategies: inhibition when the first saccade was directed to the target, and rapid rejection when the first saccade was captured by the salient distractor. Both mechanisms relied on the distractor being perceptually salient and not just perceptually different. The results demonstrate how bottom-up attentional capture can play a critical role in constraining top-down attentional selection at multiple stages of processing throughout a single trial.
Kevin Fleming; Carole L. Bandy; Matthew O. Kimble
Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors Journal Article
In: Social Neuroscience, vol. 5, no. 2, pp. 201–220, 2010.
The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.
Tom Foulsham; Joey T. Cheng; Jessica L. Tracy; Joseph Henrich; Alan Kingstone
Gaze allocation in a dynamic situation: Effects of social status and speaking Journal Article
In: Cognition, vol. 117, no. 3, pp. 319–331, 2010.
Human visual attention operates in a context that is complex, social and dynamic. To explore this, we recorded people taking part in a group decision-making task and then showed video clips of these situations to new participants while tracking their eye movements. Observers spent the majority of time looking at the people in the videos, and in particular at their eyes and faces. The social status of the people in the clips had been rated by their peers in the group task, and this status hierarchy strongly predicted where eye-tracker participants looked: high-status individuals were gazed at much more often, and for longer, than low-status individuals, even over short, 20-s videos. Fixation was temporally coupled to the person who was talking at any one time, but this did not account for the effect of social status on attention. These results are consistent with a gaze system that is attuned to the presence of other individuals, to their social status within a group, and to the information most useful for social interaction.
Tom Foulsham; Alan Kingstone
Asymmetries in the direction of saccades during perception of scenes and fractals: Effects of image type and image features Journal Article
In: Vision Research, vol. 50, no. 8, pp. 779–795, 2010.
The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance.
Alessio Fracasso; Alfonso Caramazza; David Melcher
Continuous perception of motion and shape across saccadic eye movements Journal Article
In: Journal of Vision, vol. 10, no. 13, pp. 1–17, 2010.
Although our naïve experience of visual perception is that it is smooth and coherent, the actual input from the retina involves brief and discrete fixations separated by saccadic eye movements. This raises the question of whether our impression of stable and continuous vision is merely an illusion. To test this, we examined whether motion perception can "bridge" a saccade in a two-frame apparent motion display in which the two frames were separated by a saccade. We found that transformational apparent motion, in which an object is seen to change shape and even move in three dimensions during the motion trajectory, continues across saccades. Moreover, participants preferred an interpretation of motion in spatial, rather than retinal, coordinates. The strength of the motion percept depended on the temporal delay between the two motion frames and was sufficient to give rise to a motion-from-shape aftereffect, even when the motion was defined by a second-order shape cue ("phantom transformational apparent motion"). These findings suggest that motion and shape information are integrated across saccades into a single, coherent percept of a moving object.
Tom C. A. Freeman; Rebecca A. Champion; Paul A. Warren
A Bayesian model of perceived head-centered Velocity during smooth pursuit eye movement Journal Article
In: Current Biology, vol. 20, no. 8, pp. 757–762, 2010.
During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion), the perceived direction of moving objects is distorted (trajectory misperception), and self-motion veers away from its true path (e.g., the slalom illusion). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.
Cheryl Frenck-Mestre; Nathalie Zardan; Annie Colas; Alain Ghio
Eye-movement patterns of readers with down syndrome during sentence-processing: An exploratory study Journal Article
In: American Journal on Intellectual and Developmental Disabilities, vol. 115, no. 3, pp. 193–206, 2010.
Eye movements were examined to determine how readers with Down syndrome process sentences online. Participants were 9 individuals with Down syndrome ranging in reading level from Grades 1 to 3 and a reading-level-matched control group. For syntactically simple sentences, the pattern of reading times was similar for the two groups, with longer reading times found at sentence end. This "wrap-up" effect was also found in the first reading of more complex sentences for the control group, whereas it only emerged later for the readers with Down syndrome. Our results provide evidence that eye movements can be used to investigate reading in individuals with Down syndrome and underline the need for future studies.
Hans Peter Frey; Shane P. Kelly; Edmund C. Lalor; John J. Foxe
Early spatial attentional modulation of inputs to the fovea Journal Article
In: Journal of Neuroscience, vol. 30, no. 13, pp. 4547–4551, 2010.
Attending to a specific spatial location modulates responsivity of neurons with receptive fields processing that part of the environment. A major outstanding question is whether attentional modulation operates differently for the foveal (central) representation of the visual field than it does for the periphery. Indeed, recent animal electrophysiological recordings suggest that attention differentially affects spatial integration for central and peripheral receptive fields in primary visual cortex. In human electroencephalographic recordings, spatial attention to peripheral locations robustly modulates activity in early visual regions, but it has been claimed that this mechanism does not operate in foveal vision. Here, however, we show clear early attentional modulation of foveal stimulation with the same timing and cortical sources as seen for peripheral stimuli, demonstrating that attentional gain control operates similarly across the entire field of view. These results imply that covertly attending away from the center of gaze, which is a common paradigm in behavioral and electrophysiological studies of attention, results in a precisely timed push–pull mechanism. While the amplitude of the initial response to stimulation at attended peripheral locations is significantly increased beginning at 80 ms, the amplitude of the response to foveal stimulation begins to be attenuated.
Denise D. J. Grave; Nicola Bruno
The effect of the Müller-Lyer illusion on saccades is modulated by spatial predictability and saccadic latency Journal Article
In: Experimental Brain Research, vol. 203, no. 4, pp. 671–679, 2010.
Studies investigating the effect of visual illusions on saccadic eye movements have provided a wide variety of results. In this study, we test three factors that might explain this variability: the spatial predictability of the stimulus, the duration of the stimulus and the latency of the saccades. Participants made a saccade from one end of a Müller-Lyer figure to the other end. By changing the spatial predictability of the stimulus, we find that the illusion has a clear effect on saccades (16%) when the stimulus is at a highly predictable location. Even stronger effects of the illusion are found when the stimulus location becomes more unpredictable (19-23%). Conversely, manipulating the duration of the stimulus fails to reveal a clear difference in illusion effect. Finally, by computing the illusion effect for different saccadic latencies, we find a maximum illusion effect (about 30%) for very short latencies, which decreases by 7% with every 100 ms latency increase. We conclude that spatial predictability of the stimulus and saccadic latency influence the effect of the Müller-Lyer illusion on saccades.
C. Hemptinne; G. R. Barnes; Marcus Missal
Influence of previous target motion on anticipatory pursuit deceleration Journal Article
In: Experimental Brain Research, vol. 207, no. 3-4, pp. 173–184, 2010.
During visual pursuit of a moving target, expected changes in its trajectory often evoke anticipatory smooth pursuit responses. In the present study, we investigated characteristics of anticipatory smooth pursuit decelerations before a change or the end of a target trajectory. Healthy humans had to pursue with the eyes a target moving along a circular path that predictably or unpredictably reversed direction and then retraced its movement back to the starting position. We found that anticipatory eye decelerations were often evoked in temporal expectation of target reversal and of the end of the trajectory. The latency of anticipatory decelerations initiated before target reversal was variable, had poor temporal accuracy and depended on the history of previous trials. Anticipations of the end of the trajectory were more accurate, more precise and were not influenced by previous trials. In this case, subjects probably based their estimate of the end of the trajectory on the duration just experienced before target motion reversal. These results suggest that anticipatory eye decelerations are based on the characteristics of the current or preceding trials depending on the most reliable information available.
Kurt Debono; Alexander C. Schütz; Miriam Spering; Karl R. Gegenfurtner
Receptive fields for smooth pursuit eye movements and motion perception Journal Article
In: Vision Research, vol. 50, no. 24, pp. 2729–2739, 2010.
Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT).
Adriana M. Degani; Alessander Danna-Dos-Santos; Thomas Robert; Mark L. Latash
Kinematic synergies during saccades involving whole-body rotation: A study based on the uncontrolled manifold hypothesis Journal Article
In: Human Movement Science, vol. 29, no. 2, pp. 243–258, 2010.
We used the framework of the uncontrolled manifold hypothesis to study the coordination of body segments and eye movements in standing persons during the task of shifting the gaze to a target positioned behind the body. The task was performed at a comfortable speed and fast. Multi-segment and head-eye synergies were quantified as co-varied changes in elemental variables (body segment rotations and eye rotation) that stabilized (reduced the across-trials variability of) head rotation in space and gaze trajectory. Head position in space was stabilized by co-varied rotations of body segments prior to the action, during its later stages, and after its completion. The synergy index showed a drop that started prior to the action initiation (anticipatory synergy adjustment) and continued during the phase of quick head rotation. Gaze direction was stabilized only at movement completion and immediately after the saccade at movement initiation under the "fast" instruction. The study documents for the first time anticipatory synergy adjustments during whole-body actions. It shows multi-joint synergies stabilizing head trajectory in space. In contrast, there was no synergy between head and eye rotations during saccades that would achieve a relatively invariant gaze trajectory.
Francesca Delogu; Francesco Vespignani; Anthony J. Sanford
Effects of intensionality on sentence and discourse processing: Evidence from eye-movements Journal Article
In: Journal of Memory and Language, vol. 62, no. 4, pp. 352–379, 2010.
Intensional verbs like want select for clausal complements expressing propositions, though they can be perfectly natural when combined with a direct object. There are two interesting phenomena associated with intensional transitive expressions. First, it has been suggested that their interpretation requires enriched compositional operations, similarly to expressions like began the book (e.g., Pustejovsky, 1995). Secondly, when the object position is filled by an indefinite NP, it preferentially receives an unspecific reading, under which definite anaphora is not supported (e.g., Moltmann, 1997). We report three eye-tracking experiments investigating the time-course of processing of sentence pairs like John wanted a beer. The beer was warm. Consistent with the enriched composition hypothesis, results showed that intensional transitive constructions (e.g., wanted a beer) take longer to process than control expressions (e.g., drank/wanted to drink a beer). However, contrary to previous findings, the processing of the continuation sentence appears not to be affected by whether the definite NP (the beer) can be interpreted as coreferential with the indefinite or not. We interpret the results with respect to accounts of semantic processing relying on the notions of enriched composition and non-actuality implicature.
T. M. Desrochers; D. Z. Jin; N. D. Goodman; Ann M. Graybiel
Optimal habits can develop spontaneously through sensitivity to local cost Journal Article
In: Proceedings of the National Academy of Sciences, vol. 107, no. 47, pp. 20512–20517, 2010.
Habits and rituals are expressed universally across animal species. These behaviors are advantageous in allowing sequential behaviors to be performed without cognitive overload, and appear to rely on neural circuits that are relatively benign but vulnerable to takeover by extreme contexts, neuropsychiatric sequelae, and processes leading to addiction. Reinforcement learning (RL) is thought to underlie the formation of optimal habits. However, this theoretical formulation has principally been tested experimentally in simple stimulus-response tasks with relatively few available responses. We asked whether RL could also account for the emergence of habitual action sequences in realistically complex situations in which no repetitive stimulus-response links were present and in which many response options were present. We exposed naïve macaque monkeys to such experimental conditions by introducing a unique free saccade scan task. Despite the highly uncertain conditions and no instruction, the monkeys developed a succession of stereotypical, self-chosen saccade sequence patterns. Remarkably, these continued to morph for months, long after session-averaged reward and cost (eye movement distance) reached asymptote. Prima facie, these continued behavioral changes appeared to challenge RL. However, trial-by-trial analysis showed that pattern changes on adjacent trials were predicted by lowered cost, and RL simulations that reduced the cost reproduced the monkeys' behavior. Ultimately, the patterns settled into stereotypical saccade sequences that minimized the cost of obtaining the reward on average. These findings suggest that brain mechanisms underlying the emergence of habits, and perhaps unwanted repetitive behaviors in clinical disorders, could follow RL algorithms capturing extremely local explore/exploit tradeoffs.
Leandro Luigi Di Stasi; Mauro Marchitto; Adoración Antolí; Thierry Baccino; José J. Cañas
Approximation of on-line mental workload index in ATC simulated multitasks Journal Article
In: Journal of Air Transport Management, vol. 16, no. 6, pp. 330–333, 2010.
To assess the effects of workload pressures, participants interacted with a modified version of air traffic control simulated tasks requiring different levels of cognitive resources. Changes in mental workload between the levels were evaluated multidimensionally using a subjective rating, performance in a secondary task, and other behavioural indices. Saccadic movements were measured using a video-based eye tracking system. The Wickens multiple resource model is used as a theoretical reference framework. Saccadic peak velocity decreases with increasing cognitive load, in agreement with subjective test scores and performance data. The results demonstrate that saccadic peak velocity is sensitive to variations in mental workload during ecologically valid tasks.
Leandro Luigi Di Stasi; Rebekka Renner; Peggy Staehr; Jens R. Helmert; Boris M. Velichkovsky; José J. Cañas; Andrés Catena; Sebastian Pannasch
Saccadic peak velocity sensitivity to variations in mental workload Journal Article
In: Aviation Space and Environmental Medicine, vol. 81, no. 4, pp. 413–417, 2010.
Introduction: For research and applications in the field of (neuro)ergonomics, it is of increasing importance to have reliable methods for measuring mental workload. In the present study we examined the hypothesis that saccadic eye movements can be used for an online assessment of mental workload. Methods: Saccadic main sequence (amplitude, duration, and peak velocity) was used as a diagnostic measure of mental workload in a virtual driving task with three complexity levels. We tested 18 drivers in the SIRCA driving simulator while their eye movements were recorded. Wickens' multiple resources model was used as the theoretical framework. Changes in mental workload between the complexity levels were evaluated multidimensionally, using subjective rating, performance in a secondary task, and other behavioral indices. Results: Saccadic peak velocity decreased (7.2 visual °/s) as the mental workload increased, as measured by mental workload test scores (15.2 points) and an increase in reaction time on the secondary task (46 ms). Discussion: Saccadic peak velocity is affected by variations in mental workload during ecologically valid tasks. We conclude that saccadic peak velocity could be a useful diagnostic index for the assessment of operators' mental workload and attentional state in hazardous environments.
M. Dyer Diehl; Peter E. Pidcoe
The influence of gaze stabilization and fixation on stepping reactions in younger and older adults Journal Article
In: Journal of Geriatric Physical Therapy, vol. 33, no. 1, pp. 19–25, 2010.
PURPOSE: To date, there has been little evidence to suggest the importance of foveal viewing versus peripheral retina viewing when trying to recover from a perturbation. The purposes of this investigation were to (1) determine whether a visual target can be stabilized on the fovea during a perturbation, (2) determine whether stepping responses following a perturbation are influenced by foveal fixation, and (3) compare gaze stability and stepping responses between young and aging adults. MATERIALS/METHODS: Ten young adults and 10 aging adults were asked to wear an eye-tracking device linked to a kinematic tracking system during perturbations. Perturbations were delivered under 2 conditions: control (no instructions regarding gaze location were given) and earth-fixed (EF) (subjects were asked to fixate gaze on an EF target). Stepping responses were recorded via force plates. Gaze stability, reported as percent foveal fixation (% FF), was calculated from eye-tracking data. Step latencies (SLs) were computed from force plate data. A 2 x 2 analysis of variance was used to assess statistical significance between groups. For the young and aging adults, linear correlations were made to identify relationships between % FF and SL. RESULTS: For each condition, aging adults took longer to initiate a step (control
Steve Dipaola; Caitlin Riebe; James T. Enns
Rembrandt's textural agency: A shared perspective in visual art and science Journal Article
In: Leonardo, vol. 43, no. 2, pp. 145–151, 2010.
This interdisciplinary paper hypothesizes that Rembrandt developed new painterly techniques — novel to the early modern period — in order to engage and direct the gaze of the observer. Though these methods were not based on scientific evidence at the time, we show that they nonetheless are consistent with a contemporary understanding of human vision. Here we propose that artists in the late 'early modern' period developed the technique of textural agency — involving selective variation in image detail — to guide the observer's eye and thereby influence the viewing experience. The paper begins by establishing the well-known use of textural agency among modern portrait artists, before considering the possibility that Rembrandt developed these techniques in his late portraits in reaction to his Italian contemporaries. A final section brings the argument full circle, with the presentation of laboratory evidence that Rembrandt's techniques indeed guide the modern viewer's eye in the way we propose.
Barnaby J. Dixson; Gina M. Grimshaw; Wayne L. Linklater; Alan F. Dixson
Watching the hourglass: Eye tracking reveals men's appreciation of the female form Journal Article
In: Human Nature, vol. 21, no. 4, pp. 355–370, 2010.
Eye-tracking techniques were used to measure men's attention to back-posed and front-posed images of women varying in waist-to-hip ratio (WHR). Irrespective of body pose, men rated images with a 0.7 WHR as most attractive. For back-posed images, initial visual fixations (occurring within 200 milliseconds of commencement of the eye-tracking session) most frequently involved the midriff. Numbers of fixations and dwell times throughout each of the five-second viewing sessions were greatest for the midriff and buttocks. By contrast, visual attention to front-posed images (first fixations, numbers of fixations, and dwell times) mainly involved the breasts, with attention shifting more to the midriff of images with a higher WHR. This report is the first to compare men's eye-tracking responses to back-posed and front-posed images of the female body. Results show the importance of the female midriff and of WHR upon men's attractiveness judgments, especially when viewing back-posed images.
Mieke Donk; Leroy Soesman
Salience is only briefly represented: Evidence from probe-detection performance Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 36, no. 2, pp. 286–302, 2010.
Salient objects in the visual field tend to capture attention. The present study aimed to examine the time-course of salience effects using a probe-detection task. Eight experiments investigated how the salience of different orientation singletons affected probe reaction time as a function of stimulus onset asynchrony (SOA) between the presentation of a singleton display and a probe display. The results demonstrate that salience consistently affected probe reaction time at the shortest SOA. The effect of salience disappeared as SOA increased. These results suggest that contrary to the assumption of major theories on visual selection, salience is transiently represented in our visual system allowing the effects of salience on attentional selection to be only short-lived.
Michael Dorr; T. Martinetz; Karl R. Gegenfurtner; E. Barth
Variability of eye movements when viewing dynamic natural scenes Journal Article
In: Journal of Vision, vol. 10, no. 10, pp. 1–17, 2010.
How similar are the eye movement patterns of different subjects when free viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability usually was much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content as the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimuli types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze.
Denis Drieghe; Alexander Pollatsek; Barbara J. Juhasz; Keith Rayner
Parafoveal processing during reading is reduced across a morphological boundary Journal Article
In: Cognition, vol. 116, no. 1, pp. 136–142, 2010.
A boundary change manipulation was implemented within a monomorphemic word (e.g., fountaom as a preview for fountain), where parallel processing should occur given adequate visual acuity, and within an unspaced compound (bathroan as a preview for bathroom), where some serial processing of the constituents is likely. Consistent with that hypothesis, there was no effect of the preview manipulation on fixation time on the 1st constituent of the compound, whereas there was on the corresponding letters of the monomorphemic word. There was also a larger preview disruption on gaze duration on the whole monomorphemic word than on the compound, suggesting more parallel processing within monomorphemic words.