All EyeLink Publications
All 11,000+ peer-reviewed EyeLink research publications up until 2022 (with some early 2023s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Simona Buetti; Elsa Juan; Mike Rinck; Dirk Kerzel
In: Cognition and Emotion, vol. 26, no. 7, pp. 1176–1188, 2012.
Approach-like actions are initiated faster with stimuli of positive valence. Conversely, avoidance-like actions are initiated faster with threatening stimuli of negative valence. We went beyond reaction time measures and investigated whether threatening stimuli also affect the way in which an action is carried out. Participants moved their hand either away from the picture of a spider (avoidance) or they moved their hand toward the picture of a spider (approach). We compared spider-fearful participants to non-anxious participants. When reaching away from the threatening spider picture, spider-fearful participants moved more directly to the target than controls. When reaching toward the threatening spider, spider-fearful participants moved less directly to the target than controls. Some conditions that showed clear differences in movement trajectories between spider-fearful and control participants were devoid of differences in reaction time. The deviation away from threatening stimuli provides evidence for the claim that affective states like fear leak into movement programming and produce deviations away from threatening stimuli in movement execution. Avoidance of threatening stimuli is rapidly integrated into ongoing motor behaviour in order to increase the distance between the participant's body and the threatening stimulus.
Antimo Buonocore; Robert D. McIntosh
In: Vision Research, vol. 69, pp. 32–41, 2012.
Distractors presented contralateral to a visual target inhibit the generation of saccades within a precise temporal window (Buonocore & McIntosh, 2008; Reingold & Stampe, 2002; Walker, Kentridge, & Findlay, 1995). The greatest 'dip' of saccadic inhibition typically occurs at about 90 ms after distractor onset, with a subsequent recovery period showing an elevated frequency of saccades. It is not yet known how the spatial properties of the distractor stimulus influence the saccadic inhibition signature. To study this, we manipulated the size and the field of presentation of the distractor in four experiments. Experiment 1 demonstrated that the size of a distractor in the contralateral field is logarithmically related to the magnitude of the saccadic inhibition dip. This implies that the probability of a planned saccade being inhibited increases logarithmically with the size of the distractor. Experiment 2 showed a qualitatively similar but more pronounced effect of size for distractors in the ipsilateral field. Experiment 3 compared the effects of contralateral and ipsilateral distractors directly using a within-subjects design, confirming the more pronounced impact of ipsilateral distractors. Experiment 4 replicated the more pronounced effect of ipsilateral distractors in a task in which target side was unpredictable, confirming that the effect does not result merely from participants preparing in advance to ignore events on one side. We suggest that participants are more able to resist contralateral distraction during target selection, as they can more effectively withdraw attention from locations remote from the target than from locations close to it.
Melanie R. Burke; Richard J. Allen; Claudia C. Gonzalez
Eye and hand movements during reconstruction of spatial memory Journal Article
In: Perception, vol. 41, no. 7, pp. 803–818, 2012.
Recent behavioural and biological evidence indicates common mechanisms serving working memory and attention (e.g., Awh et al., 2006, Neuroscience, 139, 201–208). This study explored the role of spatial attention and visual search in an adapted Corsi spatial memory task. Eye movements and touch responses were recorded from participants who recalled locations (signalled by colour or shape change) from an array presented either simultaneously or sequentially. The time delay between target presentation and recall (0, 5, or 10 s) and the number of locations to be remembered (2–5) were also manipulated. Analysis of the response phase revealed subjects were less accurate (touch data) and fixated longer (eye data) when responding to sequentially presented targets, suggesting higher cognitive effort. Fixation duration on target at recall was also influenced by whether spatial location was initially signalled by colour or shape change. Finally, we found that the sequence tasks encouraged longer fixations on the signalled targets than simultaneous viewing during encoding, but no difference was observed during recall. We conclude that the attentional manipulations (colour/shape) mainly affected the eye movement parameters, whereas the memory manipulation (sequential versus simultaneous, number of items) mainly affected the performance of the hand during recall, and thus the latter is more important for ascertaining if an item is remembered or forgotten. In summary, the nature of the stimuli used and how they are presented play key roles in determining subject performance and behaviour during spatial memory tasks.
Robyn Burton; David P. Crabb; Nicholas D. Smith; Fiona C. Glen; David F. Garway-Heath
In: Optometry and Vision Science, vol. 89, no. 9, pp. 1282–1287, 2012.
PURPOSE: Past research has not fully ascertained the extent to which people with glaucoma have difficulties with reading. This study measures change in reading speed when letter contrast is reduced, to test the hypothesis that patients with glaucoma are more sensitive to letter contrast than age-similar visually healthy people. METHODS: Fifty-three patients with glaucoma [mean age: 66 years (standard deviation: 9)] with bilateral visual field (VF) defects and 40 age-similar visually healthy control subjects [mean age: 69 (standard deviation: 8) years] had reading speeds measured using sets of fixed size, non-scrolling texts on a computer setup that incorporated an eye tracking device. All participants had visual acuity ≥6/9, and they underwent standard tests of visual function including Humphrey 24-2 and 10-2 VFs. Potential non-visual confounders were also tested, including cognitive ability (Middlesex Elderly Assessment of Mental Status Test) and general reading ability. Individual average raw reading speeds were calculated from 8 trials (different passages of text) at both 100% and 20% letter contrast. RESULTS: Patients had an average 24-2 VF MD of -6.5 (range: 0.7 to -17.3) dB in the better eye. The overall median reduction in reading speed due to decreasing the contrast of the text in the patients was 20%, but with considerable between-individual variation (interquartile range, 8%-44%). This reduction was significantly greater (p = 0.01) than the controls [median: 11% (interquartile range, 6%-17%)]. Patients and controls had similar average performance on Middlesex Elderly Assessment of Mental Status Test (p = 0.71), a modified Burt Reading ability test (p = 0.33), and a computer-based lexical decision task (p = 0.53) and had similar self-reported day-to-day reading frequency (p = 0.12). 
CONCLUSIONS: Average reduction in reading speed caused by a difference in letter contrast between 100% and 20% is significantly more apparent in patients with glaucoma when compared with visually healthy people with a similar age and similar cognitive/reading ability.
Timothy J. Buschman; Eric L. Denovellis; Cinira Diogo; Daniel Bullock; Earl K. Miller
In: Neuron, vol. 76, no. 4, pp. 838–846, 2012.
Intelligent behavior requires acquiring and following rules. Rules define how our behavior should fit different situations. To understand its neural mechanisms, we simultaneously recorded from multiple electrodes in dorsolateral prefrontal cortex (PFC) while monkeys switched between two rules (respond to color versus orientation). We found evidence that oscillatory synchronization of local field potentials (LFPs) formed neural ensembles representing the rules: there were rule-specific increases in synchrony at "beta" (19–40 Hz) frequencies between electrodes. In addition, individual PFC neurons synchronized to the LFP ensemble corresponding to the current rule (color versus orientation). Furthermore, the ensemble encoding the behaviorally dominant orientation rule showed increased "alpha" (6–16 Hz) synchrony when preparing to apply the alternative (weaker) color rule. This suggests that beta-frequency synchrony selects the relevant rule ensemble, while alpha-frequency synchrony deselects a stronger, but currently irrelevant, ensemble. Synchrony may act to dynamically shape task-relevant neural ensembles out of larger, overlapping circuits.
Brittany N. Bushnell; Anitha Pasupathy
Shape encoding consistency across colors in primate V4 Journal Article
In: Journal of Neurophysiology, vol. 108, no. 5, pp. 1299–1308, 2012.
Neurons in primate cortical area V4 are sensitive to the form and color of visual stimuli. To determine whether form selectivity remains consistent across colors, we studied the responses of single V4 neurons in awake monkeys to a set of two-dimensional shapes presented in two different colors. For each neuron, we chose two colors that were visually distinct and that evoked reliable and different responses. Across neurons, the correlation coefficient between responses in the two colors ranged from -0.03 to 0.93 (median 0.54). Neurons with highly consistent shape responses, i.e., high correlation coefficients, showed greater dispersion in their responses to the different shapes, i.e., greater shape selectivity, and also tended to have less eccentric receptive field locations; among shape-selective neurons, shape consistency ranged from 0.16 to 0.93 (median 0.63). Consistency of shape responses was independent of the physical difference between the stimulus colors used and the strength of neuronal color tuning. Finally, we found that our measurement of shape response consistency was strongly influenced by the number of stimulus repeats: consistency estimates based on fewer than 10 repeats were substantially underestimated. In conclusion, our results suggest that neurons that are likely to contribute to shape perception and discrimination exhibit shape responses that are largely consistent across colors, facilitating the use of simpler algorithms for decoding shape information from V4 neuronal populations.
X. Cai; Camillo Padoa-Schioppa
In: Journal of Neuroscience, vol. 32, no. 11, pp. 3791–3808, 2012.
We examined the activity of individual cells in the primate anterior cingulate cortex during an economic choice task. In the experiments, monkeys chose between different juices offered in variable amounts and subjective values were inferred from the animals' choices. We analyzed neuronal firing rates in relation to a large number of behaviorally relevant variables. We report three main results. First, there were robust differences between the dorsal bank (ACCd) and the ventral bank (ACCv) of the cingulate sulcus. Specifically, neurons in ACCd but not in ACCv were modulated by the movement direction. Furthermore, neurons in ACCd were most active before movement initiation, whereas neurons in ACCv were most active after juice delivery. Second, neurons in both areas encoded the identity and the subjective value of the juice chosen by the animal. In contrast, neither region encoded the value of individual offers. Third, the population of value-encoding neurons in both ACCd and ACCv underwent range adaptation. With respect to economic choice, it is interesting to compare these areas with the orbitofrontal cortex (OFC), previously examined. While neurons in OFC encoded both pre-decision and post-decision variables, neurons in ACCd and ACCv only encoded post-decision variables. Moreover, the encoding of the choice outcome (chosen value and chosen juice) in ACCd and ACCv trailed that found in OFC. These observations indicate that economic decisions (i.e., value comparisons) take place upstream of ACCd and ACCv. The coexistence of choice outcome and movement signals in ACCd suggests that this area constitutes a gateway through which the choice system informs motor systems.
Manuel G. Calvo; Aida Gutiérrez-García; Andrés Fernández-Martín
In: Journal of Cognitive Psychology, vol. 24, no. 1, pp. 66–78, 2012.
We investigated whether anxiety facilitates detection of threat stimuli outside the focus of overt attention, and the time course of the interference produced by threat distractors. Threat or neutral word distractors were presented in attended (foveal) and unattended (parafoveal) locations followed by an unrelated probe word at 300 ms (Experiments 1 and 2) or 1000 ms (Experiment 2) stimulus-onset asynchrony (SOA) in a lexical decision task. Results showed: (1) no effects of trait anxiety on selective saccades to the parafoveal threat distractors; (2) interference with probe processing (i.e., slowed lexical decision times) following a foveal threat distractor at 300 ms SOA for all participants, regardless of anxiety, but only for high-anxiety participants at 1000 ms SOA; and (3) no interference effects of parafoveal threat distractors. These findings suggest that anxiety does not enhance preattentive semantic processing of threat words. Rather, anxiety leads to delays in the inhibitory control of attended task-irrelevant threat stimuli.
James E. Cane; Fabrice Cauchard; Ulrich W. Weger
In: Quarterly Journal of Experimental Psychology, vol. 65, no. 7, pp. 1397–1413, 2012.
Two experiments examined how interruptions impact reading and how interruption lags and the reader's spatial memory affect the recovery from such interruptions. Participants read paragraphs of text and were interrupted unpredictably by a spoken news story while their eye movements were monitored. Time made available for consolidation prior to responding to the interruption did not aid reading resumption. However, providing readers with a visual cue that indicated the interruption location did aid task resumption substantially in Experiment 2. Taken together, the findings show that the recovery from interruptions during reading draws on spatial memory resources and can be aided by processes that support spatial memory. Practical implications are discussed.
Fabrice Cauchard; James E. Cane; Ulrich W. Weger
In: Applied Cognitive Psychology, vol. 26, no. 3, pp. 381–390, 2012.
The current study examined the influence of interruption, background speech and music on reading, using an eye movement paradigm. Participants either read paragraphs while being exposed to background speech or music or read the texts in silence. On half of the trials, participants were interrupted by a 60-second audio story before resuming reading the paragraph. Interruptions increased overall reading time, but the reading of text following the interruption was quicker compared with baseline. Background speech and music did not modulate the interruption effects, but the background speech slowed down the reading rate compared with reading in the presence of music or reading in silence. The increase in reading time was primarily due to an increase in the time spent rereading previously read words. We argue that the observed interruption effects are in line with a theory of long-term working memory, and we present practical implications for the reported background speech effects.
Céline Cavézian; Derick Valadao; Marc Hurwitz; Mohamed Saoud; James Danckert
In: Brain Research, vol. 1437, pp. 89–103, 2012.
The line bisection task is used as a bedside test of spatial neglect patients who typically bisect lines to the right of true centre. To disambiguate the contribution of perceptual from motor biases in bisection, previous research has used the landmark task in which participants determine whether a transection mark is left or right of centre. One recent study, using stimuli that reliably lead to leftward perceptual biases in healthy individuals, found that ocular judgements of centre were biased to the right of centre, whereas manual bisections were biased leftwards. Here we used behavioural measures and functional MRI in healthy individuals to investigate ocular and perceptual judgements of centre. Ocular judgements were made by having participants fixate the centre of a horizontal bar that was dark at one end and light at the other (i.e., a 'greyscale' stimulus), whereas perceptual responses were made by having participants indicate whether a transection mark on the greyscales stimuli was to the left or right of centre. Behavioural data indicated a leftward bias in the first, second and longest fixations for bisection. Moreover, greyscale orientation (i.e., dark extremity to the right or to the left) and stimulus position modulated fixations. In contrast, for the landmark task, initial fixations were attracted towards the transection mark, whereas subsequent fixations were closer to veridical centre. Imaging data showed a large bilateral network, including superior parietal and lingual cortex, that was active for bisection. The landmark task activated a predominantly right hemisphere network including superior and inferior parietal cortices. Taken together these results indicate that very different strategies and underlying neural networks are invoked by the bisection and landmark tasks.
Jessica P. K. Chan; Jennifer D. Ryan
In: Frontiers in Psychology, vol. 3, pp. 87, 2012.
Face recognition is impaired when changes are made to external face features (e.g., hairstyle), even when all internal features (i.e., eyes, nose, mouth) remain the same. Eye movement monitoring was used to determine the extent to which altered hairstyles affect processing of face features, thereby shedding light on how internal and external features are stored in memory. Participants studied a series of faces, followed by a recognition test in which novel, repeated, and manipulated (altered hairstyle) faces were presented. Recognition was higher for repeated than manipulated faces. Although eye movement patterns distinguished repeated from novel faces, viewing of manipulated faces was similar to that of novel faces. Internal and external features may be stored together as one unit in memory; consequently, changing even a single feature alters processing of the other features and disrupts recognition.
Myriam Chanceaux; Jonathan Grainger
In: Acta Psychologica, vol. 141, no. 2, pp. 149–158, 2012.
Three experiments measured serial position functions for character-in-string identification in peripheral vision. In Experiment 1, random strings of five letters (e.g., P F H T M) or five symbols (e.g., λ Б Þ Ψ ¥) were briefly presented to the left or to the right of fixation, and identification accuracy was measured at each position in the string using a post-cued two-alternative forced-choice task (e.g., was there a T or a B at the 4th position?). In Experiment 2 the performance to letter stimuli was compared with familiar two-dimensional shapes (e.g., square, triangle, circle), and in Experiment 3 we compared digit strings (e.g., 6 3 7 9 2) with a set of keyboard symbols (e.g., % S @ < ?). Eye-movements were monitored to ensure central fixation. The results revealed a triple interaction between the nature of the stimulus (letters/digits vs. symbols/shapes), eccentricity, and visual field. In all experiments this interaction reflected a selective left visual field advantage for letter or digit stimuli compared with symbol or shape stimuli for targets presented at the greatest eccentricity. The results are in line with the predictions of the modified receptive field hypothesis proposed by Tydgat and Grainger (2009), and the predictions of the SERIOL2 model of letter string encoding.
Myriam Chanceaux; Françoise Vitu; Luisa Bendahman; Simon Thorpe; Jonathan Grainger
In: Vision Research, vol. 56, pp. 10–19, 2012.
A saccadic choice task (Kirchner & Thorpe, 2006) was used to measure word processing speed in peripheral vision. To do so, word targets were accompanied by distractor stimuli, which were random strings of consonants presented in the contralateral visual field. Participants were also tested with the animal stimuli of Kirchner and Thorpe's original study. The results obtained with the animal stimuli provide a straightforward replication of prior findings, with the estimated fastest saccade latencies to animal targets being 140 ms. With the word targets, the fastest reliable saccades occurred with latencies of around 200 ms. The results obtained with word targets provide a timing estimate for word processing in peripheral vision that is incompatible with sequential-attention-shift (SAS) accounts of eye movement control in reading.
Chi-Chan Chang; Yung-Hui Lee; Chiuhsiang Joe Lin; Bor-Shong Liu; Yuh-Chuan Shih
In: Perceptual and Motor Skills, vol. 114, no. 2, pp. 527–541, 2012.
The study investigated the effectiveness of different camouflage designs using a computational image quality index. Camouflaged human targets were presented on a natural landscape and the targets were designed to be similar to the landscape background with different levels of background similarity as estimated by the image index. The targets were presented in front of the observer (central 0 degrees) or at different angles in the left (-7 degrees, -14 degrees, -21 degrees) or right (+7 degrees, +14 degrees, +21 degrees) visual fields. The observer had to detect the target using peripheral vision if the target appeared in the left and right visual fields. The camouflage effectiveness was assessed by detection hit rates, detection times, and subjective ratings on detection confidence and task difficulty. The study showed that the psychophysical measures correlated well with the image similarity index, suggesting a potentially more efficient camouflage effectiveness assessment tool if the relationship between the psychophysical results and the index can be quantified in the future.
Steve W. C. Chang; Joseph W. Barter; R. Becket Ebitz; Karli K. Watson; Michael L. Platt
In: Proceedings of the National Academy of Sciences, vol. 109, no. 3, pp. 959–964, 2012.
People attend not only to their own experiences, but also to the experiences of those around them. Such social awareness profoundly influences human behavior by enabling observational learning, as well as by motivating cooperation, charity, empathy, and spite. Oxytocin (OT), a neurosecretory hormone synthesized by hypothalamic neurons in the mammalian brain, can enhance affiliation or boost exclusion in different species in distinct contexts, belying any simple mechanistic neural model. Here we show that inhaled OT penetrates the CNS and subsequently enhances the sensitivity of rhesus macaques to rewards occurring to others as well as themselves. Roughly 2 h after inhaling OT, monkeys increased the frequency of prosocial choices associated with reward to another monkey when the alternative was to reward no one. OT also increased attention to the recipient monkey as well as the time it took to render such a decision. In contrast, within the first 2 h following inhalation, OT increased selfish choices associated with delivery of reward to self over a reward to the other monkey, without affecting attention or decision latency. Despite the differences in species typical social behavior, exogenous, inhaled OT causally promotes social donation behavior in rhesus monkeys, as it does in more egalitarian and monogamous ones, like prairie voles and humans, when there is no perceived cost to self. These findings potentially implicate shared neural mechanisms.
E. Charles Leek; Candy Patterson; Matthew A. Paul; Robert D. Rafal; Filipe Cristino
Eye movements during object recognition in visual agnosia Journal Article
In: Neuropsychologia, vol. 50, no. 9, pp. 2142–2153, 2012.
This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape.
Bruno Dagnino; Joaquin Navajas; Mariano Sigman
In: Archives of Sexual Behavior, vol. 41, no. 4, pp. 929–937, 2012.
Evolutionary psychologists have been interested in male preferences for particular female traits that are thought to signal health and reproductive potential. While the majority of studies have focused on what makes specific body traits attractive-such as the waist-to-hip ratio, the body mass index, and breast shape and size-there is little empirical research that has examined individual differences in male preferences for specific traits (e.g., favoring breasts over buttocks). The current study begins to fill this empirical gap. In the first experiment (Study 1), 184 male participants were asked to report their preference between breasts and buttocks on a continuous scale. We found that (1) the distribution of preference was bimodal, indicating that Argentinean males tended to define themselves as favoring breasts or buttocks but rarely thinking that these traits contributed equally to their choice and (2) the distribution was biased towards buttocks. In a second experiment (Study 2), 19 male participants were asked to rate pictures of female breasts and buttocks. This study was necessary to generate three categories of pictures with statistically different ratings (high, medium, and low). In a third experiment (Study 3), we recorded eye-movements of 25 male participants while they chose the more attractive between two women, only seeing their breasts and buttocks. We found that the first and last fixations were systematically directed towards the self-reported preferred trait.
Sangita Dandekar; Jian Ding; Claudio M. Privitera; Thom Carney; Stanley A. Klein
The fixation and saccade P3 Journal Article
In: PLoS ONE, vol. 7, no. 11, pp. e48761, 2012.
Although most instances of object recognition during natural viewing occur in the presence of saccades, the neural correlates of object recognition have almost exclusively been examined during fixation. Recent studies have indicated that there are post-saccadic modulations of neural activity immediately following eye movement landing; however, whether post-saccadic modulations affect relatively late occurring cognitive components such as the P3 has not been explored. The P3 as conventionally measured at fixation is commonly used in brain computer interfaces, hence characterizing the post-saccadic P3 could aid in the development of improved brain computer interfaces that allow for eye movements. In this study, the P3 observed after saccadic landing was compared to the P3 measured at fixation. No significant differences in P3 start time, temporal persistence, or amplitude were found between fixation and saccade trials. Importantly, sensory neural responses canceled in the target minus distracter comparisons used to identify the P3. Our results indicate that relatively late occurring cognitive neural components such as the P3 are likely less sensitive to post-saccadic modulations than sensory neural components and other neural activity occurring shortly after eye movement landing. Furthermore, due to the similarity of the fixation and saccade P3, we conclude that the P3 following saccadic landing could possibly be used as a viable signal in brain computer interfaces allowing for eye movements.
Barry Dauphin; Harold H. Greene
In: Rorschachiana, vol. 33, no. 1, pp. 3–22, 2012.
This study represents the beginning of a systematic effort to utilize eye-movement responses in order to better understand individuals' processing strategies during the Rorschach Inkblot Method (RIM). Eye movements reflect moment-by-moment spatial and temporal processing of visual information and represent a useful approach for studying the RIM with potential clinical implications. Thirteen participants responded to the Rorschach while eye movements were being monitored. Several eye-movement indices were studied which reflect different aspects of information processing. Differences among the Rorschach cards were found for several eye-movement indices. For example, fixation durations were longer during a second viewing of the cards than during the first. This is consonant with an attempt to acquire conceptually difficult information, as participants were reinterpreting the cards. Results are discussed in terms of visual information processing strategies during the RIM and the potential usefulness of eye movements as a response measure to the RIM.
Marco Davare; A. Zénon; Gilles Pourtois; Michel Desmurget; Etienne Olivier
In: Cerebral Cortex, vol. 22, no. 6, pp. 1382–1394, 2012.
The contribution of the posterior parietal cortex (PPC) to visually guided movements has been originally inferred from observations made in patients suffering from optic ataxia. Subsequent electrophysiological studies in monkeys and functional imaging data in humans have corroborated the key role played by the PPC in sensorimotor transformations underlying goal-directed movements, although the exact contribution of this structure remains debated. Here, we used transcranial magnetic stimulation (TMS) to interfere transiently with the function of the left or right medial part of the intraparietal sulcus (mIPS) in healthy volunteers performing visually guided movements with the right hand. We found that a "virtual lesion" of either mIPS increased the scattering in initial movement direction (DIR), leading to longer trajectory and prolonged movement time, but only when TMS was delivered 100-160 ms before movement onset and for movements directed toward contralateral targets. Control experiments showed that deficits in DIR consequent to mIPS virtual lesions resulted from an inappropriate implementation of the motor command underlying the forthcoming movement and not from an inaccurate computation of the target localization. The present study indicates that mIPS plays a causal role in implementing specifically the direction vector of visually guided movements toward objects situated in the contralateral hemifield.
T. S. Davis; R. A. Parker; Paul A. House; E. Bagley; S. Wendelken; R. A. Normann; Bradley Greger
In: Journal of Neural Engineering, vol. 9, no. 6, pp. 1–12, 2012.
OBJECTIVE: It has been hypothesized that a vision prosthesis capable of evoking useful visual percepts can be based upon electrically stimulating the primary visual cortex (V1) of a blind human subject via penetrating microelectrode arrays. As a continuation of earlier work, we examined several spatial and temporal characteristics of V1 microstimulation. APPROACH: An array of 100 penetrating microelectrodes was chronically implanted in V1 of a behaving macaque monkey. Microstimulation thresholds were measured using a two-alternative forced choice detection task. Relative locations of electrically-evoked percepts were measured using a memory saccade-to-target task. MAIN RESULTS: The principal finding was that two years after implantation we were able to evoke behavioural responses to electric stimulation across the spatial extent of the array using groups of contiguous electrodes. Consistent responses to stimulation were evoked at an average threshold current per electrode of 204 ± 49 µA (mean ± std) for groups of four electrodes and 91 ± 25 µA for groups of nine electrodes. Saccades to electrically-evoked percepts using groups of nine electrodes showed that the animal could discriminate spatially distinct percepts with groups having an average separation of 1.6 ± 0.3 mm (mean ± std) in cortex and 1.0° ± 0.2° in visual space. SIGNIFICANCE: These results demonstrate chronic perceptual functionality and provide evidence for the feasibility of a cortically-based vision prosthesis for the blind using penetrating microelectrodes.
Sébastien Coppe; Jean-Jacques Orban de Xivry; Demet Yuksel; Adrian Ivanoiu; Philippe Lefevre
In: Journal of Neurophysiology, vol. 108, no. 11, pp. 2957–2966, 2012.
Prediction is essential for motor function in everyday life. For instance, predictive mechanisms improve the perception of a moving target by increasing eye speed anticipatively, thus reducing motion blur on the retina. Subregions of the frontal lobes play a key role in eye movements in general and in smooth pursuit in particular, but their precise function is not firmly established. Here, the role of frontal lobes in the timing of predictive action is demonstrated by studying predictive smooth pursuit during transient blanking of a moving target in mild frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD) patients. While control subjects and AD patients predictively reaccelerated their eyes before the predicted time of target reappearance, FTLD patients did not. The difference was so dramatic (classification accuracy >90%) that it could even lead to the definition of a new biomarker. In contrast, anticipatory eye movements triggered by the disappearance of the fixation point were still present before target motion onset in FTLD patients and visually guided pursuit was normal in both patient groups compared with controls. Therefore, FTLD patients were only impaired when the predicted timing of an external event was required to elicit an action. These results argue in favor of a role of the frontal lobes in predictive movement timing.
Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier
In: Journal of Eye Movement Research, vol. 5, no. 4, pp. 1–10, 2012.
Models of visual attention rely on visual features such as orientation, intensity or motion to predict which regions of complex scenes attract the gaze of observers. So far, sound has never been considered as a possible feature that might influence eye movements. Here, we evaluate the impact of non-spatial sound on the eye movements of observers watching videos. We recorded eye movements of 40 participants watching assorted videos with and without their related soundtracks. We found that sound affects eye position, fixation duration, and saccade amplitude. The effect of sound is not constant across time but becomes significant around one second after the beginning of video shots.
Christopher D. Cowper-Smith; Gail A. Eskes; David A. Westwood
In: Neuroscience Letters, vol. 531, no. 2, pp. 120–124, 2012.
Inhibition of return (IOR) is thought to improve the efficiency of visual search behaviour by biasing attention, eye movements, or both, toward novel stimuli. Previous research suggests that IOR might arise from early sensory, attentional or motor programming processes. In the present study, we were interested in determining if IOR could instead arise from processes operating at or during response execution, independent from effects on earlier processes. Participants made consecutive saccades (from a common starting location) to central arrowhead stimuli. We removed the possible contribution of early sensory/attentional and motor preparation effects in IOR by allowing participants to fully prepare their responses in advance of an execution signal. When responses were prepared in advance, we continued to observe IOR. Our data therefore provide clear evidence that saccadic IOR can result from an execution bias that might arise from inhibitory effects on motor output neurons, or alternatively from late attentional engagement processes.
Abbie L. Coy; Samuel B. Hutton
In: Visual Cognition, vol. 20, no. 8, pp. 883–901, 2012.
Across three experiments we sought to determine whether extrafoveally presented emotional faces are processed sufficiently rapidly to influence saccade programming. Two rectangular targets containing a neutral and an emotional face were presented either side of a central fixation cross. Participants made prosaccades towards an abrupt luminosity change to the border of one of the rectangles. The faces appeared 150 ms before or simultaneously with the cue. Saccades were faster towards cued rectangles containing emotional compared to neutral faces even when the rectangles were positioned 12 degrees from the fixation cross. When faces were inverted, the facilitative effect of emotion only emerged in the −150 ms SOA condition, possibly reflecting a shift from configural to featural face processing. Together the results suggest that the human brain is highly specialized for processing emotional information and responds very rapidly to the brief presentation of expressive faces, even when these are located outside foveal vision.
Abbie L. Coy; Samuel B. Hutton
In: Psychiatry Research, vol. 196, no. 2-3, pp. 225–229, 2012.
It has been suggested that certain types of auditory hallucinations may be the by-product of a perceptual system that has evolved to be oversensitive to threat-related stimuli. People with schizophrenia and high schizotypes experience visual as well as auditory hallucinations, and have deficits in processing facial emotions. We sought to determine the relationship between visual hallucination proneness and the tendency to misattribute threat and non-threat related emotions to neutral faces. Participants completed a questionnaire assessing visual hallucination proneness (the Revised Visual Hallucination Scale - RVHS). High-scoring individuals (N = 64) were compared to low-scoring individuals (N = 72) on a novel emotion detection task. The high RVHS group made more false positive errors (ascribing emotions to neutral faces) than the low RVHS group, particularly when detecting threat-related emotions. All participants made more false positives when neutral faces were presented to the right visual field than to the left visual field. Our results support continuum models of visual hallucinatory experience in which tolerance for false positives is highest for potentially threatening emotional stimuli and suggest that lateral asymmetries in face processing extend to the misperception of facial emotion.
Sarah C. Creel
Preschoolers' use of talker information in on-line comprehension Journal Article
In: Child Development, vol. 83, no. 6, pp. 2042–2056, 2012.
A crucial part of language development is learning how various social and contextual language-external factors constrain an utterance's meaning. This learning process is poorly understood. Five experiments addressed one hundred thirty-one 3- to 5-year-old children's use of one such socially relevant information source: talker characteristics. Participants learned 2 characters' favorite colors; then, those characters asked participants to select colored shapes, as eye movements were tracked. Results suggest that by preschool, children use voice characteristics predictively to constrain a talker's domain of reference, visually fixating the talker's preferred color shapes. Indicating flexibility, children used talker information when the talker made a request for herself but not when she made a request for the other character. Children's ease at using voice characteristics and possible developmental changes are discussed.
Sarah C. Creel
In: Developmental Science, vol. 15, no. 5, pp. 697–713, 2012.
Recent research has considered the phonological specificity of children's word representations, but few studies have examined the flexibility of those representations. Tolerating acoustic-phonetic deviations has been viewed as a negative in terms of discriminating minimally different word forms, but may be a positive in an increasingly multicultural society where children encounter speakers with variable accents. To explore children's on-line processing of accented speech, preschoolers heard atypically pronounced words (e.g., 'fesh', from fish) and selected pictures from a four-item display as eye movements were tracked. Children recognized similarity between typical and accented variants, selecting the fish overwhelmingly when hearing 'fesh' (Experiment 1), even when a novel-picture alternative was present (Experiment 2). However, eye movements indicated slowed on-line recognition of accented relative to typical variants. Novel-picture selections increased with feature distance from familiar forms, but were similarly sensitive to vowel, onset, and coda changes (Experiment 3). Implications for child accent processing and mutual exclusivity are discussed.
Sarah C. Creel; Melanie A. Tumlin
In: Cognitive Science, vol. 36, no. 2, pp. 224–260, 2012.
Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding.
Lijing Chen; Xingshan Li; Yufang Yang
In: PLoS ONE, vol. 7, no. 8, pp. e42533, 2012.
The relationship between focus and new information has been unclear despite being the subject of several information structure studies. Here, we report an eye-tracking experiment that explored the relationship between them in on-line discourse processing in Chinese reading. Focus was marked by the Chinese focus-particle "shi", which is equivalent to the cleft structure "it was... who..." in English. New information was defined as the target word that was not present in previous contexts. Our results show that, in the target region, focused information was processed more quickly than non-focused information, while new information was processed more slowly than given information. These results reveal differences in processing patterns between focus and newness, and suggest that they are different concepts that relate to different aspects of cognitive processing. In addition, the effect of new/given information occurred in the post-target region for the focus condition, but not for the non-focus condition, suggesting a complex relationship between focus and newness in the discourse integration stage.
Q. Chen; Ralph Weidner; Peter H. Weiss; John C. Marshall; Gereon R. Fink
In: Journal of Cognitive Neuroscience, vol. 24, no. 11, pp. 2223–2236, 2012.
On the basis of double dissociations in clinical symptoms of patients with unilateral visuospatial neglect, neuropsychological research distinguishes between different spatial domains (near vs. far) and different spatial reference frames (egocentric vs. allocentric). In this fMRI study, we investigated the neural interaction between spatial domains and spatial reference frames by constructing a virtual three-dimensional world and asking participants to perform either allocentric or egocentric judgments on an object located in either near or far space. Our results suggest that the parietal-occipital junction (POJ) not only shows a preference for near-space processing but is also involved in the neural interaction between spatial domains and spatial reference frames. Two dissociable streams of visual processing exist in the human brain: a ventral perception-related stream and a dorsal action-related stream. Consistent with the perception-action model, both far-space processing and allocentric judgments draw upon the ventral stream whereas both near-space processing and egocentric judgments draw upon the dorsal stream. POJ showed higher neural activity during allocentric judgments (ventral) in near space (dorsal) and egocentric judgments (dorsal) in far space (ventral) as compared with egocentric judgments (dorsal) in near space (dorsal) and allocentric judgments (ventral) in far space (ventral). Because representations in the dorsal and ventral streams need to interact during allocentric judgments (ventral) in near space (dorsal) and egocentric judgments (dorsal) in far space (ventral), our results imply that POJ is involved in the neural interaction between the two streams. Further evidence for the suggested role of POJ as a neural interface between the dorsal and ventral streams is provided by functional connectivity analysis.
Selmaan Chettih; Frank H. Durgin; Daniel J. Grodner
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 2, pp. 295–311, 2012.
Are processes of figurative comparison and figurative categorization different? An experiment combining alternative-sense and matched-sense metaphor priming with a divided visual field assessment technique sought to isolate processes of comparison and categorization in the 2 cerebral hemispheres. For target metaphors presented in the right visual field/left cerebral hemisphere (RVF/LH), only matched-sense primes were facilitative. Literal primes and alternative-sense primes had no effect on comprehension time compared to the unprimed baseline. The effects of matched-sense primes were additive with the rated conventionality of the targets. For target metaphors presented to the left visual field/right cerebral hemisphere (LVF/RH), matched-sense primes were again additively facilitative. However, alternative-sense primes, though facilitative overall, seemed to eliminate the preexisting advantages of conventional target metaphor senses in the LVF/RH in favor of metaphoric senses similar to those of the primes. These findings are consistent with tightly controlled categorical coding in the LH and coarse, flexible, context-dependent coding in the RH.
Joseph D. Chisholm; Alan Kingstone
In: Attention, Perception, and Psychophysics, vol. 74, no. 2, pp. 257–262, 2012.
Action video game players (AVGPs) have been demonstrated to outperform non-video-game players (NVGPs) on a range of cognitive tasks. Evidence to date suggests that AVGPs' enhanced performance in attention-based tasks can be accounted for by improved top-down control over the allocation of visuospatial attention. Thus, we propose that AVGPs provide a population that can be used to investigate the role of top-down factors in key models of attention. Previous work using AVGPs has indicated that they experience less interfering effects from a salient but task-irrelevant distractor in an attentional capture paradigm (Chisholm, Hickey, Theeuwes, & Kingstone, 2010). Two fundamentally different bottom-up and top-down models of attention can account for this result. In the present study, we compared AVGP and NVGP performance in an oculomotor capture paradigm to address when and how top-down control modulates capture. In tracking eye movements, we acquired an explicit measurement of attention allocation and replicated the covert attention effect that AVGPs are quicker than NVGPs to attend to a target in the presence of a task-irrelevant distractor. Critically, our study reveals that this top-down gain is the result of fewer shifts of attention to the salient distractor, rather than faster disengagement after bottom-up capture has occurred. This supports the theory that top-down control can modulate the involuntary capture of attention.
Jan Churan; Daniel Guitton; Christopher C. Pack
In: PLoS ONE, vol. 7, no. 12, pp. e52195, 2012.
Visual neurons have spatial receptive fields that encode the positions of objects relative to the fovea. Because foveate animals execute frequent saccadic eye movements, this position information is constantly changing, even though the visual world is generally stationary. Interestingly, visual receptive fields in many brain regions have been found to exhibit changes in strength, size, or position around the time of each saccade, and these changes have often been suggested to be involved in the maintenance of perceptual stability. Crucial to the circuitry underlying perisaccadic changes in visual receptive fields is the superior colliculus (SC), a brainstem structure responsible for integrating visual and oculomotor signals. In this work we have studied the time-course of receptive field changes in the SC. We find that the distribution of the latencies of SC responses to stimuli placed outside the fixation receptive field is bimodal: The first mode is comprised of early responses that are temporally locked to the onset of the visual probe stimulus and stronger for probes placed closer to the classical receptive field. We suggest that such responses are therefore consistent with a perisaccadic rescaling, or enhancement, of weak visual responses within a fixed spatial receptive field. The second mode is more similar to the remapping that has been reported in the cortex, as responses are time-locked to saccade onset and stronger for stimuli placed in the postsaccadic receptive field location. We suggest that these two temporal phases of spatial updating may represent different sources of input to the SC.
Ivar A. H. Clemens; Luc P. J. Selen; Mathieu Koppen; W. Pieter Medendorp
Visual stability across combined eye and body motion Journal Article
In: Journal of Vision, vol. 12, no. 12, pp. 1–11, 2012.
In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects towards the fovea (i.e., a foveal bias), or by a combination of both.
Charles Jr. Clifton; Lyn Frazier
In: Quarterly Journal of Experimental Psychology, vol. 65, no. 9, pp. 1760–1776, 2012.
Two experiments are reported that show that introducing event participants in a conjoined noun phrase (NP) favours a single event (collective) interpretation, while introducing them in separate clauses favours a separate events (distributive) interpretation. In Experiment 1, acceptability judgements were speeded when the bias of a predicate toward separate events versus a single event matched the presumed bias of how the subjects' referents were introduced (as conjoined noun phrases or in conjoined clauses). In Experiment 2, reading of a phrase containing an anaphor following conjoined noun phrases was facilitated when the anaphor was they, relative to when it was neither/each of them; the opposite pattern was found when the anaphor followed conjoined clauses. We argue that comprehension was facilitated when the form of an anaphor was appropriate for how its antecedents were introduced. These results address the very general problem of how we individuate entities and events when presented with a complex situation and show that different linguistic forms can guide how we construe a situation. The results also indicate that there is no general penalty for introducing the entities or events separately, in distinct clauses as "split" antecedents.
Charles Clifton; Lyn Frazier
Discourse integration guided by the 'Question under Discussion' Journal Article
In: Cognitive Psychology, vol. 65, no. 2, pp. 352–379, 2012.
What makes a discourse coherent? One potential factor has been discussed in the linguistic literature in terms of a Question under Discussion (QUD). This approach claims that discourse proceeds by continually raising explicit or implicit questions, viewed as sets of alternatives, or competing descriptions of the world. If the interlocutor accepts the question, it becomes the QUD, a narrowed set of alternatives to be addressed (Roberts, in press). Three eye movement recording studies are reported that investigated the effect of a preceding explicit QUD (Experiment 1) or implicit QUD (Experiments 2 and 3) on the processing of following text. Experiment 1 revealed an effect of whether the question queried alternative propositions or alternative entities. Reading times in the answer were faster when the answer it provided was of the same semantic type as was queried. Experiment 2 tested QUDs implied by the alternative description of reality introduced by a non-actuality implicature trigger such as should X or want to X. The results, when combined with the results of Experiment 3 (which ruled out a possible alternative interpretation) showed disrupted reading of a following verb phrase that failed to resolve the implicit QUD (Did the discourse participant actually X?), compared to reading the same material in the absence of a clear QUD. The findings support an online role for QUDs in guiding readers' structuring and interpretation of discourse.
Moreno I. Coco; Frank Keller
In: Cognitive Science, vol. 36, no. 7, pp. 1204–1223, 2012.
Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they are mentioned, leading us to hypothesize that the scan pattern of a participant can be used to predict what he or she will say. We test this hypothesis using a data set of cued scene descriptions of photo-realistic scenes. We demonstrate that similar scan patterns are correlated with similar sentences, within and between visual scenes; and that this correlation holds for three phases of the language production process (target identification, sentence planning, and speaking). We also present a simple algorithm that uses scan patterns to accurately predict associated sentences by utilizing similarity-based retrieval.
Probability of seeing increases saccadic readiness Journal Article
In: PLoS ONE, vol. 7, no. 11, pp. e49454, 2012.
Associating movement directions or endpoints with monetary rewards or costs influences movement parameters in humans, and associating movement directions or endpoints with food reward influences movement parameters in non-human primates. Rewarded movements are facilitated relative to non-rewarded movements. The present study examined to what extent successful foveation facilitated saccadic eye movement behavior, with the hypothesis that foveation may constitute an informational reward. Human adults performed saccades to peripheral targets that either remained visible after saccade completion or were extinguished, preventing visual feedback. Saccades to targets that were systematically extinguished were slower and easier to inhibit than saccades to targets that afforded successful foveation, and this effect was modulated by the probability of successful foveation. These results suggest that successful foveation facilitates behavior, and that obtaining the expected sensory consequences of a saccadic eye movement may serve as a reward for the oculomotor system.
Thérèse Collins; Josh Wallman
In: Journal of Neurophysiology, vol. 107, no. 12, pp. 3342–3348, 2012.
When saccades systematically miss their visual target, their amplitude adjusts, causing the position errors to be progressively reduced. Conventionally, this adaptation is viewed as driven by retinal error (the distance between primary saccade endpoint and visual target). Recent work suggests that the oculomotor system is informed about where the eye lands; thus not all "retinal error" is unexpected. The present study compared two error signals that may drive saccade adaptation: retinal error and prediction error (the difference between predicted and actual postsaccadic images). Subjects made saccades to a visual target in two successive sessions. In the first session, the target was extinguished during saccade execution if the amplitude was smaller (or, in other experiments, greater) than the running median, thereby modifying the average retinal error subjects experienced without moving the target during the saccade as in conventional adaptation paradigms. In the second session, targets were extinguished at the start of saccades and turned back on at a position that reproduced the trial-by-trial retinal error recorded in the first session. Despite the retinal error in the first and second sessions having been identical, adaptation was severalfold greater in the second session, when the predicted target position had been changed. These results argue that the eye knows where it lands and where it expects the target to be, and that deviations from this prediction drive saccade adaptation more strongly than retinal error alone.
Martin C. Cölln; Kerstin Kusch; Jens R. Helmert; Petra Kohler; Boris M. Velichkovsky; Sebastian Pannasch
In: Applied Ergonomics, vol. 43, no. 1, pp. 48–56, 2012.
Frederic Benmussa; Charles Aissani; A. -L. Paradis; Jean Lorenceau
Coupled dynamics of bistable distant motion displays Journal Article
In: Journal of Vision, vol. 11, no. 8, pp. 14–14, 2011.
This study explores the extent to which a display changing periodically in perceptual interpretation through smooth periodic physical changes (an inducer) is able to elicit perceptual switches in an intrinsically bistable distant probe display. Four experiments are designed to examine the coupling strength and bistable dynamics with displays of varying degree of ambiguity, similarity, and symmetry (in motion characteristics) as a function of their locations in visual space. The results show that periodic fluctuations of a remote inducer influence a bistable probe and regulate its dynamics through coupling. Coupling strength mainly depends on the relative locations of the probe display and the contextual inducer in the visual field, with stronger coupling when both displays are symmetrical around the vertical meridian and weaker coupling otherwise. Smaller effects of common fate and symmetry are also found. Altogether, the results suggest that long-range interhemispheric connections, presumably involving the corpus callosum, are able to synchronize perceptual transitions across the vertical meridian. If true, bistable dynamics may provide a behavioral method to probe interhemispheric connectivity in behaving humans. Consequences of these findings for studies using stimuli symmetrical around the vertical meridian are evaluated.
Sarah J. Bayless; Missy Glover; Margot J. Taylor; Roxane J. Itier
In: Visual Cognition, vol. 19, no. 4, pp. 483–510, 2011.
This study investigated the role of the eye region of emotional facial expressions in modulating gaze orienting effects. Eye widening is characteristic of fearful and surprised expressions and may significantly increase the salience of perceived gaze direction. This perceptual bias rather than the emotional valence of certain expressions may drive enhanced gaze orienting effects. In a series of three experiments involving low anxiety participants, different emotional expressions were tested using a gaze-cueing paradigm. Fearful and surprised expressions enhanced the gaze orienting effect compared with happy or angry expressions. Presenting only the eye regions as cueing stimuli eliminated this effect whereas inversion globally reduced it. Both inversion and the use of eyes only attenuated the emotional valence of stimuli without affecting the perceptual salience of the eyes. The findings thus suggest that low-level stimulus features alone are not sufficient to drive gaze orienting modulations by emotion. Rather, they interact with the emotional valence of the expression that appears critical. The study supports the view that rapid processing of fearful and surprised emotional expressions can potentiate orienting to another person's averted gaze in non-anxious people.
Paul M. Bays; Emma Y. Wu; Masud Husain
Storage and binding of object features in visual working memory Journal Article
In: Neuropsychologia, vol. 49, pp. 1622–1631, 2011.
An influential conception of visual working memory is of a small number of discrete memory "slots", each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory. A key prediction of such a model is that the absolute error in recalling multiple features of the same object will be correlated, because features belonging to an attended object are all stored, bound together. Here, we tested participants' ability to reproduce from memory both the color and orientation of an object indicated by a location cue. We observed strong independence of errors between feature dimensions even for large memory arrays (6 items), inconsistent with an upper limit on the number of objects held in memory. Examining the pattern of responses in each dimension revealed a gaussian distribution of error centered on the target value that increased in width under higher memory loads. For large arrays, a subset of responses were not centered on the target but instead predominantly corresponded to mistakenly reproducing one of the other features held in memory. These misreporting responses again occurred independently in each feature dimension, consistent with 'misbinding' due to errors in maintaining the binding information that assigns features to objects. The results support a shared-resource model of working memory, in which increasing memory load incrementally degrades storage of visual information, reducing the fidelity with which both object features and feature bindings are maintained.
Genna M. Bebko; Steven L. Franconeri; Kevin N. Ochsner; Joan Y. Chiao
In: Emotion, vol. 11, no. 4, pp. 732–742, 2011.
Successful emotion regulation is important for maintaining psychological well-being. Although it is known that emotion regulation strategies, such as cognitive reappraisal and expressive suppression, may have divergent consequences for emotional responses, the cognitive processes underlying these differences remain unclear. Here we used eye-tracking to investigate the role of attentional deployment in emotion regulation success. We hypothesized that differences in the deployment of attention to emotional areas of complex visual scenes may be a contributing factor to the differential effects of these two strategies on emotional experience. Eye-movements, pupil size, and self-reported negative emotional experience were measured while healthy young adult participants viewed negative IAPS images and regulated their emotional responses using either cognitive reappraisal or expressive suppression. Consistent with prior work, reappraisers reported feeling significantly less negative than suppressers when regulating emotion as compared to a baseline condition. Across both groups, participants looked away from emotional areas during emotion regulation, an effect that was more pronounced for suppressers. Critically, irrespective of emotion regulation strategy, participants who looked toward emotional areas of a complex visual scene were more likely to experience emotion regulation success. Taken together, these results demonstrate that attentional deployment varies across emotion regulation strategies and that successful emotion regulation depends on the extent to which people look toward emotional content in complex visual scenes.
Stefanie I. Becker
In: PLoS ONE, vol. 6, no. 3, pp. e17740, 2011.
The present study examined the factors that determine dwell times in a visual search task, that is, the duration the gaze remains fixated on an object. It has been suggested that an item's similarity to the search target should be an important determiner of dwell times, because dwell times are taken to reflect the time needed to reject the item as a distractor, and such discriminations are supposed to be harder the more similar an item is to the search target. In line with this similarity view, a previous study showed that, in search for a target ring of thin line-width, dwell times on thin line-width Landolt C distractors were longer than dwell times on Landolt Cs with thick or medium line-width. However, dwell times may have been longer on thin Landolt Cs because the thin line-width made it harder to detect whether the stimuli had a gap or not. Thus, it is an open question whether dwell times on thin line-width distractors were longer because they were similar to the target or because the perceptual decision was more difficult. The present study decoupled similarity from perceptual difficulty by measuring dwell times on thin, medium and thick line-width distractors when the target had thin, medium or thick line-width. The results showed that dwell times were longer on target-similar than target-dissimilar stimuli across all target conditions and regardless of line-width. It is concluded that prior findings of longer dwell times on thin line-width distractors can clearly be attributed to target similarity. As discussed towards the end, the finding of similarity effects on dwell times has important implications for current theories of visual search and eye movement control.
Stefanie I. Becker; Gernot Horstmann; Roger W. Remington
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 6, pp. 1739–1757, 2011.
Several different explanations have been proposed to account for the search asymmetry (SA) for angry schematic faces (i.e., the fact that an angry face target among friendly faces can be found faster than vice versa). The present study critically tested the perceptual grouping account, which holds (a) that the SA is not due to emotional factors but to perceptual differences that render angry faces more salient than friendly faces, and (b) that the SA is mainly attributable to differences in distractor grouping, with angry faces being more difficult to group than friendly faces. In visual search for angry and friendly faces, the number of distractors visible during each fixation was systematically manipulated using the gaze-contingent window technique. The results showed that the SA emerged only when multiple distractors were visible during a fixation, supporting the grouping account. To distinguish between emotional and perceptual factors in the SA, we altered the perceptual properties of the faces (dented-chin face) so that the friendly face became more salient. In line with the perceptual account, the SA was reversed for these faces, showing faster search for a friendly face target. These results indicate that the SA reflects feature-level perceptual grouping, not emotional valence.
Artem V. Belopolsky; Christel Devue; Jan Theeuwes
Angry faces hold the eyes Journal Article
In: Visual Cognition, vol. 19, no. 1, pp. 27–36, 2011.
Efficient processing of complex social and biological stimuli associated with threat is crucial for survival. Previous studies have suggested that threatening stimuli such as angry faces not only capture visual attention, but also delay the disengagement of attention from their location. However, in the previous studies disengagement of attention was measured indirectly and was inferred on the basis of delayed manual responses. The present study employed a novel paradigm that allows direct examination of the delayed disengagement hypothesis by measuring the time it takes to disengage the eyes from threatening stimuli. The results showed that participants were indeed slower to make an eye movement away from an angry face presented at fixation than from either a neutral or a happy face. This finding provides converging support that the delay in disengagement of attention is an important component of processing threatening information.
Artem V. Belopolsky; Jan Theeuwes
In: Neuropsychologia, vol. 49, no. 6, pp. 1605–1610, 2011.
Humans tend to create and maintain internal representations of the environment that help guide actions during everyday activities. Previous studies have shown that the oculomotor system is involved in the coding and maintenance of locations in visual-spatial working memory. In these studies, selection of the relevant location for maintenance in working memory took place on the screen (selecting the location of a dot presented on the screen). The present study extended these findings by showing that the oculomotor system also codes selection of a location from an internal memory representation. Participants first memorized two locations and, after a retention interval, selected one location for further maintenance. The results show that saccade trajectories deviated away from the ultimately remembered location. Furthermore, selection of the location from the memorized representation produced sustained oculomotor preparation toward it. The results show that the oculomotor system is very flexible and plays an active role in coding and maintaining information selected within internal memory representations.
Boaz M. Ben-David; Craig G. Chambers; Meredyth Daneman; M. Kathleen Pichora-Fuller; Eyal M. Reingold; Bruce A. Schneider
In: Journal of Speech, Language, and Hearing Research, vol. 54, pp. 243–262, 2011.
PURPOSE: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. METHOD: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted objects, for example, "Look at the candle." Eye movements captured listeners' ability to differentiate the target noun (candle) from a similar-sounding phonological competitor (e.g., candy or sandal). Manipulations included the presence/absence of noise, the type of phonological overlap in target-competitor pairs, and the number of syllables. RESULTS: Having controlled for age-related differences in word recognition accuracy (by tailoring noise levels), similar online processing profiles were found for younger and older adults when targets were discriminated from competitors that shared onset sounds. Age-related differences were found when target words were differentiated from rhyming competitors and were more extensive in noise. CONCLUSIONS: Real-time spoken word recognition processes appear similar for younger and older adults in most conditions; however, age-related differences may be found in the discrimination of rhyming words (especially in noise), even when there are no age differences in word recognition accuracy. These results highlight the utility of eye movement methodologies for studying speech processing across the life span.
Nick Berggren; Samuel B. Hutton; Nazanin Derakshan
In: Frontiers in Psychology, vol. 2, pp. 280, 2011.
Individuals reporting high levels of distractibility in everyday life show impaired performance in standard laboratory tasks measuring selective attention and inhibitory processes. Similarly, increasing cognitive load leads to more errors/distraction in a variety of cognitive tasks. How these two factors interact is currently unclear; highly distractible individuals may be affected more when their cognitive resources are taxed, or load may linearly affect performance for all individuals. We investigated the relationship between self-reported levels of cognitive failures (CF) in daily life and performance in the antisaccade task, a widely used tool examining attentional control. Levels of concurrent cognitive demand were manipulated using a secondary auditory discrimination task. We found that both levels of self-reported CF and task load increased antisaccade latencies while having no effect on prosaccade eye movements. However, individuals rating themselves as suffering few daily-life distractions showed a load cost comparable to those who experience many. These findings suggest that the likelihood of distraction is governed by the addition of both internal susceptibility and the external current load placed on working memory.
Colas N. Authié; Daniel R. Mestre
In: Vision Research, vol. 51, no. 16, pp. 1791–1800, 2011.
When analyzing gaze behavior during curve driving, it is commonly accepted that gaze is mostly located in the vicinity of the tangent point, being the point where gaze direction tangents the curve inside edge. This approach neglects the fact that the tangent point is actually motionless only in the limit case when the trajectory precisely follows the curve's geometry. In this study, we measured gaze behavior during curve driving, with the general hypothesis that gaze is not static, when exposed to a global optical flow due to self-motion. In order to study spatio-temporal aspects of gaze during curve driving, we used a driving simulator coupled to a gaze recording system. Ten participants drove seven runs on a track composed of eight curves of various radii (50, 100, 200 and 500 m), with each radius appearing in both right and left directions. Results showed that average gaze position was, as previously described, located in the vicinity of the tangent point. However, analysis also revealed the presence of a systematic optokinetic nystagmus (OKN) around the tangent point position. The OKN slow phase direction does not match the local optic flow direction, while slow phase speed is about half of the local speed. Higher directional gains are observed when averaging the entire optical flow projected on the simulation display, whereas the best speed gain is obtained for a 2° optic flow area, centered on the instantaneous gaze location. The present study confirms that the tangent point is a privileged feature in the dynamic visual scene during curve driving, and underlines a contribution of the global optical flow to gaze behavior during active self-motion.
Sheena K. Au-Yeung; Valerie Benson; Monica S. Castelhano; Keith Rayner
In: Autism Research and Treatment, vol. 2011, pp. 1–7, 2011.
Minshew and Goldstein (1998) postulated that autism spectrum disorder (ASD) is a disorder of complex information processing. The current study was designed to investigate this hypothesis. Participants with and without ASD completed two scene perception tasks: a simple “spot the difference” task, where they had to say which one of a pair of pictures had a detail missing, and a complex “which one's weird” task, where they had to decide which one of a pair of pictures looks “weird”. Participants with ASD did not differ from TD participants in their ability to accurately identify the target picture in both tasks. However, analysis of the eye movement sequences showed that participants with ASD viewed scenes differently from normal controls exclusively for the complex task. This difference in eye movement patterns, and the method used to examine different patterns, adds to the knowledge base regarding eye movements and ASD. Our results are in accordance with Minshew and Goldstein's theory that complex, but not simple, information processing is impaired in ASD.
Hazel I. Blythe; Tuomo Häikiö; Raymond Bertram; Simon P. Liversedge; Jukka Hyönä
Reading disappearing text: Why do children refixate words? Journal Article
In: Vision Research, vol. 51, no. 1, pp. 84–92, 2011.
We compared Finnish adults' and children's eye movements on long (8-letter) and short (4-letter) target words embedded in sentences, presented either normally or as disappearing text. When reading disappearing text, where refixations did not provide new information, the 8- to 9-year-old children made fewer refixations but more regressions back to long words compared to when reading normal text. This difference was not observed in the adults or 10- to 11-year-old children. We conclude that the younger children required a second visual sample on the long words, and they adapted their eye movement behaviour when reading disappearing text accordingly.
Carsten N. Boehler; Jens-Max Hopf; Ruth M. Krebs; Christian M. Stoppel; Mircea A. Schoenfeld; Hans-Jochen Heinze; Toemme Noesselt
In: Journal of Neuroscience, vol. 31, no. 13, pp. 4955–4961, 2011.
Dopamine release in cortical and subcortical structures plays a central role in reward-related neural processes. Within this context, dopaminergic inputs are commonly assumed to play an activating role, facilitating behavioral and cognitive operations necessary to obtain a prospective reward. Here, we provide evidence from human fMRI that this activating role can also be mediated by task-demand-related processes and thus extends beyond situations that only entail extrinsic motivating factors. Using a visual discrimination task in which varying levels of task demands were precued, we found enhanced hemodynamic activity in the substantia nigra (SN) for high task demands in the absence of reward or similar extrinsic motivating factors. This observation thus indicates that the SN can also be activated in an endogenous fashion. In parallel to its role in reward-related processes, reward-independent activation likely serves to recruit the processing resources needed to meet enhanced task demands. Simultaneously, activity in a wide network of cortical and subcortical control regions was enhanced in response to high task demands, whereas areas of the default-mode network were deactivated more strongly. The present observations suggest that the SN represents a core node within a broader neural network that adjusts the amount of available neural and behavioral resources to changing situational opportunities and task requirements, which is often driven by extrinsic factors but can also be controlled endogenously.
Patrick A. Bolger; Gabriela Zapata
Semantic categories and context in L2 vocabulary learning Journal Article
In: Language Learning, vol. 61, no. 2, pp. 614–646, 2011.
This article extends recent findings that presenting semantically related vocabulary simultaneously inhibits learning. It does so by adding story contexts. Participants learned 32 new labels for known concepts from four different semantic categories in stories that were either semantically related (one category per story) or semantically unrelated (four categories per story). They then completed a semantic-categorization task, followed by a stimulus-match verification task in an eye-tracker. Results suggest that there may be a slight learning advantage in the semantically unrelated condition. However, our findings are better interpreted in terms of how learning occurred and how vocabulary was processed afterward. Additionally, our results suggest that contextual support from the stories may have surmounted much of the disadvantage attributed to semantic relatedness.
Sabine Born; Dirk Kerzel; Jan Theeuwes
In: Experimental Brain Research, vol. 208, no. 4, pp. 621–631, 2011.
The current study investigated whether capture of the eyes by a salient onset distractor and the disengagement of the eyes from that distractor are driven by the same or by different underlying control modes. A variant of the classic oculomotor capture task was used. Observers had to make a saccade to the only gray circle among red background circles. On some trials, a green (novel color), red (placeholder color) or gray (target color) distractor square was presented with sudden onset. Results showed that when participants reacted fast, oculomotor capture was primarily driven by bottom-up pop-out: both types of distractors (green and gray) that popped out among the red background elements showed more capture than a red distractor that did not pop out. In contrast to initial capture, disengagement of the eyes from the distractor was driven by top-down target-distractor similarity effects. We also examined the time course of this effect. The distractor could change from green to either the target or placeholder color. When the color change was early in time (30-40 ms after its onset), dwell times were strongly affected by the change, whereas the effect on oculomotor capture was weak. Importantly, a change occurring as early as 60-80 ms after distractor onset affected neither capture nor dwell times, corroborating the assumption of parallel programming of saccades.
Robert G. Alexander; Gregory J. Zelinsky
Visual similarity effects in categorical search Journal Article
In: Journal of Vision, vol. 11, no. 8, pp. 1–15, 2011.
We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets.
Gerry T. M. Altmann
In: Acta Psychologica, vol. 137, no. 2, pp. 190–200, 2011.
The delay between the signal to move the eyes, and the execution of the corresponding eye movement, is variable, and skewed, with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man…' or 'the girl…', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word.
George J. Andersen; Rui Ni; Zheng Bian; Julie Kang
In: Accident Analysis and Prevention, vol. 43, no. 1, pp. 381–390, 2011.
The present study examined the limits of spatial attention while performing two driving relevant tasks that varied in depth. The first task was to maintain a fixed headway distance behind a lead vehicle that varied speed. The second task was to detect a light-change target in an array of lights located above the roadway. In Experiment 1 the light detection task required drivers to encode color and location. The results indicated that reaction time to detect a light-change target increased and accuracy decreased as a function of the horizontal location of the light-change target and as a function of the distance from the driver. In a second experiment the light change task was changed to a singleton search (detect the onset of a yellow light) and the workload of the car following task was systematically varied. The results of Experiment 2 indicated that RT increased as a function of task workload, the 2D position of the light-change target and the distance of the light-change target. A multiple regression analysis indicated that the effect of distance on light detection performance was not due to changes in the projected size of the light target. In Experiment 3 we found that the distance effect in detecting a light change could not be explained by the location of eye fixations. The results demonstrate that when drivers attend to a roadway scene attention is limited in three-dimensional space. These results have important implications for developing tests for assessing crash risk among drivers as well as the design of in vehicle technologies such as head-up displays.
Nicola C. Anderson; Evan F. Risko; Alan Kingstone
Exploiting human sensitivity to gaze for tracking the eyes Journal Article
In: Behavior Research Methods, vol. 43, pp. 843–852, 2011.
Given the prevalence, quality, and low cost of web cameras, along with the remarkable human sensitivity to gaze, we examined the accuracy of eye tracking using only a web camera. Participants were shown webcamera recordings of a person's eyes moving 1°, 2°, or 3° of visual angle in one of eight radial directions (north, northeast, east, southeast, etc.), or no eye movement occurred at all. Observers judged whether an eye movement was made and, if so, its direction. Our findings demonstrate that for all saccades of any size or direction, observers can detect and discriminate eye movements significantly better than chance. Critically, the larger the saccade, the better the judgments, so that for eye movements of 3°, people can tell whether an eye movement occurred, and where it was going, at about 90% or better. This simple methodology of using a web camera and looking for eye movements offers researchers a simple, reliable, and cost-effective research tool that can be applied effectively both in studies where it is important that participants maintain central fixation (e.g., covert attention investigations) and in those where they are free or required to move their eyes (e.g., visual search).
Richard Andersson; Fernanda Ferreira; John M. Henderson
In: Acta Psychologica, vol. 137, no. 2, pp. 208–216, 2011.
The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.
Bernhard Angele; Keith Rayner
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1210–1220, 2011.
We used the boundary paradigm (Rayner, 1975) to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n + 2 preprocessing effects. In Experiment 1, we tested whether parafoveal processing of the second word to the right of fixation (n + 2) takes place only when the preceding word (n + 1) is very short (Angele, Slattery, Yang, Kliegl, & Rayner, 2008); word n + 1 was always a three-letter word. Before crossing the boundary, preview for both words n + 1 and n + 2 was either incorrect or correct. In a third condition, only the preview for word n + 1 was incorrect. In Experiment 2, we tested whether word frequency of the preboundary word (n) had an influence on the presence of preview benefit and parafoveal-on-foveal effects. Additionally, Experiment 2 contained a condition in which only preview of n + 2 was incorrect. Our findings suggest that effects of parafoveal n + 2 preprocessing are not modulated by either n + 1 word length or n frequency. Furthermore, we did not observe any evidence of parafoveal lexical preprocessing of word n + 2 in either experiment.
Jens K. Apel; Gavin F. Revie; Angelo Cangelosi; Rob Ellis; Jeremy Goslin; Martin H. Fischer
In: Experimental Brain Research, vol. 214, no. 2, pp. 249–259, 2011.
We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects already while listening to further instructions. This rehearsal behavior broke down after 4 instructions, coincident with participants' instruction span, as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution.
Keith S. Apfelbaum; Sheila E. Blumstein; Bob Mcmurray
In: Psychonomic Bulletin & Review, vol. 18, no. 1, pp. 141–149, 2011.
Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high or low density auditory targets (e.g., horse) and a visual display containing a target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed greater magnitude of priming for semantically related objects of low than of high density words, and no differences for high and low density word targets in the time course of looks to the word semantically related to the target. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delays in the onset of semantic activation for phonologically competing words.
Mathias Abegg; Dara S. Manoach; Jason J. S. Barton
In: Vision Research, vol. 51, no. 1, pp. 215–221, 2011.
Foreknowledge about the demands of an upcoming trial may be exploited to optimize behavioural responses. In the current study we systematically investigated the benefits of partial foreknowledge - that is, when some but not all aspects of a future trial are known in advance. For this we used an ocular motor paradigm with horizontal prosaccades and antisaccades. Predictable sequences were used to create three partial foreknowledge conditions: one with foreknowledge about the stimulus location only, one with foreknowledge about the task set only, and one with foreknowledge about the direction of the required response only. These were contrasted with a condition of no-foreknowledge and a condition of complete foreknowledge about all three parameters. The results showed that the three types of foreknowledge affected saccadic efficiency differently. While foreknowledge about stimulus-location had no effect on efficiency, task foreknowledge had some effect and response-foreknowledge was as effective as complete foreknowledge. Foreknowledge effects on switch costs followed a similar pattern in general, but were not specific for switching of the trial attribute for which foreknowledge was available. We conclude that partial foreknowledge has a differential effect on efficiency, most consistent with preparatory activation of a motor schema in advance of the stimulus, with consequent benefits for both switched and repeated trials.
David J. Acunzo; John M. Henderson
No emotional "Pop-out" effect in natural scene viewing Journal Article
In: Emotion, vol. 11, no. 5, pp. 1134–1143, 2011.
It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided.
Carlos Aguilar; Eric Castet
In: Vision Research, vol. 51, no. 9, pp. 997–1012, 2011.
Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age-related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculomotor patterns, we show that such a minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays.
Mehrnoosh Ahmadi; Mitra Judi; Anahita Khorrami; Javad Mahmoudi-Gharaei; Mehdi Tehrani-Doost
In: Iranian Journal of Psychiatry, vol. 6, no. 3, pp. 87–91, 2011.
OBJECTIVE: Early recognition of negative emotions is considered to be of vital importance. It seems that children with attention deficit hyperactivity disorder have some difficulties recognizing facial emotional expressions, especially negative ones. This study investigated the preference of children with attention deficit hyperactivity disorder for negative (angry, sad) facial expressions compared to normal children. METHOD: Participants were 35 drug-naive boys with ADHD, aged between 6 and 11 years, and 31 matched healthy children. Visual orientation data were recorded while participants viewed face pairs (negative-neutral pairs) shown for 3000 ms. The number of first fixations made to each expression was considered as an index of initial orientation. RESULTS: Group comparisons revealed no difference between the attention deficit hyperactivity disorder group and their matched healthy counterparts in initial orientation of attention. A tendency towards negative emotions was found within the normal group, while no difference was observed between initial allocation of attention toward negative and neutral expressions in children with ADHD. CONCLUSION: Children with attention deficit hyperactivity disorder do not have a significant preference for negative facial expressions. In contrast, normal children have a significant preference for negative facial emotions rather than neutral faces.
Snigdha Banerjee; Adam C. Snyder; Sophie Molholm; John J. Foxe
In: Journal of Neuroscience, vol. 31, no. 27, pp. 9923–9932, 2011.
Oscillatory alpha-band activity (8-15 Hz) over parieto-occipital cortex in humans plays an important role in suppression of processing for inputs at to-be-ignored regions of space, with increased alpha-band power observed over cortex contralateral to locations expected to contain distractors. It is unclear whether similar processes operate during deployment of spatial attention in other sensory modalities. Evidence from lesion patients suggests that parietal regions house supramodal representations of space. The parietal lobes are prominent generators of alpha oscillations, raising the possibility that alpha is a neural signature of supramodal spatial attention. Furthermore, when spatial attention is deployed within vision, processing of task-irrelevant auditory inputs at attended locations is also enhanced, pointing to automatic links between spatial deployments across senses. Here, we asked whether lateralized alpha-band activity is also evident in a purely auditory spatial-cueing task and whether it had the same underlying generator configuration as in a purely visuospatial task. If common to both sensory systems, this would provide strong support for "supramodal" attention theory. Alternately, alpha-band differences between auditory and visual tasks would support a sensory-specific account. Lateralized shifts in alpha-band activity were indeed observed during a purely auditory spatial task. Crucially, there were clear differences in scalp topographies of this alpha activity depending on the sensory system within which spatial attention was deployed. Findings suggest that parietally generated alpha-band mechanisms are central to attentional deployments across modalities but that they are invoked in a sensory-specific manner. The data support an "interactivity account," whereby a supramodal system interacts with sensory-specific control systems during deployment of spatial attention.
Brian Bartek; Richard L. Lewis; Shravan Vasishth; Mason R. Smith
In search of on-line locality effects in sentence comprehension
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1178–1198, 2011.
Many comprehension theories assert that increasing the distance between elements participating in a linguistic relation (e.g., a verb and a noun phrase argument) increases the difficulty of establishing that relation during on-line comprehension. Such locality effects are expected to increase reading times and are thought to reveal properties and limitations of the short-term memory system that supports comprehension. Despite their theoretical importance and putative ubiquity, however, evidence for on-line locality effects is quite narrow linguistically and methodologically: It is restricted almost exclusively to self-paced reading of complex structures involving a particular class of syntactic relation. We present 4 experiments (2 self-paced reading and 2 eyetracking experiments) that demonstrate locality effects in the course of establishing subject-verb dependencies; locality effects are seen even in materials that can be read quickly and easily. These locality effects are observable in the earliest possible eye-movement measures and are of much shorter duration than previously reported effects. To account for the observed empirical patterns, we outline a processing model of the adaptive control of button pressing and eye movements. This model makes progress toward the goal of eliminating linking assumptions between memory constructs and empirical measures in favor of explicit theories of the coordinated control of motor responses and parsing.
Vanessa Baudiffier; David Caplan; Daniel Gaonac'h; David Chesnet
In: Quarterly Journal of Experimental Psychology, vol. 64, no. 10, pp. 1896–1905, 2011.
Two experiments, one using self-paced reading and one using eye tracking, investigated the influence of noun animacy on the processing of subject relative (SR) clauses, object relative (OR) clauses, and object relative clauses with stylistic inversion (OR-SI) in French. Each sentence type was presented in two versions: either with an animate relative clause (RC) subject and an inanimate object (AS/IO), or with an inanimate RC subject and an animate object (IS/AO). There was an interaction between the RC structure and noun animacy. The advantage of SR sentences over OR and OR-SI sentences disappeared in AS/IO sentences. The interaction between animacy and structure occurred in self-paced reading times and in total fixation times on the RCs, but not in first-pass reading times. The results are consistent with a late interaction between animacy and structural processing during parsing and provide data relevant to several models of parsing.
Raymond Bertram; Victor Kuperman; R. Harald Baayen; Jukka Hyönä
In: Scandinavian Journal of Psychology, vol. 52, no. 6, pp. 530–544, 2011.
Inserting a hyphen in Dutch and Finnish compounds is most often illegal given spelling conventions. However, the current two eye movement experiments on triconstituent Dutch compounds like voetbalbond "footballassociation" (Experiment 1) and triconstituent Finnish compounds like lentokenttätaksi "airporttaxi" (Experiment 2) show that inserting a hyphen at constituent boundaries does not have to be detrimental to compound processing. In fact, when hyphens were inserted at the major constituent boundary (voetbal-bond "football-association"; lentokenttä-taksi "airport-taxi"), processing of the first part (voetbal "football"; lentokenttä "airport") turns out to be faster when it is followed by a hyphen than when it is legally concatenated. Inserting a hyphen caused a delay in later eye movement measures, which is probably due to the illegality of inserting hyphens in normally concatenated compounds. However, in both Dutch and Finnish we found a learning effect in the course of the experiment, such that by the end of the experiments hyphenated compounds are read faster than in the beginning of the experiment. By the end of the experiment, compounds with a hyphen at the major constituent boundary were actually processed equally fast as (Dutch) or even faster than (Finnish) their concatenated counterparts. In contrast, hyphenation at the minor constituent boundary (voet-balbond "foot-ballassociation"; lento-kenttätaksi "air-porttaxi") was detrimental to compound processing speed throughout the experiment. The results imply that the hyphen may be an efficient segmentation cue and that spelling illegalities can be overcome easily, as long as they make sense.
Narcisse P. Bichot; Matthew T. Heard; Robert Desimone
In: Journal of Neuroscience Methods, vol. 199, no. 2, pp. 265–272, 2011.
It has been known that monkeys will repeatedly press a bar for electrical stimulation in several different brain structures. We explored the possibility of using electrical stimulation in one such structure, the nucleus accumbens, as a substitute for liquid reward in animals performing a complex task, namely visual search. The animals had full access to water in the cage at all times on days when stimulation was used to motivate them. Electrical stimulation was delivered bilaterally at mirror locations in and around the accumbens, and the animals' motivation to work for electrical stimulation was quantified by the number of trials they performed correctly per unit of time. Acute mapping revealed that stimulation over a large area successfully supported behavioral performance during the task. Performance improved with increasing currents until it reached an asymptotic, theoretically maximal level. Moreover, stimulation with chronically implanted electrodes showed that an animal's motivation to work for electrical stimulation was at least equivalent to, and often better than, when it worked for liquid reward while on water control. These results suggest that electrical stimulation in the accumbens is a viable method of reward in complex tasks. Because this method of reward does not necessitate control over water or food intake, it may offer an alternative to the traditional liquid or food rewards in monkeys, depending on the goals and requirements of the particular research project.
Elina Birmingham; Moran Cerf; Ralph Adolphs
In: Social Neuroscience, vol. 6, no. 5-6, pp. 420–435, 2011.
The amygdala plays a critical role in orienting gaze and attention to socially salient stimuli. Previous work has demonstrated that SM, a patient with rare bilateral amygdala lesions, fails to fixate and make use of information from the eyes in faces. Amygdala dysfunction has also been implicated as a contributing factor in autism spectrum disorders (ASD), consistent with some reports of reduced eye fixations in ASD. Yet, detailed comparisons between ASD and patients with amygdala lesions have not been undertaken. Here we carried out such a comparison, using eye tracking to complex social scenes that contained faces. We presented participants with three task conditions. In the Neutral task, participants had to determine what kind of room the scene took place in. In the Describe task, participants described the scene. In the Social Attention task, participants inferred where people in the scene were directing their attention. SM spent less time looking at the eyes and much more time looking at the mouths than control subjects, consistent with earlier findings. There was also a trend for the ASD group to spend less time on the eyes, although this depended on the particular image and task. Whereas controls and SM looked more at the eyes when the task required social attention, the ASD group did not. This pattern of impairments suggests that SM looks less at the eyes because of a failure in stimulus-driven attention to social features, whereas individuals with ASD look less at the eyes because they are generally insensitive to socially relevant information and fail to modulate attention as a function of task demands. We conclude that the source of the social attention impairment in ASD may arise upstream from the amygdala, rather than in the amygdala itself.
Jan Churan; Daniel Guitton; Christopher C. Pack
In: Journal of Neurophysiology, vol. 106, no. 4, pp. 1862–1874, 2011.
Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.
Laetitia Cirilli; Philippe Timary; Philippe Lefèvre; Marcus Missal
In: PLoS ONE, vol. 6, no. 10, e26699, 2011.
Impulsivity is the tendency to act without forethought. It is a personality trait commonly used in the diagnosis of many psychiatric diseases. In clinical practice, impulsivity is estimated using written questionnaires. However, answers to questions might be subject to personal biases and misinterpretations. In order to alleviate this problem, eye movements could be used to study differences in decision processes related to impulsivity. Therefore, we investigated correlations between impulsivity scores obtained with a questionnaire in healthy subjects and characteristics of their anticipatory eye movements in a simple smooth pursuit task. Healthy subjects were asked to answer the UPPS questionnaire (Urgency Premeditation Perseverance and Sensation seeking Impulsive Behavior scale), which distinguishes four independent dimensions of impulsivity: Urgency, lack of Premeditation, lack of Perseverance, and Sensation seeking. The same subjects took part in an oculomotor task that consisted of pursuing a target that moved in a predictable direction. This task reliably evoked anticipatory saccades and smooth eye movements. We found that eye movement characteristics such as latency and velocity were significantly correlated with UPPS scores. The specific correlations between distinct UPPS factors and oculomotor anticipation parameters support the validity of the UPPS construct and corroborate neurobiological explanations for impulsivity. We suggest that the oculomotor approach of impulsivity put forth in the present study could help bridge the gap between psychiatry and physiology.
Monica S. Castelhano; Chelsea Heaven
In: Psychonomic Bulletin & Review, vol. 18, no. 5, pp. 890–896, 2011.
Although the use of semantic information about the world seems ubiquitous in every task we perform, it is not clear whether we rely on a scene's semantic information to guide attention when searching for something in a specific scene context (e.g., keys in one's living room). To address this question, we compared the contribution of a scene's semantic information (i.e., scene gist) versus learned spatial associations between objects and context. Using the flash-preview-moving-window paradigm (Castelhano & Henderson, Journal of Experimental Psychology: Human Perception and Performance, 33, 753–763, 2007), participants searched for target objects that were placed in either consistent or inconsistent locations and were semantically consistent or inconsistent with the scene gist. The results showed that learned spatial associations were used to guide search even in inconsistent contexts, providing evidence that scene context can affect search performance without consistent scene gist information. We discuss the results in terms of the hierarchical organization of top-down influences of scene context.
Dario Cazzoli; Thomas Nyffeler; Christian W. Hess; René M. Müri
Vertical bias in neglect: A question of time?
In: Neuropsychologia, vol. 49, no. 9, pp. 2369–2374, 2011.
Neglect is defined as the failure to attend and to orient to the contralesional side of space. A horizontal bias towards the right visual field is a classical finding in patients who suffered from a right-hemispheric stroke. The vertical dimension of spatial attention orienting has only sparsely been investigated so far. The aim of this study was to investigate the specificity of this vertical bias by means of a search task, which taps a more pronounced top-down attentional component. Eye movements and behavioural search performance were measured in thirteen patients with left-sided neglect after right hemispheric stroke and in thirteen age-matched controls. Concerning behavioural performance, patients found significantly fewer targets than healthy controls in both the upper and lower left quadrant. However, when targets were located in the lower left quadrant, patients needed more visual fixations (and therefore longer search time) to find them, suggesting a time-dependent vertical bias.
Jessica P. K. Chan; Daphne Kamino; Malcolm A. Binns; Jennifer D. Ryan
In: Frontiers in Psychology, vol. 2, pp. 92, 2011.
Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults' recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces, under free viewing conditions (bases), through a gaze-contingent moving window (own), or a moving window which replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities.
Steve W. C. Chang; Amy A. Winecoff; Michael L. Platt
Vicarious reinforcement in rhesus macaques (Macaca mulatta)
In: Frontiers in Neuroscience, vol. 5, pp. 27, 2011.
What happens to others profoundly influences our own behavior. Such other-regarding outcomes can drive observational learning, as well as motivate cooperation, charity, empathy, and even spite. Vicarious reinforcement may serve as one of the critical mechanisms mediating the influence of other-regarding outcomes on behavior and decision-making in groups. Here we show that rhesus macaques spontaneously derive vicarious reinforcement from observing rewards given to another monkey, and that this reinforcement can motivate them to subsequently deliver or withhold rewards from the other animal. We exploited Pavlovian and instrumental conditioning to associate rewards to self (M1) and/or rewards to another monkey (M2) with visual cues. M1s made more errors in the instrumental trials when cues predicted reward to M2 compared to when cues predicted reward to M1, but made even more errors when cues predicted reward to no one. In subsequent preference tests between pairs of conditioned cues, M1s preferred cues paired with reward to M2 over cues paired with reward to no one. By contrast, M1s preferred cues paired with reward to self over cues paired with reward to both monkeys simultaneously. Rates of attention to M2 strongly predicted the strength and valence of vicarious reinforcement. These patterns of behavior, which were absent in non-social control trials, are consistent with vicarious reinforcement based upon sensitivity to observed, or counterfactual, outcomes with respect to another individual. Vicarious reward may play a critical role in shaping cooperation and competition, as well as motivating observational learning and group coordination in rhesus macaques, much as it does in humans. We propose that vicarious reinforcement signals mediate these behaviors via homologous neural circuits involved in reinforcement learning and decision-making.
Chang-Mao Chao; Philip Tseng; Tzu-Yu Hsu; Jia-Han Su; Ovid J. L. Tzeng; Daisy L. Hung; Neil G. Muggleton; Chi-Hung Juan
In: Human Brain Mapping, vol. 32, no. 11, pp. 1961–1972, 2011.
Predictability in the visual environment provides a powerful cue for efficient processing of scenes and objects. Recently, studies have suggested that the directionality and magnitude of saccade curvature can be informative as to how the visual system processes predictive information. The present study investigated the role of the right posterior parietal cortex (rPPC) in shaping saccade curvatures in the context of predictive and non-predictive visual cues. We used an orienting paradigm that incorporated manipulation of target location predictability and delivered transcranial magnetic stimulation (TMS) over rPPC. Participants were presented with either an informative or uninformative cue to upcoming target locations. Our results showed that rPPC TMS generally increased saccade latency and saccade error rates. Intriguingly, rPPC TMS increased curvatures away from the distractor only when the target location was unpredictable and decreased saccadic errors towards the distractor. These effects on curvature and accuracy were not present when the target location was predictable. These results dissociate the strong contingency between saccade latency and saccade curvature and also indicate that rPPC plays an important role in allocating and suppressing attention to distractors when the target demands visual disambiguation. Furthermore, the present study suggests that, like the frontal eye fields, rPPC is critically involved in determining saccade curvature and the generation of saccadic behaviors under conditions of differing target predictability.
Mara Breen; Charles Clifton
In: Journal of Memory and Language, vol. 64, no. 2, pp. 153–170, 2011.
This paper presents findings from two eye-tracking studies designed to investigate the role of metrical prosody in silent reading. In Experiment 1, participants read stress-alternating noun-verb or noun-adjective homographs (e.g. PREsent, preSENT) embedded in limericks, such that the lexical stress of the homograph, as determined by context, either matched or mismatched the metrical pattern of the limerick. The results demonstrated a reading cost when readers encountered a mismatch between the predicted and actual stress pattern of the word. Experiment 2 demonstrated a similar cost of a mismatch in stress patterns in a context where the metrical constraint was mediated by lexical category rather than by explicit meter. Both experiments demonstrated that readers are slower to read words when their stress pattern does not conform to expectations. The data from these two eye-tracking experiments provide some of the first on-line evidence that metrical information is part of the default representation of a word during silent reading.
Eli Brenner; Jeroen B. J. Smeets
Continuous visual control of interception
In: Human Movement Science, vol. 30, no. 3, pp. 475–494, 2011.
People generally try to keep their eyes on a moving target that they intend to catch or hit. In the present study we first examined how important it is to do so. We did this by designing two interception tasks that promote different eye movements. In both tasks it was important to be accurate relative to both the moving target and the static environment. We found that performance was more variable in relation to the structure that was not fixated. This suggests that the resolution of visual information that is gathered during the movement is important for continuously improving predictions about critical aspects of the task, such as anticipating where the target will be at some time in the future. If so, variability in performance should increase if the target briefly disappears from view just before being hit, even if the target moves completely predictably. We demonstrate that it does, indicating that new visual information is used to improve precision throughout the movement.
Meredith Brown; Anne Pier Salverda; Laura C. Dilley; Michael K. Tanenhaus
In: Psychonomic Bulletin & Review, vol. 18, no. 6, pp. 1189–1196, 2011.
Previous work examining prosodic cues in online spoken-word recognition has focused primarily on local cues to word identity. However, recent studies have suggested that utterance-level prosodic patterns can also influence the interpretation of subsequent sequences of lexically ambiguous syllables (Dilley, Mattys, & Vinke, Journal of Memory and Language, 63:274–294, 2010; Dilley & McAuley, Journal of Memory and Language, 59:294–311, 2008). To test the hypothesis that these distal prosody effects are based on expectations about the organization of upcoming material, we conducted a visual-world experiment. We examined fixations to competing alternatives such as pan and panda upon hearing the target word panda in utterances in which the acoustic properties of the preceding sentence material had been manipulated. The proportions of fixations to the monosyllabic competitor were higher beginning 200 ms after target word onset when the preceding prosody supported a prosodic constituent boundary following pan-, rather than following panda. These findings support the hypothesis that expectations based on perceived prosodic patterns in the distal context influence lexical segmentation and word recognition.
Sarah Brown-Schmidt; Agnieszka E. Konopka
In: Information, vol. 2, no. 4, pp. 302–326, 2011.
This article describes research investigating the on-line processing of language in unscripted conversational settings. In particular, we focus on the process of formulating and interpreting definite referring expressions. Within this domain we present results of two eye-tracking experiments addressing the problem of how speakers interrogate the referential domain in preparation to speak, how they select an appropriate expression for a given referent, and how addressees interpret these expressions. We aim to demonstrate that it is possible, and indeed fruitful, to examine unscripted, conversational language using modified experimental designs and standard hypothesis testing procedures.
Maximilian Bruchmann; Philipp Hintze; Simon Mota
In: Advances in Cognitive Psychology, vol. 7, no. 1, pp. 132–141, 2011.
We studied the effects of selective attention on metacontrast masking with 3 different cueing experiments. Experiments 1 and 2 compared central symbolic and peripheral spatial cues. For symbolic cues, we observed small attentional costs, that is, reduced visibility when the target appeared at an unexpected location, and attentional costs as well as benefits for peripheral cues. All these effects occurred exclusively at the late, ascending branch of the U-shaped metacontrast masking function, although the possibility exists that cueing effects at the early branch were obscured by a ceiling effect due to almost perfect visibility at short stimulus onset asynchronies (SOAs). In Experiment 3, we presented temporal cues that indicated when the target was likely to appear, not where. Here, we also observed cueing effects in the form of higher visibility when the target appeared at the expected point in time compared to when it appeared too early. However, these effects were not restricted to the late branch of the masking function, but enhanced visibility over the complete range of the masking function. Given these results we discuss a common effect for different types of spatial selective attention on metacontrast masking involving neural subsystems that are different from those involved in temporal attention.
Julie N. Buchan; Kevin G. Munhall
In: Perception, vol. 40, no. 10, pp. 1164–1182, 2011.
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.
Brittany N. Bushnell; Philip J. Harding; Yoshito Kosai; Wyeth Bair; Anitha Pasupathy
Equiluminance cells in visual cortical area V4
In: Journal of Neuroscience, vol. 31, no. 35, pp. 12398–12412, 2011.
We report a novel class of V4 neuron in the macaque monkey that responds selectively to equiluminant colored form. These "equiluminance" cells stand apart because they violate the well established trend throughout the visual system that responses are minimal at low luminance contrast and grow and saturate as contrast increases. Equiluminance cells, which compose ∼22% of V4, exhibit the opposite behavior: responses are greatest near zero contrast and decrease as contrast increases. While equiluminance cells respond preferentially to equiluminant colored stimuli, strong hue tuning is not their distinguishing feature: some equiluminance cells do exhibit strong unimodal hue tuning, but many show little or no tuning for hue. We find that equiluminance cells are color and shape selective to a degree comparable with other classes of V4 cells with more conventional contrast response functions. Those more conventional cells respond equally well to achromatic luminance and equiluminant color stimuli, analogous to color luminance cells described in V1. The existence of equiluminance cells, which have not been reported in V1 or V2, suggests that chromatically defined boundaries and shapes are given special status in V4 and raises the possibility that form at equiluminance and form at higher contrasts are processed in separate channels in V4.
Brittany N. Bushnell; Philip J. Harding; Yoshito Kosai; Anitha Pasupathy
In: Journal of Neuroscience, vol. 31, no. 11, pp. 4012–4024, 2011.
Past studies of shape coding in visual cortical area V4 have demonstrated that neurons can accurately represent isolated shapes in terms of their component contour features. However, rich natural scenes contain many partially occluded objects, which have "accidental" contours at the junction between the occluded and occluding objects. These contours do not represent the true shape of the occluded object and are known to be perceptually discounted. To discover whether V4 neurons differentially encode accidental contours, we studied the responses of single neurons in fixating monkeys to complex shapes and contextual stimuli presented either in isolation or adjoining each other to provide a percept of partial occlusion. Responses to preferred contours were suppressed when the adjoining context rendered those contours accidental. The observed suppression was reversed when the partial occlusion percept was compromised by introducing a small gap between the component stimuli. Control experiments demonstrated that these results likely depend on contour geometry at T-junctions and cannot be attributed to mechanisms based solely on local color/luminance contrast, spatial proximity of stimuli, or the spatial frequency content of images. Our findings provide novel insights into how occluded objects, which are fundamental to complex visual scenes, are encoded in area V4. They also raise the possibility that the weakened encoding of accidental contours at the junction between objects could mark the first step of image segmentation along the ventral visual pathway.
Roberto Caldara; Sébastien Miellet
In: Behavior Research Methods, vol. 43, no. 3, pp. 864–878, 2011.
Eye movement data analyses are commonly based on the probability of occurrence of saccades and fixations (and their characteristics) in given regions of interest (ROIs). In this article, we introduce an alternative method for computing statistical fixation maps of eye movements, iMap, based on an approach inspired by methods used in functional magnetic resonance imaging. Importantly, iMap does not require the a priori segmentation of the experimental images into ROIs. With iMap, fixation data are first smoothed by convolution with Gaussian kernels to generate three-dimensional fixation maps. This procedure embodies eyetracker accuracy, but the Gaussian kernel can also be flexibly set to represent acuity or attentional constraints. In addition, the smoothed fixation data generated by iMap conform to the assumptions of the robust statistical random field theory (RFT) approach, which is applied thereafter to assess significant fixation spots and differences across the three-dimensional fixation maps. The RFT corrects for the multiple statistical comparisons generated by the numerous pixels constituting the digital images. To illustrate the processing steps of iMap, we provide sample analyses of real eye movement data from face, visual scene, and memory processing. The iMap MATLAB toolbox is editable and freely available for download online (www.unifr.ch/psycho/ibmlab/).
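The smoothing step the abstract describes can be illustrated with a minimal sketch. This is not the iMap toolbox (which is MATLAB-based and adds the RFT statistics); it only shows the idea of accumulating duration-weighted fixations on a pixel grid and convolving with a Gaussian kernel. The function name and the `(x, y, duration)` input format are assumptions for illustration.

```python
import numpy as np

def fixation_map(fixations, image_shape, sigma=10.0):
    """Sketch of a smoothed fixation map in the spirit of iMap.

    fixations: iterable of (x, y, duration) tuples (hypothetical format).
    sigma: Gaussian kernel width in pixels, standing in for eyetracker
           accuracy or an acuity/attentional constraint.
    """
    h, w = image_shape
    raw = np.zeros((h, w))
    for x, y, dur in fixations:
        if 0 <= int(y) < h and 0 <= int(x) < w:
            raw[int(y), int(x)] += dur  # duration-weighted fixation counts

    # Separable Gaussian convolution (rows, then columns)
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, raw)
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, smoothed)
    return smoothed

# Example: two fixations of 250 ms and 400 ms on a 100x100 image
smoothed = fixation_map([(30, 40, 250), (60, 50, 400)], (100, 100), sigma=5.0)
```

In the actual method, such maps are computed per condition and their pixelwise differences are then tested under random field theory.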
Manuel G. Calvo; Lauri Nummenmaa
In: Vision Research, vol. 51, no. 15, pp. 1751–1759, 2011.
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions.
Rouwen Cañal-Bruland; Simone Lotz; Norbert Hagemann; Jörg Schorer; Bernd Strauss
Visual span and change detection in soccer: An expertise study Journal Article
In: Journal of Cognitive Psychology, vol. 23, no. 3, pp. 302–310, 2011.
There is evidence to suggest that sports experts are able to extract more perceptual information from a single fixation than novices when exposed to meaningful tasks that are specific to their field of expertise. In particular, Reingold et al. (2001) showed that chess experts use a larger visual span, with fewer fixations, than their less skilled counterparts. The aim of the present study was to examine whether, in a more complex environment, namely soccer, skilled players also use a larger visual span and fewer fixations than less skilled players when attempting to recognise players' positions. To this end, we combined the gaze-contingent window technique with the change detection paradigm. Results seem to suggest that skilled soccer players do not use a larger visual span than less skilled players. However, skilled soccer players showed significantly fewer fixations of longer duration than their less skilled counterparts, supporting the notion that experts may extract more information from a single glance.
Minglei Chen; Hwa Wei Ko
In: Journal of Research in Reading, vol. 34, no. 2, pp. 232–246, 2011.
This study investigated Chinese children's eye-movement patterns while reading different text genres from a developmental perspective. Eye movements were recorded while children in the second through sixth grades read two expository texts and two narrative texts. Across passages, overall word frequency was not significantly different between the two genres. Results showed that all children had longer fixation durations for low-frequency words. They also had longer fixation durations on content words. These results indicate that children adopted a word-based processing strategy like skilled readers do. However, only older children's rereading times were affected by genre. Overall, eye-movement patterns of older children reported in this study are in accordance with those of skilled Chinese readers, but younger children are more likely to be responsive to word characteristics than to text-level factors when reading a Chinese text.
Ying Chen; Patrick Byrne; J. Douglas Crawford
In: Neuropsychologia, vol. 49, no. 1, pp. 49–60, 2011.
Allocentric cues can be used to encode locations in visuospatial memory, but it is not known how and when these representations are converted into egocentric commands for behaviour. Here, we tested the influence of different memory intervals on reach performance toward targets defined in either egocentric or allocentric coordinates, and then compared this to performance in a task where subjects were implicitly free to choose when to convert from allocentric to egocentric representations. Reach and eye positions were measured using Optotrak and Eyelink Systems, respectively, in fourteen subjects. Our results confirm that egocentric representations degrade over a delay of several seconds, whereas allocentric representations remained relatively stable over the same time scale. Moreover, when subjects were free to choose, they converted allocentric representations into egocentric representations as soon as possible, despite the apparent cost in reach precision in our experimental paradigm. This suggests that humans convert allocentric representations into egocentric commands at the first opportunity, perhaps to optimize motor noise and movement timing in real-world conditions.
Hui-Yan Chiau; Philip Tseng; Jia-Han Su; Ovid J. L. Tzeng; Daisy L. Hung; Neil G. Muggleton; Chi-Hung Juan
Trial type probability modulates the cost of antisaccades Journal Article
In: Journal of Neurophysiology, vol. 106, no. 2, pp. 515–526, 2011.
The antisaccade task, where eye movements are made away from a target, has been used to investigate the flexibility of cognitive control of behavior. Antisaccades usually have longer saccade latencies than prosaccades, the so-called antisaccade cost. Recent studies have shown that this antisaccade cost can be modulated by event probability. This may mean that the antisaccade cost can be reduced, or even reversed, if the probability of surrounding events favors the execution of antisaccades. The probabilities of prosaccades and antisaccades were systematically manipulated by changing the proportion of a certain type of trial in an interleaved pro/antisaccades task. We aimed to disentangle the intertwined relationship between trial type probabilities and the antisaccade cost, with the ultimate goal of elucidating how probabilities of trial types modulate flexible human behavior, as well as the characteristics of such modulation effects. To this end, we examined whether implicit trial type probability can influence saccade latencies and also manipulated the difficulty of cue discriminability to see how effects of trial type probability would change when the demand on visual perceptual analysis was high or low. A mixed-effects model was applied to the analysis to dissect the factors contributing to the modulation effects of trial type probabilities. Our results suggest that the trial type probability is one robust determinant of antisaccade cost. These findings highlight the importance of implicit probability in the flexibility of cognitive control of behavior.
Chelsie L. Cushman; Rebecca L. Johnson
Age-of-acquisition effects in pure alexia Journal Article
In: Quarterly Journal of Experimental Psychology, vol. 64, no. 9, pp. 1726–1742, 2011.
Pure alexia is an acquired reading disorder in which previously literate adults adopt a letter-by-letter processing strategy. Though these individuals display impaired reading, research shows that they are still able to use certain lexical information in order to facilitate visual word processing. The current experiment investigates the role that a word's age of acquisition (AoA) plays in the reading processes of an individual with pure alexia (G.J.) when other lexical variables have been controlled. Results from a sentence reading task in which eye movement patterns were recorded indicated that G.J. shows a strong effect of AoA, where late-acquired words are more difficult to process than early-acquired words. Furthermore, it was observed that the AoA effect is much greater for G.J. than for age-matched control participants. This indicates that patients with pure alexia rely heavily on intact top-down information, supporting the interactive activation model of reading.
Kirsten A. Dalrymple; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone
In: Brain Research, vol. 1367, pp. 265–277, 2011.
Simultanagnosia is a disorder of visual attention, defined as an inability to see more than one object at once. It has been conceived as being due to a constriction of the visual "window" of attention, a metaphor that we examine in the present article. A simultanagnosic patient (SL) and two non-simultanagnosic control patients (KC and ES) described social scenes while their eye movements were monitored. These data were compared to a group of healthy subjects who described the same scenes under the same conditions as the patients, or through an aperture that restricted their vision to a small portion of the scene. Experiment 1 demonstrated that SL showed unusually low proportions of fixations to the eyes in social scenes, which contrasted with all other participants who demonstrated the standard preferential bias toward eyes. Experiments 2 and 3 revealed that when healthy participants viewed scenes through a window that was contingent on where they looked (Experiment 2) or where they moved a computer mouse (Experiment 3), their behavior closely mirrored that of patient SL. These findings suggest that a constricted window of visual processing has important consequences for how simultanagnosic patients explore their world. Our paradigm's capacity to mimic simultanagnosic behaviors while viewing complex scenes implies that it may be a valid way of modeling simultanagnosia in healthy individuals, providing a useful tool for future research. More broadly, our results support the thesis that people fixate the eyes in social scenes because they are informative to the meaning of the scene.
Kirsten A. Dalrymple; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone
In: Cortex, vol. 47, no. 7, pp. 787–799, 2011.
Simultanagnosia is a disorder of visual attention: the inability to see more than one object at one time. Some hypothesize that this is due to a constriction of the visual "window" of attention. Little is known about how simultanagnosics explore complex stimuli and how their behaviour changes with recovery. We monitored the eye movements of simultanagnosic patient SL to see how she scans social scenes shortly after onset of simultanagnosia (Time 1) and after some recovery (Time 2). At Time 1 SL had an abnormally low proportion of fixations to the eyes of the people in the scenes. She made a significantly larger proportion of fixations to the eyes at Time 2. We hypothesized that this change was related to an expansion of her restricted window of attention. Previously we simulated SL's behaviour in healthy subjects by having them view stimuli through a restricted viewing window. We used this simulation paradigm here to test our expanding window hypothesis. Subjects viewing social scenes through a larger window allocated more fixations to the eyes of people in the scenes than subjects viewing scenes through a smaller window, supporting our hypothesis. Recovery in simultanagnosia may be related to the expansion of the restricted attentional window that characterizes the disorder.