All EyeLink Publications
All 11,000+ peer-reviewed EyeLink research publications up until 2022 (with some from early 2023) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
Benjamin T. Vincent
How do we use the past to predict the future in oculomotor search? Journal Article
In: Vision Research, vol. 74, pp. 93–101, 2012.
A variety of findings suggest that when conducting visual search, we can exploit cues that are statistically related to a target's location. But is this the result of heuristic mechanisms or an internal model that tracks the statistics of the environment? Here, connections are made between the two explanations, and four models are assessed to probe the mechanisms underlying prediction in search. Participants conducted a simple gaze-contingent search task with five conditions, each of which consists of different combinations of 1st and 2nd order statistics. People's exploration behaviour adapted to the statistical rules governing target behaviour. Behaviour was most consistent with a model that represents transitions from one location to another, and that makes the underlying assumption that the world is dynamic. This assumption that the world is changeable could not be overridden despite task instruction and nearly 1 h of exposure to unchanging statistics. This means that while people may be suboptimal in some experimental contexts, it may be because their internal mental model makes assumptions that are adaptive in a complex, changeable world.
Björn N. S. Vlaskamp; Anna Schubö
Eye movements during action preparation Journal Article
In: Experimental Brain Research, vol. 216, no. 3, pp. 463–472, 2012.
Looking at actions of others activates representations of similar own actions, that is, the action resonates. This may facilitate or interfere with the actions that one intends to make. We asked whether people promote or block those effects by making eye movements to or away from the actions of others. We investigated gaze behavior with a cup-clinking task: An actor shown on a video grabbed a cup and moved it toward the participant who next grabbed his own cup in the 'same' or in a different, 'complementary', way. In the 'same' condition, participants mostly looked at the place where the actor held the cup. In the 'complementary' condition, gaze behavior was similar at the start of the actor's action. To our surprise, as the action reached completion, participants started to look at the side of the cup that corresponded to the grabbing instruction for their own action. A second experiment showed that this effect grew with delay of the go-signal. This indicates that a reason for the effect may be to support memorizing the instructed action. The bottom line of the study is that passively viewed scenes (passive in the sense that nothing in the observed scene is manipulated by the viewer) are scanned to support preparation of actions that one intends to make. We discuss how this finding relates to action resonance and how it relates to links between representations of actions and objects.
Melissa L. -H. Võ; Jeremy M. Wolfe
When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, pp. 23–41, 2012.
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search.
Christian Vorstius; Ralph Radach; Alan R. Lang
Effects of acute alcohol intoxication on automated processing: evidence from the double-step paradigm Journal Article
In: Journal of Psychopharmacology, vol. 26, no. 2, pp. 262–272, 2012.
Reflexive and voluntary levels of processing have been studied extensively with respect to possible impairments due to alcohol intoxication. This study examined alcohol effects at the 'automated' level of processing essential to many complex visual processing tasks (e.g., reading, visual search) that involve ongoing modifications or reprogramming of well-practiced routines. Data from 30 participants (16 male) were collected in two counterbalanced sessions (alcohol vs. no-alcohol control; mean breath alcohol concentration = 68 mg/dL vs. 0 mg/dL). Eye movements were recorded during a double-step task where 75% of trials involved two target stimuli in rapid succession (inter-stimulus interval [ISI]=40, 70, or 100 ms) so that they could elicit two distinct saccades or eye movements (double steps). On 25% of trials a single target appeared. Results indicated that saccade latencies were longer under alcohol. In addition, the proportion of single-step responses and the mean saccade amplitude (length) of primary saccades decreased significantly with increasing ISI. The key novel finding, however, was that the reprogramming time needed to cancel the first saccade and adjust saccade amplitude was extended significantly by alcohol. The additional time made available by prolonged latencies due to alcohol was not utilized by the saccade programming system to decrease the number of two-step responses. These results represent the first demonstration of specific alcohol-induced programming deficits at the automated level of oculomotor processing.
Chin-An Wang; Susan E. Boehnke; Brian J. White; Douglas P. Munoz
Microstimulation of the monkey superior colliculus induces pupil dilation without evoking saccades Journal Article
In: Journal of Neuroscience, vol. 32, no. 11, pp. 3629–3636, 2012.
The orienting reflex is initiated by a salient stimulus and facilitates quick, appropriate action. It involves a rapid shift of the eyes, head, and attention and other physiological responses such as changes in heart rate and transient pupil dilation. The superior colliculus (SC) is a critical structure in the midbrain that selects incoming stimuli based on saliency and relevance to coordinate orienting behaviors, particularly gaze shifts, but its causal role in pupil dilation remains poorly understood in mammals. Here, we examined the role of the primate SC in the control of pupil dynamics. While requiring monkeys to keep their gaze fixed, we delivered weak electrical microstimulation to the SC, so that saccadic eye movements were not evoked. Pupil size increased transiently after microstimulation of the intermediate SC layers (SCi) and the size of evoked pupil dilation was larger on a dim versus bright background. In contrast, microstimulation of the superficial SC layers did not cause pupil dilation. Thus, the SCi is directly involved not only in shifts of gaze and attention, but also in pupil dilation as part of the orienting reflex, and the function of pupil dilation may be related to increasing visual sensitivity. The shared neural mechanisms suggest that pupil dilation may be associated with covert attention.
Matthew David Weaver; Dane Aronsen; Johan Lauwereyns
A short-lived face alert during inhibition of return Journal Article
In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 510–520, 2012.
In the present study, we explored the role of faces in oculomotor inhibition of return (IOR) using a tightly controlled spatial cuing paradigm. We measured saccadic response latency to targets following peripheral cues that were either faces or objects of lesser sociobiological salience. A recurring influence from cue content was observed across numerous methodological variations. Faces versus other object cues briefly reduced saccade latencies toward subsequently presented targets, independently of attentional allocation and IOR. The results suggest a short-lived priming effect or social facilitation effect from the mere presence of a face. We further showed that saccadic responses were unaffected by face versus nonface objects in double-cue presentations. Our findings indicate that peripheral face cues do not influence attentional orienting processes involved in IOR any differently from other objects in a tightly controlled oculomotor IOR paradigm.
Andrea Weber; Matthew W. Crocker
On the nature of semantic constraints on lexical access Journal Article
In: Journal of Psycholinguistic Research, vol. 41, no. 3, pp. 195–214, 2012.
We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, 'flower') when instructed to click on low-frequency targets (e.g., Bluse, 'blouse'). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents.
Sarah J. White; Adrian Staub
The distribution of fixation durations during reading: Effects of stimulus quality Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 3, pp. 603–617, 2012.
Participants' eye movements were recorded as they read single sentences presented normally, presented entirely in faint text, or presented normally except for a single faint word. Fixations were longer when the entire sentence was faint than when the sentence was presented normally. In addition, fixations were much longer on a single faint word embedded in normal text, compared to when the entire sentence was faint. The primary aim of the study was to examine the influence of stimulus quality on the distribution of fixation durations. Ex-Gaussian fitting revealed that stimulus quality affected the mean of the Normal component, but in contrast to results from single-word tasks (Plourde & Besner, 1997), stimulus quality did not affect the exponential component, regardless of whether one or all words were faint. The results also contrast with the finding (Staub, White, Drieghe, Hollway, & Rayner, 2010) that the word frequency effect on fixation durations is an effect on both of the critical distributional parameters. These findings are argued to have implications for the interpretation of the role of stimulus quality in word recognition, and for models of eye movement control in reading.
Veronica Whitford; Debra Titone
Second-language experience modulates first- and second-language word frequency effects: Evidence from eye movement measures of natural paragraph reading Journal Article
In: Psychonomic Bulletin & Review, vol. 19, no. 1, pp. 73–80, 2012.
We used eye movement measures of first-language (L1) and second-language (L2) paragraph reading to investigate whether the degree of current L2 exposure modulates the relative size of L1 and L2 frequency effects (FEs). The results showed that bilinguals displayed larger L2 than L1 FEs during both early- and late-stage eye movement measures, which are taken to reflect initial lexical access and postlexical access, respectively. Moreover, the magnitude of L2 FEs was inversely related to current L2 exposure, such that lower levels of L2 exposure led to larger L2 FEs. In contrast, during early-stage reading measures, bilinguals with higher levels of current L2 exposure showed larger L1 FEs than did bilinguals with lower levels of L2 exposure, suggesting that increased L2 experience modifies the earliest stages of L1 lexical access. Taken together, the findings are consistent with implicit learning accounts (e.g., Monsell, 1991), the weaker links hypothesis (Gollan, Montoya, Cera, Sandoval, Journal of Memory and Language, 58:787–814, 2008), and current bilingual visual word recognition models (e.g., the bilingual interactive activation model plus [BIA+]; Dijkstra & van Heuven, Bilingualism: Language and Cognition, 5:175–197, 2002). Thus, amount of current L2 exposure is a key determinant of FEs and, thus, lexical activation, in both the L1 and L2.
Jan M. Wiener; Christoph Hölscher; Simon Büchner; Lars Konieczny
Gaze behaviour during space perception and spatial decision making Journal Article
In: Psychological Research, vol. 76, no. 6, pp. 713–729, 2012.
A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screen-shots of choice points taken in large virtual environments. Each screen-shot depicted alternative path options. In Experiment 1, participants had to decide between them in order to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take as if following a guided route. Subsequently they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 & 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features and that gaze bias effects are a general phenomenon of visual decision making.
Alison M. Trude; Sarah Brown-Schmidt
Talker-specific perceptual adaptation during online speech perception Journal Article
In: Language and Cognitive Processes, vol. 27, no. 7-8, pp. 979–1001, 2012.
Despite the ubiquity of between-talker differences in accent and dialect, little is known about how listeners adjust to this source of variability as language is perceived in real time. In three experiments, we examined whether, and when, listeners can use specific knowledge of a particular talker's accent during on-line speech processing. Listeners were exposed to the speech of two talkers, a male who had an unfamiliar regional dialect of American English, in which the /æ/ vowel is raised to /ei/ only before /g/ (e.g., bag is pronounced /beig/), and a female talker without the dialect. In order to examine how knowledge of a particular talker's accent influenced language processing, we examined listeners' interpretation of unaccented words such as back and bake in contexts that included a competitor like bag. If interpretation processes are talker-specific, the pattern of competition from bag should vary depending on how that talker pronounces the competitor word. In all three experiments, listeners rapidly used their knowledge of how the talker would have pronounced bag to either rule out or include bag as a temporary competitor. Providing a cue to talker identity prior to the critical word strengthened these effects. These results are consistent with views of language processing in which multiple sources of information, including previous experience with the current talker and contextual cues, are rapidly integrated during lexical activation and selection processes.
Hans A. Trukenbrod; Ralf Engbert
Eye movements in a sequential scanning task: Evidence for distributed processing Journal Article
In: Journal of Vision, vol. 12, no. 1, pp. 1–12, 2012.
Current models of eye movement control are derived from theories assuming serial processing of single items or from theories based on parallel processing of multiple items at a time. This issue has persisted because most investigated paradigms generated data compatible with both serial and parallel models. Here, we study eye movements in a sequential scanning task, where stimulus n indicates the position of the next stimulus n + 1. We investigate whether eye movements are controlled by sequential attention shifts when the task requires serial order of processing. Our measures of distributed processing in the form of parafoveal-on-foveal effects, long-range modulations of target selection, and skipping saccades provide evidence against models strictly based on serial attention shifts. We conclude that our results lend support to parallel processing as a strategy for eye movement control.
Jie-Li Tsai; Reinhold Kliegl; Ming Yan
Parafoveal semantic information extraction in traditional Chinese reading Journal Article
In: Acta Psychologica, vol. 141, no. 1, pp. 17–23, 2012.
Semantic information extraction from the parafovea has been reported only in simplified Chinese for a special subset of characters and its generalizability has been questioned. This study uses traditional Chinese, which differs from simplified Chinese in visual complexity and in mapping semantic forms, to demonstrate access to parafoveal semantic information during reading of this script. Preview duration modulates various types (identical, phonological, and unrelated) of parafoveal information extraction. Parafoveal semantic extraction is more elusive in English; therefore, we conclude that such effects in Chinese are presumably caused by substantial cross-language differences from alphabetic scripts. The property of Chinese characters carrying rich lexical information in a small region provides the possibility of semantic extraction in the parafovea.
Annelie Tuinman; Holger Mitterer; Anne Cutler
Resolving ambiguity in familiar and unfamiliar casual speech Journal Article
In: Journal of Memory and Language, vol. 66, no. 4, pp. 530–544, 2012.
In British English, the phrase Canada aided can sound like Canada raided if the speaker links the two vowels at the word boundary with an intrusive /r/. There are subtle phonetic differences between an onset /r/ and an intrusive /r/, however. With cross-modal priming and eye-tracking, we examine how native British English listeners and non-native (Dutch) listeners deal with the lexical ambiguity arising from this language-specific connected speech process. Together the results indicate that the presence of /r/ initially activates competing words for both listener groups; however, the native listeners rapidly exploit the phonetic cues and achieve correct lexical selection. In contrast, the Dutch-native advanced L2 listeners to English failed to recover from the /r/-induced competition, and failed to match native performance in either task. The /r/-intrusion process, which adds a phoneme to speech input, thus causes greater difficulty for L2 listeners than connected-speech processes which alter or delete phonemes.
Marco Turi; David C. Burr
Spatiotopic perceptual maps in humans: Evidence from motion adaptation Journal Article
In: Proceedings of the Royal Society B: Biological Sciences, vol. 279, no. 1740, pp. 3091–3097, 2012.
How our perceptual experience of the world remains stable and continuous despite the frequent repositioning of our eyes remains very much a mystery. One possibility is that our brain actively constructs a spatiotopic representation of the world, which is anchored in external--or at least head-centred--coordinates. In this study, we show that the positional motion aftereffect (the change in apparent position after adaptation to motion) is spatially selective in external rather than retinal coordinates, whereas the classic motion aftereffect (the illusion of motion after prolonged inspection of a moving source) is selective in retinotopic coordinates. The results provide clear evidence for a spatiotopic map in humans: one which can be influenced by image motion.
Yusuke Uchida; Daisuke Kudoh; Akira Murakami; Masaaki Honda; Shigeru Kitazawa
Origins of superior dynamic visual acuity in baseball players: Superior eye movements or superior image processing? Journal Article
In: PLoS ONE, vol. 7, no. 2, pp. e31530, 2012.
Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in athletes than in non-athletes, and the better DVA of athletes has been attributed to a better ability to track moving objects. In the present study, we hypothesized that the better DVA of athletes is partly derived from better perception of moving images on the retina through some kind of perceptual learning. To test this hypothesis, we quantitatively measured DVA in baseball players and non-athletes using moving Landolt rings in two conditions. In the first experiment, the participants were allowed to move their eyes (free-eye-movement conditions), whereas in the second they were required to fixate on a fixation target (fixation conditions). The athletes displayed significantly better DVA than the non-athletes in the free-eye-movement conditions. However, there was no significant difference between the groups in the fixation conditions. These results suggest that the better DVA of athletes is primarily due to an improved ability to track moving targets with their eyes, rather than to improved perception of moving images on the retina.
Yoshiyuki Ueda; Asuka Komiya
Cultural adaptation of visual attention: Calibration of the oculomotor control system in accordance with cultural scenes Journal Article
In: PLoS ONE, vol. 7, no. 11, pp. e50282, 2012.
Previous studies have found that Westerners are more likely than East Asians to attend to central objects (i.e., analytic attention), whereas East Asians are more likely than Westerners to focus on background objects or context (i.e., holistic attention). Recently, it has been proposed that the physical environment of a given culture influences the cultural form of scene cognition, although the underlying mechanism is yet unclear. This study examined whether the physical environment influences oculomotor control. Participants saw culturally neutral stimuli (e.g., a dog in a park) as a baseline, followed by Japanese or United States scenes, and finally culturally neutral stimuli again. The results showed that participants primed with Japanese scenes were more likely to move their eyes within a broader area and they were less likely to fixate on central objects compared with the baseline, whereas there were no significant differences in the eye movements of participants primed with American scenes. These results suggest that culturally specific patterns in eye movements are partly caused by the physical environment.
Yoshiyuki Ueda; Jun Saiki
Characteristics of eye movements in 3-D object learning: Comparison between within-modal and cross-modal object recognition Journal Article
In: Perception, vol. 41, no. 11, pp. 1289–1298, 2012.
Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning - visual recognition) and cross-modal learning (eg visual learning - haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.
Ryan J. Vaden; Nathan L. Hutcheson; Lesley A. McCollum; Jonathan Kentros; Kristina M. Visscher
Older adults, unlike younger adults, do not modulate alpha power to suppress irrelevant information Journal Article
In: NeuroImage, vol. 63, no. 3, pp. 1127–1133, 2012.
This study examines the neural mechanisms through which younger and older adults ignore irrelevant information, a process that is necessary to effectively encode new memories. Some age-related memory deficits have been linked to a diminished ability to dynamically gate sensory input, resulting in problems inhibiting the processing of distracting stimuli. Whereas oscillatory power in the alpha band (8–12 Hz) over visual cortical areas is thought to dynamically gate sensory input in younger adults, it is not known whether older adults use the same mechanism to gate out sensory input. Here we identified a task in which both older and younger adults could suppress the processing of irrelevant sensory stimuli, allowing us to use electroencephalography (EEG) to explore the neural activity associated with suppression of visual processing. As expected, we found that the younger adults' suppression of visual processing was correlated with robust modulation of alpha oscillatory power. However, older adults did not modulate alpha power to suppress processing of visual information. These results demonstrate that suppression of alpha power is not necessary to inhibit the processing of distracting stimuli in older adults, suggesting the existence of alternative strategies for suppressing irrelevant, potentially distracting information.
Josselin Gautier; O. Le Meur
A time-dependent saliency model combining center and depth biases for 2D and 3D viewing conditions Journal Article
In: Cognitive Computation, vol. 4, no. 2, pp. 141–156, 2012.
The role of binocular disparity in the deployment of visual attention is examined in this paper. To address this point, we compared eye tracking data recorded while observers viewed natural images in 2D and 3D conditions. The influence of disparity on saliency, center and depth biases is first studied. Results show that visual exploration is affected by the introduction of binocular disparity. In particular, participants tend to look first at closer areas in the 3D condition and then direct their gaze to more widespread locations. Beside this behavioral analysis, we assess the extent to which state-of-the-art models of bottom-up visual attention predict where observers looked in both viewing conditions. To improve their ability to predict salient regions, low-level features as well as higher-level foreground/background cues are examined. Results indicate that, following the initial centering response, the foreground feature plays an active role in the early but also middle instants of attention deployment. Importantly, this influence is more pronounced in stereoscopic conditions. It supports the notion of a quasi-instantaneous bottom-up saliency modulated by higher figure/ground processing. Beyond depth information itself, the foreground cue might constitute an early process of “selection for action”. Finally, we propose a time-dependent computational model to predict saliency on still pictures. The proposed approach combines low-level visual features, center and depth biases. Its performance outperforms state-of-the-art models of bottom-up attention.
Trafton Drew; Corbin Cunningham; Jeremy M. Wolfe
When and why might a computer-aided detection (CAD) system interfere with visual search? An eye-tracking study Miscellaneous
Rationale and Objectives: Computer-aided detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: Forty-seven naive observers in two studies were asked to search for a target, embedded in 1/f^2.4 noise, while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, whereas other observers completed the study without CAD. In experiment 1, the CAD system's primary function was to tell observers where the target might be. In experiment 2, CAD provided information about target identity. Results: In experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22) = 4.74, P < .001), but there was also a substantial cost. Targets that were not marked by the CAD system were missed more frequently than equivalent targets in no-CAD blocks of the experiment (t(22) = 7.02, P < .001). Experiment 2 showed no behavioral benefit from CAD, but also no significant cost on sensitivity to unmarked targets (t(22) = 0.6
Jean Duchesne; Vincent Bouvier; Julien Guillemé; Olivier A. Coubard
Maxwellian eye fixation during natural scene perception Journal Article
In: The Scientific World Journal, pp. 1–12, 2012.
When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, being lower in experts than in novices. In Experiment 2, two participants underwent fixed-time free viewing of natural scenes and of their scrambled versions while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or bottom-up processes.
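For reference, the Maxwell distribution invoked here (the same form Maxwell derived for molecular speeds) gives the probability density of an amplitude $a \ge 0$ with a single scale parameter $\sigma$; this is the standard textbook form, not necessarily the paper's notation:

```latex
f(a) = \sqrt{\frac{2}{\pi}}\,\frac{a^{2}}{\sigma^{3}}
       \exp\!\left(-\frac{a^{2}}{2\sigma^{2}}\right), \qquad a \ge 0,
\qquad \langle a \rangle = 2\sigma\sqrt{\frac{2}{\pi}}.
```

Since the mean is proportional to $\sigma$, the reported expert–novice difference in mean amplitude corresponds to a difference in the single scale parameter, with the shape of the distribution unchanged.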
Sarah L. Eagleman; Valentin Dragoi
Image sequence reactivation in awake V4 networks Journal Article
In: Proceedings of the National Academy of Sciences, vol. 109, no. 47, pp. 19450–19455, 2012.
In the absence of sensory input, neuronal networks are far from being silent. Whether spontaneous changes in ongoing activity reflect previous sensory experience or stochastic fluctuations in brain activity is not well understood. Here we describe reactivation of stimulus-evoked activity in awake visual cortical networks. We found that continuous exposure to randomly flashed image sequences induces reactivation in macaque V4 cortical networks in the absence of visual stimulation. This reactivation of previously evoked activity is stimulus-specific, occurs only in the same temporal order as the original response, and strengthens with increased stimulus exposures. Importantly, cells exhibiting significant reactivation carry more information about the stimulus than cells that do not reactivate. These results demonstrate a surprising degree of experience-dependent plasticity in visual cortical networks as a result of repeated exposure to unattended information. We suggest that awake reactivation in visual cortex may underlie perceptual learning by passive stimulus exposure.
Kurt Debono; Alexander C. Schütz; Karl R. Gegenfurtner
Illusory bending of pursuit target Journal Article
In: Vision Research, vol. 57, pp. 51–60, 2012.
To pursue a small target moving in front of a drifting background, motion vectors from the target need to be integrated and segmented from those belonging to the background. Smooth pursuit eye movements typically integrate target and background directions initially and after some time shift towards the veridical target direction. The perceived target direction on the other hand is generally stable over time: the target is perceived to move in the same direction as long as the motion information maintains the same properties over time. If illusory target motion is observed, this tends to be shifted away from the background. Here we investigated how initial motion integration and segmentation of such stimuli are modulated by direction cues. We presented a small pursuit target moving along a straight path, in front of a background moving in a different direction. Without a direction cue, initial pursuit was biased towards the background direction before shifting towards the veridical target direction. The target's perceived direction on the other hand was near veridical. A cue in the background direction increased initial pursuit integration but also caused perception to behave in a similar way: the target initially had an illusory motion component in the background direction and after about 200 ms it was perceived to curve towards its veridical direction. This illusion shows that during the initial process of segmenting the direction of a pursuit target from irrelevant background motion, both pursuit and perception can be erroneously influenced by a direction cue and integrate the cued background motion. Both modalities corrected this initial integration error as more information about the target became available.
Joost C. Dessing; Patrick A. Byrne; Armin Abadeh; J. Douglas Crawford
Hand-related rather than goal-related source of gaze-dependent errors in memory-guided reaching Journal Article
In: Journal of Vision, vol. 12, no. 11, pp. 1–8, 2012.
Mechanisms for visuospatial cognition are often inferred directly from errors in behavioral reports of remembered target direction. For example, gaze-centered target representations for reach were first inferred from reach overshoots of target location relative to gaze. Here, we report evidence for the hypothesis that these gaze-dependent reach errors stem predominantly from misestimates of hand rather than target position, as was assumed in all previous studies. Subjects showed typical gaze-dependent overshoots in complete darkness, but these errors were entirely suppressed by continuous visual feedback of the finger. This manipulation could not affect target representations, so the suppressed gaze-dependent errors must have come from misestimates of hand position, likely arising in a gaze-dependent transformation of hand position signals into visual coordinates. This finding has broad implications for any task involving localization of visual targets relative to unseen limbs, in both healthy individuals and patient populations, and shows that response-related transformations cannot be ignored when deducing the sources of gaze-related errors.
Joost C. Dessing; Frédéric P. Rey; Peter J. Beek
Gaze fixation improves the stability of expert juggling Journal Article
In: Experimental Brain Research, vol. 216, no. 4, pp. 635–644, 2012.
Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (so-called gaze-through). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers were included (n = 5) for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled significantly less variably when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain.
Christel Devue; Artem V. Belopolsky; Jan Theeuwes
Oculomotor guidance and capture by irrelevant faces Journal Article
In: PLoS ONE, vol. 7, no. 4, pp. e34598, 2012.
Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.
Leandro Luigi Di Stasi; Rebekka Renner; Andrés Catena; José J. Cañas; Boris M. Velichkovsky; Sebastian Pannasch
Towards a driver fatigue test based on the saccadic main sequence: A partial validation by subjective report data Journal Article
In: Transportation Research Part C: Emerging Technologies, vol. 21, no. 1, pp. 122–133, 2012.
Developing a valid measurement of mental fatigue remains a big challenge and would be beneficial for various application areas, such as the improvement of road traffic safety. In the present study we examined influences of mental fatigue on the dynamics of saccadic eye movements. Based on previous findings, we propose that among amplitude and duration of saccades, the peak velocity of saccadic eye movements is particularly sensitive to changes in mental fatigue. Ten participants completed a fixation task before and after 2 h of driving in a virtual simulation environment, as well as after a rest break of fifteen minutes. Driving and rest break were assumed to directly influence the level of mental fatigue and were evaluated using subjective ratings and eye movement indices. According to the subjective ratings, mental fatigue was highest after driving but decreased after the rest break. The peak velocity of saccadic eye movements decreased after driving while the duration of saccades increased, but no effects of the rest break were observed in the saccade parameters. We conclude that saccadic eye movement parameters, particularly the peak velocity, are sensitive indicators of mental fatigue. According to these findings, peak velocity analysis represents a valid on-line measure for the detection of mental fatigue, providing the basis for the development of new vigilance screening tools to prevent accidents in several application domains.
Adele Diederich; Annette Schomburg; Hans Colonius
Saccadic reaction times to audiovisual stimuli show effects of oscillatory phase reset Journal Article
In: PLoS ONE, vol. 7, no. 10, pp. e44910, 2012.
Initiating an eye movement towards a suddenly appearing visual target is faster when an accessory auditory stimulus occurs in close spatiotemporal vicinity. Such facilitation of saccadic reaction time (SRT) is well-documented, but the exact neural mechanisms underlying the crossmodal effect remain to be elucidated. From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing. Specifically, it is assumed that the phase of an ongoing neural oscillation is shifted due to the occurrence of a sensory stimulus so that, across trials, phase values become highly consistent (phase reset). If one can identify the phase an oscillation is reset to, it is possible to predict when temporal windows of high and low excitability will occur. However, in behavioral experiments the pre-stimulus phase will be different on successive repetitions of the experimental trial, and average performance over many trials will show no signs of the modulation. Here we circumvent this problem by repeatedly presenting an auditory accessory stimulus followed by a visual target stimulus with a temporal delay varied in steps of 2 ms. Performing a discrete time series analysis on SRT as a function of the delay, we provide statistical evidence for the existence of distinct peak spectral components in the power spectrum. These frequencies, although varying across participants, fall within the beta and gamma range (20 to 40 Hz) of neural oscillatory activity observed in neurophysiological studies of multisensory integration. Some evidence for high-theta/alpha activity was found as well. Our results are consistent with the phase reset hypothesis and demonstrate that it is amenable to testing by purely psychophysical methods. Thus, any theory of multisensory processes that connects specific brain states with patterns of saccadic responses should be able to account for traces of oscillatory activity in observable behavior.
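The kind of spectral analysis described, treating mean saccadic RT as a time series over the audio-visual delay (sampled every 2 ms) and looking for peak frequencies in its power spectrum, can be sketched as follows. This is a minimal, hypothetical sketch, not the authors' actual discrete time series pipeline; the function name and the synthetic 30 Hz example are assumptions for illustration:

```python
import numpy as np

def srt_power_spectrum(srt, dt_ms=2.0):
    """Power spectrum of saccadic RT as a function of the audio-visual
    delay, sampled every dt_ms milliseconds. Peaks in the spectrum would
    suggest oscillatory modulation of excitability."""
    srt = np.asarray(srt, dtype=float)
    srt = srt - srt.mean()                    # remove the DC component
    power = np.abs(np.fft.rfft(srt)) ** 2
    freqs_hz = np.fft.rfftfreq(srt.size, d=dt_ms / 1000.0)
    return freqs_hz, power

# Synthetic example: a 30 Hz modulation riding on a 250 ms baseline,
# delays from 0 to 198 ms in 2 ms steps (100 samples).
delays_ms = np.arange(0, 200, 2)
srt = 250 + 5 * np.sin(2 * np.pi * 30 * delays_ms / 1000.0)
freqs, power = srt_power_spectrum(srt)
peak_hz = freqs[np.argmax(power)]             # recovers the 30 Hz component
```

With 100 samples at a 2 ms step, the frequency resolution is 5 Hz, so a genuine 20–40 Hz component would stand out as one or two adjacent bins above the noise floor.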
Michael D. Dodd; Amanda Balzer; Carly M. Jacobs; Michael W. Gruszczynski; Kevin B. Smith; John R. Hibbing
The political left rolls with the good and the political right confronts the bad: Connecting physiology and cognition to preferences Journal Article
In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 367, pp. 640–649, 2012.
We report evidence that individual-level variation in people's physiological and attentional responses to aversive and appetitive stimuli are correlated with broad political orientations. Specifically, we find that greater orientation to aversive stimuli tends to be associated with right-of-centre and greater orientation to appetitive (pleasing) stimuli with left-of-centre political inclinations. These findings are consistent with recent evidence that political views are connected to physiological predispositions but are unique in incorporating findings on variation in directed attention that make it possible to understand additional aspects of the link between the physiological and the political.
Isabel Dombrowe; Mieke Donk; Hayley Wright; Christian N. L. Olivers; Glyn W. Humphreys
The contribution of stimulus-driven and goal-driven mechanisms to feature-based selection in patients with spatial attention deficits Journal Article
In: Cognitive Neuropsychology, vol. 29, no. 3, pp. 249–274, 2012.
When people search a display for a target defined by a unique feature, fast saccades are predominantly stimulus-driven whereas slower saccades are primarily goal-driven. Here we use this dissociative pattern to assess whether feature-based selection in patients with lateralized spatial attention deficits is impaired in stimulus-driven processing, goal-driven processing, or both. A group of patients suffering from extinction or neglect after parietal damage, and a group of healthy, age-matched controls, were instructed to make a saccade to a uniquely oriented target line which was presented simultaneously with a differently oriented distractor line. We systematically varied the salience of the target and distractor by changing the orientation of background elements, and used a time-based model to extract stimulus-driven (salience) and goal-driven (target set) components of selection. The results show that the patients exhibited reduced stimulus-driven processing only in the contralesional hemifield, while goal-driven processing was reduced across both hemifields.
Nick Donnelly; Katherine Cornes; Tamaryn Menneer
An examination of the processing capacity of features in the Thatcher illusion Journal Article
In: Attention, Perception, and Psychophysics, vol. 74, no. 7, pp. 1475–1487, 2012.
Detection of the Thatcher illusion (Thompson, Perception, 9:483-484, 1980) is widely upheld as being dependent on configural processing (e.g., Bartlett & Searcy, Cognitive Psychology, 25:281-316, 1993; Boutsen, Humphreys, Praamstra, & Warbrick, NeuroImage, 32:352-367, 2006; Donnelly & Hadwin, Visual Cognition, 10:1001-1017, 2003; Leder & Bruce, Quarterly Journal of Experimental Psychology, 53A:513-536, 2000; Lewis, Perception, 30:769-774, 2001; Maurer, Grand, & Mondloch, Trends in Cognitive Sciences, 6:255-260, 2002; Stürzel & Spillmann, Perception, 29:937-942, 2000). Given that supercapacity processing accompanies configural processing (see Wenger & Townsend, 2001), supercapacity processing should occur in the processing of Thatcherised upright faces. The purpose of this study was to test for evidence that the grotesqueness of upright Thatcherised faces results from supercapacity processing. Two tasks were employed: categorisation of a single face as odd or normal, and a same/different task for sequentially presented faces. The stimuli were typical faces, partially Thatcherised faces (either eyes or mouth inverted) and fully Thatcherised faces. All of the faces were presented upright. The data from both experiments were analysed using mean response times and a number of capacity measures (capacity coefficient, the Miller and Grice inequalities, and the proportional-hazards ratio). The results of both experiments demonstrated some evidence of a redundancy gain for the redundant-target condition over the single-target condition, especially in the response times in Experiment 1. However, there was very limited evidence, in either experiment, that the redundancy gains resulted from supercapacity processing. We concluded that the oddity signalled by inversion of eyes and mouths does not arise from positive interdependencies between these features.
Tim Donovan; Trevor J. Crawford; Damien Litchfield
Negative priming for target selection with saccadic eye movements Journal Article
In: Experimental Brain Research, vol. 222, no. 4, pp. 483–494, 2012.
We conducted a series of experiments to determine whether negative priming is used in the process of target selection for a saccadic eye movement. The key questions addressed the circumstances in which the negative priming of an object takes place, and the distinction between spatial and object-based effects. Experiment 1 revealed that after fixating a target (cricket ball) amongst an array of semantically related distracters, saccadic eye movements in a subsequent display were faster to the target than to the distracters or new objects, irrespective of location. The main finding was that of the facilitation of a recent target, not the inhibition of a recent distracter or location. Experiment 2 replicated this finding by using silhouettes of objects for selection that is based on feature shape. Error rates were associated with distracters with high target-shape similarity; therefore, Experiment 3 presented silhouettes of animals using distracters with low target-shape similarity. The pattern of results was similar to that of Experiment 2, with clear evidence of target facilitation rather than the inhibition of distracters. Experiments 4 and 5 introduced a distracter together with the target into the probe display, to generate a level of competitive selection in the probe condition. In these circumstances, clear evidence of spatial inhibition at the location of the previous distracters emerged. We discuss the implications for our understanding of selective attention and consider why it is essential to supplement response time data with the analysis of eye movement behaviour in spatial negative priming paradigms.
Michael Dorr; Eleonora Vig; Erhardt Barth
Eye movement prediction and variability on natural video data sets Journal Article
In: Visual Cognition, vol. 20, no. 4-5, pp. 495–514, 2012.
We here study the predictability of eye movements when viewing high-resolution natural videos. We use three recently published gaze data sets that contain a wide range of footage, from scenes of almost still-life character to professionally made, fast-paced advertisements and movie trailers. Inter-subject gaze variability differs significantly between data sets, with variability being lowest for the professional movies. We then evaluate three state-of-the-art saliency models on these data sets. A model that is based on the invariants of the structure tensor and that combines very generic, sparse video representations with machine learning techniques outperforms the two reference models; performance is further improved for two data sets when the model is extended to a perceptually inspired colour space. Finally, a combined analysis of gaze variability and predictability shows that eye movements on the professionally made movies are the most coherent (due to implicit gaze-guidance strategies of the movie directors), yet the least predictable (presumably due to the frequent cuts). Our results highlight the need for standardized benchmarks to comparatively evaluate eye movement prediction algorithms.
Nathan Faivre; Vincent Berthet; Sid Kouider
Nonconscious influences from emotional faces: A comparison of visual crowding, masking, and continuous flash suppression Journal Article
In: Frontiers in Psychology, vol. 3, pp. 129, 2012.
In the study of nonconscious processing, different methods have been used in order to render stimuli invisible. While their properties are well described, the level at which they disrupt nonconscious processing remains unclear. Yet, such accurate estimation of the depth of nonconscious processes is crucial for a clear differentiation between conscious and nonconscious cognition. Here, we compared the processing of facial expressions rendered invisible through gaze-contingent crowding (GCC), masking, and continuous flash suppression (CFS), three techniques relying on different properties of the visual system. We found that both pictures and videos of happy faces suppressed from awareness by GCC were processed such as to bias subsequent preference judgments. The same stimuli manipulated with visual masking and CFS did not significantly bias preference judgments, although they were processed such as to elicit perceptual priming. A significant difference in preference bias was found between GCC and CFS, but not between GCC and masking. These results provide new insights regarding the nonconscious impact of emotional features, and highlight the need for rigorous comparisons between the different methods employed to prevent perceptual awareness.
Nathan Faivre; Sylvain Charron; Paul Roux; Stephane Lehericy; Sid Kouider
Nonconscious emotional processing involves distinct neural pathways for pictures and videos Journal Article
In: Neuropsychologia, vol. 50, pp. 3736–3744, 2012.
Facial expressions are known to impact observers' behavior, even when they are not consciously identifiable. Relying on visual crowding, a perceptual phenomenon whereby peripheral faces become undiscriminable, we show that participants exposed to happy vs. neutral crowded faces rated the pleasantness of subsequent neutral targets according to the facial expression's valence. Using functional magnetic resonance imaging (fMRI) along with psychophysiological interaction analysis, we investigated the neural determinants of this nonconscious preference bias, either induced by static (i.e., pictures) or dynamic (i.e., videos) facial expressions. We found that while static expressions activated primarily the ventral visual pathway (including task-related functional connectivity between the fusiform face area and the amygdala), dynamic expressions triggered the dorsal visual pathway (i.e., posterior parietal cortex) and the substantia innominata, a structure that is contiguous with the dorsal amygdala. As temporal cues are known to improve the processing of visible facial expressions, the absence of ventral activation we observed with crowded videos questions the capacity to integrate facial features and facial motions without awareness. Nevertheless, both static and dynamic facial expressions activated the hippocampus and the orbitofrontal cortex, suggesting that nonconscious preference judgments may arise from the evaluation of emotional context and the computation of aesthetic evaluation.
Claudia Felser; Ian Cunnings
Processing reflexives in a second language: The timing of structural and discourse-level constraints Journal Article
In: Applied Psycholinguistics, vol. 33, no. 3, pp. 571–603, 2012.
We report the results from two eye-movement monitoring experiments examining the processing of reflexive pronouns by proficient German-speaking learners of second language (L2) English. Our results show that the nonnative speakers initially tried to link English argument reflexives to a discourse-prominent but structurally inaccessible antecedent, thereby violating binding condition A. Our native speaker controls, in contrast, showed evidence of applying condition A immediately during processing. Together, our findings show that L2 learners' initial focusing on a structurally inaccessible antecedent cannot be due to first language influence and is also independent of whether the inaccessible antecedent c-commands the reflexive. This suggests that unlike native speakers, nonnative speakers of English initially attempt to interpret reflexives through discourse-based coreference assignment rather than syntactic binding.
Claudia Felser; Ian Cunnings; Claire Batterham; Harald Clahsen
The timing of island effects in nonnative sentence processing Journal Article
In: Studies in Second Language Acquisition, vol. 34, no. 1, pp. 67–98, 2012.
Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension.
Is there a common control mechanism for anti-saccades and reading eye movements? Evidence from distributional analyses Journal Article
In: Vision Research, vol. 57, pp. 35–50, 2012.
In the saccadic literature, the voluntary control of eye movement involves inhibiting automatic saccadic plans. In contrast, the dominant view in reading is that linguistic processes trigger saccade planning. The present study explores the possibility of a common control mechanism, in which cognitively driven responses compete to inhibit automatic, perceptually driven saccade plans. A probabilistic model is developed to account for empirical distributions of saccadic response time in anti-saccade tasks (Studies 1 and 2) and fixation duration in reading and reading-like tasks (Studies 3 and 4). In all cases the distributions can be decomposed into a perceptually based component and a component sensitive to cognitive demands. Parametric similarities among the models strongly suggest a shared cognitive control mechanism between reading and other voluntary saccadic tasks.
Heather J. Ferguson
Eye movements reveal rapid concurrent access to factual and counterfactual interpretations of the world Journal Article
In: Quarterly Journal of Experimental Psychology, vol. 65, no. 5, pp. 939–961, 2012.
Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.
Thomas Ellenbuerger; Arnaud Boutin; Stefan Panzer; Yannick Blandin; Lennart Fischer; Jörg Schorer; Charles H. Shea
Observational training in visual half-fields and the coding of movement sequences Journal Article
In: Human Movement Science, vol. 31, no. 6, pp. 1436–1448, 2012.
An experiment was conducted to determine if gating information to different hemispheres during observational training facilitates the development of a movement representation. Participants were randomly assigned to one of three observation groups that differed in terms of the type of visual half-field presentation during observation (right visual half-field (RVF), left visual half-field (LVF), or in central position (CE)), and a control group (CG). On Day 1, visual stimuli indicating the pattern of movement to be produced were projected to the respective hemisphere. The task participants observed was a 1300 ms spatial-temporal pattern of elbow flexions and extensions. On Day 2, participants physically performed the task in an inter-manual transfer paradigm with a retention test and two contralateral transfer tests: a mirror transfer test, which required the same pattern of muscle activation and limb joint angles, and a non-mirror transfer test, which reinstated the visual-spatial pattern of the sequence. The results demonstrated that participants of the CE, RVF and LVF groups showed superior retention and transfer performance compared to participants of the CG. Participants of the CE and LVF groups demonstrated an advantage when the visual-spatial coordinates were reinstated compared to the motor coordinates, while participants of the RVF group did not show specific transfer patterns. These results are discussed in the context of hemisphere specialization.
Stephani Foraker; Gregory L. Murphy
Polysemy in sentence comprehension: Effects of meaning dominance Journal Article
In: Journal of Memory and Language, vol. 67, no. 4, pp. 407–425, 2012.
Words like church are polysemous, having two related senses (a building and an organization). Three experiments investigated how polysemous senses are represented and processed during sentence comprehension. On one view, readers retrieve an underspecified, core meaning, which is later specified more fully with contextual information. On another view, readers retrieve one or more specific senses. In a reading task, context that was neutral or biased towards a particular sense preceded a polysemous word. Disambiguating material consistent with only one sense followed, in a second sentence (Experiment 1) or the same sentence (Experiments 2 and 3). Reading the disambiguating material was faster when it was consistent with that context, and dominant senses were committed to more strongly than subordinate senses. Critically, following neutral context, the continuation was read more quickly when it selected the dominant sense, and the degree of sense dominance partially explained the reading time advantage. Similarity of the senses also affected reading times. Across experiments, we found that sense selection may not be completed immediately following a polysemous word but is completed at a sentence boundary. Overall, the results suggest that readers select an individual sense when reading a polysemous word, rather than a core meaning.
Tom Foulsham; Richard Dewhurst; Marcus Nyström; Halszka Jarodzka; Roger Johansson; Geoffrey Underwood; Kenneth Holmqvist
Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach Journal Article
In: Journal of Eye Movement Research, vol. 5, no. 3, pp. 1–14, 2012.
Complex stimuli and tasks elicit particular eye movement sequences. Previous research has focused on comparing between these scanpaths, particularly in memory and imagery research where it has been proposed that observers reproduce their eye movements when recognizing or imagining a stimulus. However, it is not clear whether scanpath similarity is related to memory performance and which particular aspects of the eye movements recur. We therefore compared eye movements in a picture memory task, using a recently proposed comparison method, MultiMatch, which quantifies scanpath similarity across multiple dimensions including shape and fixation duration. Scanpaths were more similar when the same participant's eye movements were compared from two viewings of the same image than between different images or different participants viewing the same image. In addition, fixation durations were similar within a participant and this similarity was associated with memory performance.
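The core idea of multi-dimensional scanpath comparison can be illustrated with a toy sketch. This is not the MultiMatch implementation: the real method simplifies scanpaths and aligns sequences of unequal length before comparing them, whereas the function below assumes equal-length scanpaths and compares them position by position. The function name and normalization constants are illustrative assumptions.

```python
import math

def scanpath_similarity(fixations_a, fixations_b, screen_diag=1000.0):
    """Toy similarity in the spirit of MultiMatch's shape and duration
    dimensions. Each scanpath is a list of (x, y, duration) fixations.
    Assumes equal-length scanpaths; real MultiMatch aligns them first.
    Similarity is 1.0 for identical scanpaths, lower for dissimilar ones."""
    assert len(fixations_a) == len(fixations_b)

    # Saccade vectors between consecutive fixations
    def vectors(fix):
        return [(x2 - x1, y2 - y1)
                for (x1, y1, _), (x2, y2, _) in zip(fix, fix[1:])]

    va, vb = vectors(fixations_a), vectors(fixations_b)

    # Shape: length of the vector difference, normalized by 2x screen diagonal
    shape_diffs = [math.hypot(ax - bx, ay - by) / (2 * screen_diag)
                   for (ax, ay), (bx, by) in zip(va, vb)]
    shape_sim = 1.0 - sum(shape_diffs) / len(shape_diffs)

    # Duration: absolute difference normalized by the larger duration
    dur_diffs = [abs(da - db) / max(da, db)
                 for (_, _, da), (_, _, db) in zip(fixations_a, fixations_b)]
    dur_sim = 1.0 - sum(dur_diffs) / len(dur_diffs)

    return {"shape": shape_sim, "duration": dur_sim}
```

Comparing a scanpath with itself yields 1.0 on both dimensions; changing only one fixation's duration lowers the duration dimension while leaving shape untouched, which is the property that lets the method separate *where* the eyes went from *how long* they stayed.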
Steven L. Franconeri; Jason M. Scimeca; Jessica C. Roth; Sarah A. Helseth; Lauren E. Kahn
Flexible visual processing of spatial relationships Journal Article
In: Cognition, vol. 122, no. 2, pp. 210–227, 2012.
Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly, there is little work on the perceptual mechanisms that allow us to flexibly represent arbitrary spatial relationships, e.g., between objects in a novel room, or the elements within a map, graph or diagram. We describe two classes of mechanism that might allow such judgments. In the simultaneous class, both objects are selected concurrently. In contrast, we propose a sequential class, where objects are selected individually over time. We argue that this latter mechanism is more plausible even though it violates our intuitions. We demonstrate that shifts of selection do occur during spatial relationship judgments that feel simultaneous, by tracking selection with an electrophysiological correlate. We speculate that static structure across space may be encoded as a dynamic sequence across time. Flexible visual spatial relationship processing may serve as a case study of more general visual relation processing beyond space, to other dimensions such as size or numerosity.
Steven Frisson; Mary Wakefield
Psychological essentialist reasoning and perspective taking during reading: A donkey is not a zebra, but a plate can be a clock Journal Article
In: Memory and Cognition, vol. 40, no. 2, pp. 297–310, 2012.
In an eyetracking study, we examined whether readers use psychological essentialist reasoning and perspective taking online. Stories were presented in which an animal or an artifact was transformed into another animal (e.g., a donkey into a zebra) or artifact (e.g., a plate into a clock). According to psychological essentialism, the essence of the animal did not change in these stories, while the transformed artifact would be thought to have changed categories. We found evidence that readers use this kind of reasoning online: When reference was made to the transformed animal, the nontransformed term ("donkey") was preferred, but the opposite held for the transformed artifact ("clock" was read faster than "plate"). The immediacy of the effect suggests that this kind of reasoning is employed automatically. Perspective taking was examined within the same stories by the introduction of a novel story character. This character, who was naïve about the transformation, commented on the transformed animal or artifact. If the reader were to take this character's perspective immediately and exclusively for reference solving, then only the transformed term ("zebra" or "clock") would be felicitous. However, the results suggested that while this character's perspective could be taken into account, it seems difficult to completely discard one's own perspective at the same time.
Patrick Plummer; Keith Rayner
Effects of parafoveal word length and orthographic features on initial fixation landing positions in reading Journal Article
In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 950–963, 2012.
Previous research has demonstrated that readers use word length and word boundary information in targeting saccades into upcoming words while reading. Previous studies have also revealed that the initial landing positions for fixations on words are affected by parafoveal processing. In the present study, we examined the effects of word length and orthographic legality on targeting saccades into parafoveal words. Long (8-9 letters) and short (4-5 letters) target words, which were matched on lexical frequency and initial letter trigram, were paired and embedded into identical sentence frames. The gaze-contingent boundary paradigm (Rayner, 1975) was used to manipulate the parafoveal information available to the reader before direct fixation on the target word. The parafoveal preview was either identical to the target word or was a visually similar nonword. The nonword previews contained orthographically legal or orthographically illegal initial letters. The results showed that orthographic preprocessing of the word to the right of fixation affected eye movement targeting, regardless of word length. Additionally, the lexical status of an upcoming saccade target in the parafovea generally did not influence preprocessing.
Elsie Premereur; Wim Vanduffel; Peter Janssen
Local field potential activity associated with temporal expectations in the macaque lateral intraparietal area Journal Article
In: Journal of Cognitive Neuroscience, vol. 24, no. 6, pp. 1314–1330, 2012.
Oscillatory brain activity is attracting increasing interest in cognitive neuroscience. Numerous EEG, magnetoencephalography, and local field potential (LFP) measurements have related cognitive functions to different types of brain oscillations, but the functional significance of these rhythms remains poorly understood. Despite its proven value, LFP activity has not been extensively tested in the macaque lateral intraparietal area (LIP), which has been implicated in a wide variety of cognitive control processes. We recorded action potentials and LFPs in area LIP during delayed eye movement tasks and during a passive fixation task, in which the time schedule was fixed so that temporal expectations about task-relevant cues could be formed. LFP responses in the gamma band discriminated reliably between saccade targets and distractors inside the receptive field (RF). Alpha and beta responses were much less strongly affected by the presence of a saccade target, but rose sharply in the waiting period before the go signal. Surprisingly, conditions without visual stimulation of the LIP RF evoked robust LFP responses in every frequency band (most prominently in those below 50 Hz), precisely time-locked to the expected time of stimulus onset in the RF. These results indicate that in area LIP, oscillations in the LFP, which reflect synaptic input and local network activity, are tightly coupled to the temporal expectation of task-relevant cues.
Elsie Premereur; Wim Vanduffel; Pieter R. Roelfsema; Peter Janssen
Frontal eye field microstimulation induces task-dependent gamma oscillations in the lateral intraparietal area Journal Article
In: Journal of Neurophysiology, vol. 108, no. 5, pp. 1392–1402, 2012.
Macaque frontal eye fields (FEF) and the lateral intraparietal area (LIP) are high-level oculomotor control centers that have been implicated in the allocation of spatial attention. Electrical microstimulation of macaque FEF elicits functional magnetic resonance imaging (fMRI) activations in area LIP, but no study has yet investigated the effect of FEF microstimulation on LIP at the single-cell or local field potential (LFP) level. We recorded spiking and LFP activity in area LIP during weak, subthreshold microstimulation of the FEF in a delayed-saccade task. FEF microstimulation caused a highly time- and frequency-specific, task-dependent increase in gamma power in retinotopically corresponding sites in LIP: FEF microstimulation produced a significant increase in LIP gamma power when a saccade target appeared and remained present in the LIP receptive field (RF), whereas less specific increases in alpha power were evoked by FEF microstimulation for saccades directed away from the RF. Stimulating FEF with weak currents had no effect on LIP spike rates or on the gamma power during memory saccades or passive fixation. These results provide the first evidence for task-dependent modulations of LFPs in LIP caused by top-down stimulation of FEF. Since the allocation and disengagement of spatial attention in visual cortex have been associated with increases in gamma and alpha power, respectively, the effects of FEF microstimulation on LIP are consistent with the known effects of spatial attention.
Jessica M. Price; Anthony J. Sanford
Reading in healthy ageing: The influence of information structuring in sentences Journal Article
In: Psychology and Aging, vol. 27, no. 2, pp. 529–540, 2012.
In three experiments, we investigated the cognitive effects of linguistic prominence to establish whether focus plays a similar or different role in modulating language processing in healthy ageing. Information structuring through the use of cleft sentences is known to increase the processing efficiency of anaphoric references to elements contained within a marked focus structure. It also protects these elements from becoming suppressed in the wake of subsequent information, suggesting selective mechanisms of enhancement and suppression. In Experiment 1 (using self-paced reading), we found that focus enhanced (faster) integration for anaphors referring to words contained within the scope of focus, but suppressed (slower) integration for anaphors referring to words contained outside the scope of focus; in some cases, the effects were larger in older adults. In Experiment 2 (using change detection), we showed that older adults relied more on the linguistic structure to enhance change detection when the changed word was in focus. In Experiment 3 (using delayed probe recognition and eye-tracking), we found that older adults recognized probes more accurately when they referred to elements within the scope of focus than when they referred to elements outside the scope of focus. These results indicate that older adults' ability to selectively attend to or suppress concepts in a marked focus structure is preserved.
Jessica C. Roth; Steven L. Franconeri
Asymmetric coding of categorical spatial relations in both language and vision Journal Article
In: Frontiers in Psychology, vol. 3, pp. 464, 2012.
Describing certain types of spatial relationships between a pair of objects requires that the objects are assigned different "roles" in the relation, e.g., "A is above B" is different than "B is above A." This asymmetric representation places one object in the "target" or "figure" role and the other in the "reference" or "ground" role. Here we provide evidence that this asymmetry may be present not just in spatial language, but also in perceptual representations. More specifically, we describe a model of visual spatial relationship judgment where the designation of the target object within such a spatial relationship is guided by the location of the "spotlight" of attention. To demonstrate the existence of this perceptual asymmetry, we cued attention to one object within a pair by briefly previewing it, and showed that participants were faster to verify the depicted relation when that object was the linguistic target. Experiment 1 demonstrated this effect for left-right relations, and Experiment 2 for above-below relations. These results join several other types of demonstrations in suggesting that perceptual representations of some spatial relations may be asymmetrically coded, and further suggest that the location of selective attention may serve as the mechanism that guides this asymmetry.
Annie Roy-Charland; Jean Saint-Aubin; Raymond M. Klein; Gregory H. MacLean; Amanda Lalande; Ashley Bélanger
Eye movements when reading: The importance of the word to the left of fixation Journal Article
In: Visual Cognition, vol. 20, no. 3, pp. 328–355, 2012.
In reading, it is well established that word processing can begin in the parafovea while the eyes are fixating the previous word. However, much less is known about the processing of information to the left of fixation. In two experiments, this issue was explored by combining a gaze-contingent display procedure preventing parafoveal preview and a letter detection task. All words were displayed as a series of xs until the reader fixated them, thereby preventing forward parafoveal processing, yet enabling backward parafoveal or postview processing. Results from both experiments revealed that readers were able to detect a target letter embedded in a word that was skipped. In those cases, the letter could only have been identified in postview (to the left of fixation), and detection rate decreased as the distance between the target letter and the eyes' landing position increased. Most importantly, for those skipped words, the typical missing-letter effect was observed with more omissions for target letters embedded in function than in content words. This can be taken as evidence that readers can extract basic prelexical information, such as the presence of a letter, in the parafoveal area to the left of fixation. Implications of these results are discussed in relation to models of eye movement control in reading and also in relation to models of the missing-letter effect.
Octavio Ruiz; Michael A. Paradiso
Macaque V1 representations in natural and reduced visual contexts: Spatial and temporal properties and influence of saccadic eye movements Journal Article
In: Journal of Neurophysiology, vol. 108, no. 1, pp. 324–333, 2012.
Vision in natural situations is different from the paradigms generally used to study vision in the laboratory. In natural vision, stimuli usually appear in a receptive field as the result of saccadic eye movements rather than suddenly flashing into view. The stimuli themselves are rich with meaningful and recognizable objects rather than simple abstract patterns. In this study we examined the sensitivity of neurons in macaque area V1 to saccades and to complex background contexts. Using a variety of visual conditions, we find that natural visual response patterns are unique. Compared with standard laboratory situations, in more natural vision V1 responses have longer latency, slower time course, delayed orientation selectivity, higher peak selectivity, and lower amplitude. Furthermore, the influences of saccades and background type (complex picture vs. uniform gray) interact to give a distinctive, and presumably more natural, response pattern. While in most of the experiments natural images were used as background, we find that similar synthetic unnatural background stimuli produce nearly identical responses (i.e., complexity matters more than "naturalness"). These findings have important implications for our understanding of vision in more natural situations. They suggest that with the saccades used to explore complex images, visual context ("surround effects") would have a far greater effect on perception than in standard experiments with stimuli flashed on a uniform background. Perceptual thresholds for contrast and orientation should also be significantly different in more natural situations.
Jason Rupp; Mario Dzemidzic; Tanya Blekher; John West; Siu L. Hui; Joanne Wojcieszek; Andrew J. Saykin; David A. Kareken; Tatiana M. Foroud
Comparison of vertical and horizontal saccade measures and their relation to gray matter changes in premanifest and manifest Huntington disease Journal Article
In: Journal of Neurology, vol. 259, no. 2, pp. 267–276, 2012.
Saccades are a potentially important biomarker of Huntington disease (HD) progression, as saccadic abnormalities can be detected both cross-sectionally and longitudinally. Although vertical saccadic impairment was reported decades ago, recent studies have focused on horizontal saccades. This study investigated antisaccade (AS) and memory guided saccade (MG) impairment in both the horizontal and vertical directions in individuals with the disease-causing CAG expansion (CAG+; n = 74), using those without the expansion (CAG-; n = 47) as controls. Percentage of errors, latency, and variability of latency were used to measure saccadic performance. We evaluated the benefits of measuring saccades in both directions by comparing effect sizes of horizontal and vertical measures, and by investigating the correlation of saccadic measures with underlying gray matter loss. Consistent with previous studies, AS and MG impairments were detected prior to the onset of manifest disease. Furthermore, the largest effect sizes were found for vertical saccades. A subset of participants (12 CAG-, 12 premanifest CAG+, 7 manifest HD) underwent magnetic resonance imaging, and an automated parcellation and segmentation procedure was used to extract thickness and volume measures in saccade-generating and inhibiting regions. These measures were then tested for associations with saccadic impairment. Latency of vertical AS was significantly associated with atrophy in the left superior frontal gyrus, left inferior parietal lobule, and bilateral caudate nuclei. This study suggests an important role for measuring vertical saccades. Vertical saccades may possess more statistical power than horizontal saccades, and the latency of vertical AS is associated with gray matter loss in both cortical and subcortical regions important in saccade function.
Yoni Pertzov; Mia Yuan Dong; Muy Cheng Peich; Masud Husain
Forgetting what was where: The fragility of object-location binding Journal Article
In: PLoS ONE, vol. 7, no. 10, pp. e48214, 2012.
Although we frequently take advantage of memory for objects' locations in everyday life, how an object's identity is correctly bound to its location in memory remains poorly understood. Here we examine how information about object identity, location, and, crucially, object-location associations are differentially susceptible to forgetting over variable retention intervals and memory loads. In our task, participants relocated objects to their remembered locations using a touchscreen. When participants mislocalized objects, their reports were clustered around the locations of other objects in the array, rather than occurring randomly. These 'swap' errors could not be attributed to a simple failure to remember either the identity or the location of the objects, but rather appeared to arise from a failure to bind object identity and location in memory. Moreover, such binding failures contributed significantly to the decline in localization performance over retention time. We conclude that when objects are forgotten they do not disappear completely from memory; rather, it is the links between identity and location that are prone to be broken over time.
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen
Measuring and modeling attentional dwell time Journal Article
In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1029–1046, 2012.
Attentional dwell time (AD) defines our inability to perceive spatially separate events when they occur in rapid succession. In the standard AD paradigm, subjects must identify two target stimuli presented briefly at different peripheral locations with a varied stimulus onset asynchrony (SOA). The AD effect is seen as a long-lasting impediment in reporting the second target, culminating at SOAs of 200–500 ms. Here, we present the first quantitative computational model of the effect: a theory of temporal visual attention. The model is based on the neural theory of visual attention (Bundesen, Habekost, & Kyllingsbæk, Psychological Review, 112, 291–328, 2005) and introduces the novel assumption that a stimulus retained in visual short-term memory takes up the visual processing resources used to encode stimuli into memory. Resources are thus locked and cannot process subsequent stimuli until the stimulus in memory has been recoded, which explains the long-lasting AD effect. The model is used to explain results from two experiments providing detailed individual data from both a standard AD paradigm and an extension with varied exposure duration of the target stimuli. Finally, we discuss new predictions by the model.
Mary A. Peterson; Laura Cacciamani; Morgan D. Barense; Paige E. Scalf
The perirhinal cortex modulates V2 activity in response to the agreement between part familiarity and configuration familiarity Journal Article
In: Hippocampus, vol. 22, no. 10, pp. 1965–1977, 2012.
Research has demonstrated that the perirhinal cortex (PRC) represents complex object-level feature configurations, and participates in familiarity versus novelty discrimination. Barense et al. [(in press) Cerebral Cortex, 22:11, doi:10.1093/cercor/bhr347] postulated that, in addition, the PRC modulates part familiarity responses in lower-level visual areas. We used fMRI to measure activation in the PRC and V2 in response to silhouettes presented peripherally while participants maintained central fixation and performed an object recognition task. There were three types of silhouettes: Familiar Configurations portrayed real-world objects; Part-Rearranged Novel Configurations created by spatially rearranging the parts of the familiar configurations; and Control Novel Configurations in which both the configuration and the ensemble of parts comprising it were novel. For right visual field (RVF) presentation, BOLD responses revealed a significant linear trend in bilateral BA 35 of the PRC (highest activation for Familiar Configurations, lowest for Part-Rearranged Novel Configurations, with Control Novel Configurations in between). For left visual field (LVF) presentation, a significant linear trend was found in a different area (bilateral BA 38, temporal pole) in the opposite direction (Part-Rearranged Novel Configurations highest, Familiar Configurations lowest). These data confirm that the PRC is sensitive to the agreement in familiarity between the configuration level and the part level. As predicted, V2 activation mimicked that of the PRC: for RVF presentation, activity in V2 was significantly higher in the left hemisphere for Familiar Configurations than for Part-Rearranged Novel Configurations, and for LVF presentation, the opposite effect was found in right hemisphere V2. We attribute these patterns in V2 to feedback from the PRC because receptive fields in V2 encompass parts but not configurations. 
These results reveal two new aspects of PRC function: (1) it is sensitive to the congruency between the familiarity of object configurations and the parts comprising those configurations and (2) it likely modulates familiarity responses in visual area V2.
Matthew F. Peterson; Miguel P. Eckstein
Looking just below the eyes is optimal across face recognition tasks Journal Article
In: Proceedings of the National Academy of Sciences, vol. 109, no. 48, pp. E3314–E3323, 2012.
When viewing a human face, people often look toward the eyes. Maintaining good eye contact carries significant social value and allows for the extraction of information about gaze direction. When identifying faces, humans also look toward the eyes, but it is unclear whether this behavior is solely a byproduct of the socially important eye movement behavior or whether it has functional importance in basic perceptual tasks. Here, we propose that gaze behavior while determining a person's identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in these evolutionarily important perceptual tasks. We show that humans move their eyes to locations that maximize perceptual performance determining the identity, gender, and emotional state of a face. These optimal fixation points, which differ moderately across tasks, are predicted correctly by a Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea toward the visual periphery (foveated ideal observer). Neither a model that disregards the foveated nature of the visual system and makes fixations on the local region with maximal information, nor a model that makes center-of-gravity fixations, correctly predicts human eye movements. Extension of the foveated ideal observer framework to a large database of real-world faces shows that the optimality of these strategies generalizes across the population. These results suggest that the human visual system optimizes face recognition performance through guidance of eye movements not only toward but, more precisely, just below the eyes.
Matthieu Philippe; Anne-Emmanuelle Priot; Phillippe Fuchs; Corinne Roumes
Vergence tracking: A tool to assess oculomotor performance in stereoscopic displays Journal Article
In: Journal of Eye Movement Research, vol. 5, no. 2, pp. 1–8, 2012.
Oculomotor conflict induced between the accommodative and vergence components in stereoscopic displays represents an unnatural viewing condition. There is now some evidence that stereoscopic viewing may induce discomfort and changes in oculomotor parameters. The present study sought to measure oculomotor performance during stereoscopic viewing. Using a 3D stereo setup and an eye-tracker, vergence responses were measured during 20-min exposure to a virtual visual target oscillating in depth, which participants had to track. The results showed a significant decline in the amplitude of the in-depth oscillatory vergence response over time. We propose that eye-tracking provides a useful tool to objectively assess the time-varying alterations of the vergence system when using stereoscopic displays.
Catherine I. Phillips; Christopher R. Sears; Penny M. Pexman
An embodied semantic processing effect on eye gaze during sentence reading Journal Article
In: Language and Cognition, vol. 4, no. 2, pp. 99–114, 2012.
The present research examines the effects of body-object interaction (BOI) on eye gaze behaviour in a reading task. BOI measures perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g. cat) and a set of low BOI words (e.g. sun) were selected, matched on imageability and concreteness (as well as other lexical and semantic variables). Facilitatory BOI effects were observed: gaze durations and total fixation durations were shorter for high BOI words, and participants made fewer regressions to high BOI words. The results provide evidence of a BOI effect on non-manual responses and in a situation that taps normal reading processes. We discuss how the results (a) suggest that stored motor information (as measured by BOI ratings) is relevant to lexical semantics, and (b) are consistent with an embodied view of cognition (Wilson 2002).
Jessica M. Phillips; Stefan Everling
Neural activity in the macaque putamen associated with saccades and behavioral outcome Journal Article
In: PLoS ONE, vol. 7, no. 12, pp. e51596, 2012.
It is now widely accepted that the basal ganglia nuclei form segregated, parallel loops with neocortical areas. The prevalent view is that the putamen is part of the motor loop, which receives inputs from sensorimotor areas, whereas the caudate, which receives inputs from frontal cortical eye fields and projects via the substantia nigra pars reticulata to the superior colliculus, belongs to the oculomotor loop. Tracer studies in monkeys and functional neuroimaging studies in human subjects, however, also suggest a potential role for the putamen in oculomotor control. To investigate the role of the putamen in saccadic eye movements, we recorded single neuron activity in the caudal putamen of two rhesus monkeys while they alternated between short blocks of pro- and anti-saccades. In each trial, the instruction cue was provided after the onset of the peripheral stimulus, thus the monkeys could either generate an immediate response to the stimulus based on the internal representation of the rule from the previous trial, or alternatively, could await the visual rule-instruction cue to guide their saccadic response. We found that a subset of putamen neurons showed saccade-related activity, that the preparatory mode (internally- versus externally-cued) influenced the expression of task-selectivity in roughly one third of the task-modulated neurons, and further that a large proportion of neurons encoded the outcome of the saccade. These results suggest that the caudal putamen may be part of the neural network for goal-directed saccades, wherein the monitoring of saccadic eye movements, context and performance feedback may be processed together to ensure optimal behavioural performance and outcomes are achieved during ongoing behaviour.
Elmar H. Pinkhardt; Reinhart Jurgens; Dorothée Lulé; Johanna Heimrath; Albert C. Ludolph; Wolfgang Becker; Jan Kassubek
Eye movement impairments in Parkinson's disease: Possible role of extradopaminergic mechanisms Journal Article
In: BMC Neurology, vol. 12, no. 5, pp. 1–8, 2012.
Background: The basal ganglia (BG) are thought to play an important role in the control of eye movements. Accordingly, the broad variety of subtle oculomotor alterations described in Parkinson's disease (PD) is generally attributed to dysfunction of the BG dopaminergic system. However, the present study suggests that dopamine substitution is much less effective in improving oculomotor performance than it is in restoring skeletomotor abilities. Methods: We investigated reactive, visually guided saccades (RS), smooth pursuit eye movements (SPEM), and rapidly left-right alternating voluntary gaze shifts (AVGS) by video-oculography in 34 PD patients receiving oral dopaminergic medication (PD-DA), 14 patients with deep brain stimulation of the nucleus subthalamicus (DBS-STN), and 23 control subjects (CTL); in addition, we performed a thorough review of recent literature regarding therapeutic effects on oculomotor performance in PD. By switching deep brain stimulation off and on in the PD-DBS patients, we achieved swift changes between their therapeutic states without the delays of dopamine withdrawal. In addition, participants underwent neuropsychological testing. Results: Patients exhibited the well-known deficits such as increased saccade latency, reduced SPEM gain, and reduced frequency and amplitude of AVGS. Across patients, none of the investigated oculomotor parameters correlated with UPDRS III, whereas there was a negative correlation between SPEM gain and susceptibility to interference (Stroop score). Of the observed deficiencies, DBS-STN slightly improved AVGS frequency but neither AVGS amplitude nor SPEM or RS performance. Conclusions: We conclude that the impairment of SPEM in PD results from a cortical, conceivably non-dopaminergic dysfunction, whereas patients' difficulty in rapidly executing AVGS might be related to their BG dysfunction.
Ebrahim Pishyareh; Mehdi Tehrani-Doost; Javad Mahmoudi-Gharaei; Anahita Khorrami; Mitra Joudi; Mehrnoosh Ahmadi
Attentional bias towards emotional scenes in boys with attention deficit hyperactivity disorder Journal Article
In: Iranian Journal of Psychiatry, vol. 7, no. 2, pp. 93–96, 2012.
OBJECTIVE: Children with attention-deficit/hyperactivity disorder (ADHD) react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate the visual orientation of children with ADHD towards paired emotional scenes. METHOD: Thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented with paired emotional and neutral scenes in the four following categories: pleasant-neutral, pleasant-unpleasant, unpleasant-neutral, and neutral-neutral. Meanwhile, their visual orientations towards these pictures were evaluated using an eye tracking system. The number and duration of first fixations and the duration of first gaze were compared between the two groups using MANOVA. The performance of each group in the different categories was also analyzed using the Friedman test. RESULTS: With regard to duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, children with ADHD spent less time on pleasant pictures than the normal group while viewing pleasant-neutral and unpleasant-pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were viewing unpleasant-neutral pairs (P<0.01). CONCLUSION: Based on the findings of this study, it could be concluded that children with ADHD attend to unpleasant conditions more than normal children, which leads to their emotional reactivity.
Irina Pivneva; Caroline Palmer; Debra Titone
Inhibitory control and L2 proficiency modulate bilingual language production: Evidence from spontaneous monologue and dialogue speech Journal Article
In: Frontiers in Psychology, vol. 3, pp. 57, 2012.
Bilingual language production requires that speakers recruit inhibitory control (IC) to optimally balance the activation of more than one linguistic system when they produce speech. Moreover, the amount of IC necessary to maintain an optimal balance is likely to vary across individuals as a function of second language (L2) proficiency and inhibitory capacity, as well as the demands of a particular communicative situation. Here, we investigate how these factors relate to bilingual language production across monologue and dialogue spontaneous speech. In these tasks, 42 English–French and French–English bilinguals produced spontaneous speech in their first language (L1) and their L2, with and without a conversational partner. Participants also completed a separate battery that assessed L2 proficiency and inhibitory capacity. The results showed that L2 vs. L1 production was generally more effortful, as was dialogue vs. monologue speech production although the clarity of what was produced was higher for dialogues vs. monologues. As well, language production effort significantly varied as a function of individual differences in L2 proficiency and inhibitory capacity. Taken together, the overall pattern of findings suggests that both increased L2 proficiency and inhibitory capacity relate to efficient language production during spontaneous monologue and dialogue speech.
Michael Plöchl; José P. Ossandón; Peter König
Combining EEG and eye tracking: Identification, characterization, and correction of eye movement artifacts in electroencephalographic data Journal Article
In: Frontiers in Human Neuroscience, vol. 6, pp. 278, 2012.
Eye movements introduce large artifacts into electroencephalographic (EEG) recordings and thus render data analysis difficult or even impossible. Trials contaminated by eye movement and blink artifacts have to be discarded; hence in standard EEG paradigms subjects are required to fixate on the screen. To overcome this restriction, several correction methods, including regression and blind source separation, have been proposed. Yet no automated standard procedure has been established. By simultaneously recording eye movements and 64-channel EEG during a guided eye movement paradigm, we investigate and review the properties of eye movement artifacts, including corneo-retinal dipole changes, saccadic spike potentials, and eyelid artifacts, and study their interrelations during different types of eye and eyelid movements. In concordance with earlier studies, our results confirm that these artifacts arise from different independent sources and that, depending on electrode site, gaze direction, and choice of reference, these sources contribute differently to the measured signal. We assess the respective implications for artifact correction methods and compare the performance of two prominent approaches, namely linear regression and independent component analysis (ICA). We show and discuss that, due to the independence of eye artifact sources, regression-based correction methods inevitably over- or under-correct individual artifact components, while ICA is in principle suited to address such mixtures of different types of artifacts. Finally, we propose an algorithm which uses eye tracker information to objectively identify eye-artifact-related ICA components (ICs) in an automated manner. In the data presented here, the algorithm performed very similarly to human experts when those were given both the topographies of the ICs and their respective activations in a large number of trials.
Moreover, it performed more reliably and was almost twice as effective as human experts when those had to base their decision on IC topographies only. Furthermore, a receiver operating characteristic (ROC) analysis demonstrated an optimal balance of false positives and false negatives at an area under the curve (AUC) of more than 0.99. Removing the automatically detected ICs from the data resulted in removal or substantial suppression of ocular artifacts, including microsaccadic spike potentials, while the relevant neural signal remained unaffected. In conclusion, the present work aims at a better understanding of individual eye movement artifacts, their interrelations, and the respective implications for eye artifact correction. Additionally, the proposed ICA procedure provides a tool for optimized detection and correction of eye movement-related artifact components.
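The idea of using eye-tracker information to flag ocular ICs can be illustrated with a variance-ratio criterion: an IC whose activation is much more variable during saccades than during fixations is likely eye-related. Below is a minimal numpy sketch of such a criterion; the threshold value and exact decision rule are illustrative assumptions, not the published algorithm's parameters.

```python
import numpy as np

def flag_eye_artifact_ics(ic_activations, saccade_mask, threshold=1.1):
    """Flag ICs whose activation variance during saccades exceeds that
    during fixations by more than `threshold` (illustrative criterion).

    ic_activations : (n_ics, n_samples) array of IC time courses
    saccade_mask   : boolean (n_samples,) array, True during saccades,
                     derived from simultaneously recorded eye tracking
    Returns (flags, ratios): boolean flags and the variance ratios.
    """
    var_sacc = ic_activations[:, saccade_mask].var(axis=1)
    var_fix = ic_activations[:, ~saccade_mask].var(axis=1)
    ratios = var_sacc / var_fix
    return ratios > threshold, ratios
```

A neural IC should have a ratio near 1 (its variance does not depend on oculomotor state), while an ocular IC's ratio rises sharply with saccade-locked activity.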
M. Victoria Puig; Earl K. Miller
The role of prefrontal dopamine D1 receptors in the neural mechanisms of associative learning Journal Article
In: Neuron, vol. 74, no. 5, pp. 874–886, 2012.
Dopamine is thought to play a major role in learning. However, while dopamine D1 receptors (D1Rs) in the prefrontal cortex (PFC) have been shown to modulate working memory-related neural activity, their role in the cellular basis of learning is unknown. We recorded activity from multiple electrodes while injecting the D1R antagonist SCH23390 in the lateral PFC as monkeys learned visuomotor associations. Blocking D1Rs impaired learning of novel associations and decreased cognitive flexibility but spared performance of already familiar associations. This suggests a greater role for prefrontal D1Rs in learning new, rather than performing familiar, associations. There was a corresponding greater decrease in neural selectivity and increase in alpha and beta oscillations in local field potentials for novel than for familiar associations. Our results suggest that weak stimulation of D1Rs observed in aging and psychiatric disorders may impair learning and PFC function by reducing neural selectivity and exacerbating neural oscillations associated with inattention and cognitive deficits.
Braden A. Purcell; Pauline K. Weigand; Jeffrey D. Schall
Supplementary eye field during visual search: Salience, cognitive control, and performance monitoring Journal Article
In: Journal of Neuroscience, vol. 32, no. 30, pp. 10273–10285, 2012.
How the supplementary eye field (SEF) contributes to visual search is unknown. Inputs from cortical and subcortical structures known to represent visual salience suggest that SEF may serve as an additional node in this network. This hypothesis was tested by recording action potentials and local field potentials (LFPs) in two monkeys performing an efficient pop-out visual search task. Target selection modulation, tuning width, and response magnitude of spikes and LFP in SEF were compared with those in frontal eye field. Surprisingly, only ~2% of SEF neurons and ~8% of SEF LFP sites selected the location of the search target. The absence of salience in SEF may be due to an absence of appropriate visual afferents, which suggests that these inputs are a necessary anatomical feature of areas representing salience. We also tested whether SEF contributes to overcoming the automatic tendency to respond to a primed color when the target identity switches during priming of pop-out. Very few SEF neurons or LFP sites modulated in association with performance deficits following target switches. However, a subset of SEF neurons and LFPs exhibited strong modulation following erroneous saccades to a distractor. Altogether, these results suggest that SEF plays a limited role in controlling ongoing visual search behavior, but may play a larger role in monitoring search performance.
Leanne Quigley; Andrea L. Nelson; Jonathan Carriere; Daniel Smilek; Christine Purdon
The effects of trait and state anxiety on attention to emotional images: An eye-tracking study Journal Article
In: Cognition and Emotion, vol. 26, no. 8, pp. 1390–1411, 2012.
Attentional biases for threatening stimuli have been implicated in the development of anxiety disorders. However, little is known about the relative influences of trait and state anxiety on attentional biases. This study examined the effects of trait and state anxiety on attention to emotional images. Low, mid, and high trait anxious participants completed two trial blocks of an eye-tracking task. Participants viewed image pairs consisting of one emotional (threatening or positive) and one neutral image while their eye movements were recorded. Between trial blocks, participants underwent an anxiety induction. Primary analyses examined the effects of trait and state anxiety on the proportion of viewing time on emotional versus neutral images. State anxiety was associated with increased attention to threatening images for participants, regardless of trait anxiety. Furthermore, when in a state of anxiety, relative to a baseline condition, durations of initial gaze and average fixation were longer on threat versus neutral images. These findings were specific to the threatening images; no anxiety-related differences in attention were found with the positive images. The implications of these results for future research, models of anxiety-related information processing, and clinical interventions for anxiety are discussed.
Federico Raimondo; Juan E. Kamienkowski; Mariano Sigman; Diego Fernandez Slezak
CUDAICA: GPU optimization of infomax-ICA EEG analysis Journal Article
In: Computational Intelligence and Neuroscience, vol. 2012, pp. 1–8, 2012.
In recent years, Independent Component Analysis (ICA) has become a standard method for identifying relevant dimensions of data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, which makes its use for online data analysis, as required in brain-computer interfaces, almost prohibitive. We show a roughly 25-fold speed-up of ICA at almost no cost (a fast video card). EEG data, which consist of many independent signals repeated across multiple channels, are very well suited for processing with the vector processors included in graphics units. We profiled the implementation of this algorithm and detected two main types of operations responsible for the processing bottleneck, together taking almost 80% of computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra functions with the standard CUBLAS routines provided by GPU manufacturers does not increase performance, due to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, obtains a 25x performance increase for the ICA calculation.
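The hot spots this profiling identifies can be seen in a single infomax update step. Below is a minimal numpy sketch of one natural-gradient infomax ICA batch update (Bell-Sejnowski form, logistic nonlinearity); it illustrates where the dominant matrix products arise and is not the CUDAICA implementation itself.

```python
import numpy as np

def infomax_batch_update(W, X, lr=1e-3):
    """One natural-gradient infomax ICA step (logistic nonlinearity).
    The two matrix-matrix products below are the kind of operation
    that profiling attributes most of ICA's runtime to.

    W : (n, n) current unmixing matrix
    X : (n, B) data batch (n channels, B samples)
    """
    n, B = X.shape
    U = W @ X                             # hot spot 1: unmix the batch
    Y = 1.0 - 2.0 / (1.0 + np.exp(-U))    # logistic score function
    grad = B * np.eye(n) + Y @ U.T        # hot spot 2: (n x B) @ (B x n)
    return W + (lr / B) * grad @ W        # natural-gradient update
```

Because these products are embarrassingly parallel over matrix elements, they map well onto GPU vector processors, whereas issuing them as many small CUBLAS calls pays a per-call kernel launch overhead.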
Tara Rastgardani; Mathias Abegg; Jason J. S. Barton
The inter-trial spatial biases of stimuli and goals in saccadic programming Journal Article
In: Journal of Eye Movement Research, vol. 7, no. 4, pp. 1–7, 2012.
Prior studies have shown an 'alternate antisaccade-goal bias', in that the saccadic landing points of antisaccades were displaced towards the location of antisaccade goals used in other trials in the same experimental block. Thus the motor response in one trial induced a spatial bias of a motor response in another trial. In this study we investigated whether sensory information, i.e. the location of a visual stimulus, might also have a spatial effect on a motor response. Such an effect might be attractive, as with the alternate antisaccade-goal bias, or repulsive. For this purpose we used blocks of trials with either antisaccades, prosaccades, or mixed trials in order to study the alternate-trial biases generated by antisaccade goals, antisaccade stimuli, and prosaccade goals. In contrast to the effects of alternate antisaccade goals described in prior studies, alternate antisaccade stimuli generated a significant repulsive bias of about 1.8°; furthermore, if stimulus and motor goal coincide, as with an alternate prosaccade, the repulsive effect of the stimulus prevails, causing a bias of about 0.9°. Taken together with prior results, these findings may reflect averaging of current and alternate trial activity in a salience map, with excitatory activity from the motor response and inhibitory activity from the sensory input.
Tara Rastgardani; Victor Lau; Jason J. S. Barton; Mathias Abegg
Trial history biases the spatial programming of antisaccades Journal Article
In: Experimental Brain Research, vol. 222, no. 3, pp. 175–183, 2012.
The historical context in which saccades are made influences their latency and error rates, but less is known about how context influences their spatial parameters. We recently described a novel spatial bias for antisaccades, in which the endpoints of these responses deviate towards alternative goal locations used in the same experimental block, and showed that expectancy (prior probability) is at least partly responsible for this 'alternate-goal bias'. In this report we asked whether trial history also plays a role. Subjects performed antisaccades to a stimulus randomly located on the horizontal meridian, on a 40° angle downwards from the horizontal meridian, or on a 40° upward angle, with all three locations equally probable on any given trial. We found that the endpoints of antisaccades were significantly displaced towards the goal location of not only the immediately preceding trial (n - 1) but also the penultimate (n - 2) trial. Furthermore, this bias was mainly present for antisaccades with a short latency of <250 ms and was rapidly corrected by secondary saccades. We conclude that the location of recent antisaccades biases the spatial programming of upcoming antisaccades, that this historical effect persists over many seconds, and that it influences mainly rapidly generated eye movements. Because corrective saccades eliminate the historical bias, we suggest that the bias arises in processes generating the response vector, rather than processes generating the perceptual estimate of goal location.
Eyal M. Reingold; Erik D. Reichle; Mackenzie G. Glaholt; Heather Sheridan
Direct lexical control of eye movements in reading: Evidence from a survival analysis of fixation durations Journal Article
In: Cognitive Psychology, vol. 65, no. 2, pp. 177–206, 2012.
Participants' eye movements were monitored in an experiment that manipulated the frequency of target words (high vs. low) as well as their availability for parafoveal processing during fixations on the pre-target word (valid vs. invalid preview). The influence of the word-frequency by preview validity manipulation on the distributions of first fixation duration was examined by using ex-Gaussian fitting as well as a novel survival analysis technique which provided precise estimates of the timing of the first discernible influence of word frequency on first fixation duration. Using this technique, we found a significant influence of word frequency on fixation duration in normal reading (valid preview) as early as 145 ms from the start of fixation. We also demonstrated an equally rapid non-lexical influence on first fixation duration as a function of initial landing position (location) on target words. The time-course of frequency effects, but not location effects, was strongly influenced by preview validity, demonstrating the crucial role of parafoveal processing in enabling direct lexical control of reading fixation times. Implications for models of eye-movement control are discussed.
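The survival-analysis logic can be made concrete: a survival curve gives the proportion of fixations still ongoing at each time t, and the divergence point is the earliest t at which the two conditions' curves separate. A simplified numpy sketch follows; the published technique estimates the divergence point with bootstrap confidence intervals, so the fixed criterion here is an illustrative assumption.

```python
import numpy as np

def survival_curve(durations, t_grid):
    """Proportion of fixations with duration longer than each time t."""
    d = np.asarray(durations, dtype=float)
    return np.array([(d > t).mean() for t in t_grid])

def divergence_point(dur_fast, dur_slow, t_grid, criterion=0.05):
    """Earliest t at which the slow condition's survival curve exceeds
    the fast condition's by at least `criterion`.  Simplification: the
    published procedure uses bootstrap confidence intervals rather
    than a fixed criterion.
    """
    diff = survival_curve(dur_slow, t_grid) - survival_curve(dur_fast, t_grid)
    hits = np.nonzero(diff >= criterion)[0]
    return t_grid[hits[0]] if hits.size else None
```

For example, if a low-frequency (slow) condition's fixation durations are uniformly shifted 50 ms later than a high-frequency (fast) condition's, the curves begin to separate shortly after the earliest fast durations end.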
Robert M. G. Reinhart; Richard P. Heitz; Braden A. Purcell; Pauline K. Weigand; Jeffrey D. Schall; Geoffrey F. Woodman
Homologous mechanisms of visuospatial working memory maintenance in macaque and human: Properties and sources Journal Article
In: Journal of Neuroscience, vol. 32, no. 22, pp. 7711–7722, 2012.
Although areas of frontal cortex are thought to be critical for maintaining information in visuospatial working memory, the event-related potential (ERP) index of maintenance is found over posterior cortex in humans. In the present study, we reconcile these seemingly contradictory findings. Here, we show that macaque monkeys and humans exhibit the same posterior ERP signature of working memory maintenance that predicts the precision of the memory-based behavioral responses. In addition, we show that the specific pattern of rhythmic oscillations in the alpha band, recently demonstrated to underlie the human visual working memory ERP component, is also present in monkeys. Next, we concurrently recorded intracranial local field potentials from two prefrontal and another frontal cortical area to determine their contribution to the surface potential indexing maintenance. The local fields in the two prefrontal areas, but not the cortex immediately posterior, exhibited amplitude modulations, timing, and relationships to behavior indicating that they contribute to the generation of the surface ERP component measured from the distal posterior electrodes. Rhythmic neural activity in the theta and gamma bands during maintenance provided converging support for the engagement of the same brain regions. These findings demonstrate that nonhuman primates have homologous electrophysiological signatures of visuospatial working memory to those of humans and that a distributed neural network, including frontal areas, underlies the posterior ERP index of visuospatial working memory maintenance.
M. L. Reinholdt-Dunne; Karin Mogg; V. Benson; B. P. Bradley; M. G. Hardin; Simon P. Liversedge; Daniel S. Pine; M. Ernst
Anxiety and selective attention to angry faces: An antisaccade study Journal Article
In: Journal of Cognitive Psychology, vol. 24, no. 1, pp. 54–65, 2012.
Cognitive models of anxiety propose that anxiety is associated with an attentional bias for threat, which increases vulnerability to emotional distress and is difficult to control. The study aim was to investigate relationships between the effects of threatening information, anxiety, and attention control on eye movements. High and low trait anxious individuals performed antisaccade and prosaccade tasks with angry, fearful, happy, and neutral faces. Results indicated that high-anxious participants showed a greater antisaccade cost for angry than neutral faces (i.e., relatively slower to look away from angry faces), compared with low-anxious individuals. This bias was not found for fearful or happy faces. The bias for angry faces was not related to individual differences in attention control assessed on self-report and behavioural measures. Findings support the view that anxiety is associated with difficulty in using cognitive control resources to inhibit attentional orienting to angry faces, and that attention control is multifaceted.
Eva Reinisch; Andrea Weber
Adapting to suprasegmental lexical stress errors in foreign-accented speech Journal Article
In: The Journal of the Acoustical Society of America, vol. 132, no. 2, pp. 1165–1176, 2012.
Can native listeners rapidly adapt to suprasegmental mispronunciations in foreign-accented speech? To address this question, an exposure-test paradigm was used to test whether Dutch listeners can improve their understanding of non-canonical lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard a Dutch story with only initially stressed words, whereas another group also heard 28 words with canonical second-syllable stress (e.g., EEKhorn, "squirrel" was replaced by koNIJN "rabbit"; capitals indicate stress). The 28 words, however, were non-canonically marked by the Hungarian speaker with high pitch and amplitude on the initial syllable, both of which are stress cues in Dutch. After exposure, listeners' eye movements were tracked to Dutch target-competitor pairs with segmental overlap but different stress patterns, while they listened to new words from the same Hungarian speaker (e.g., HERsens, herSTEL, "brain," "recovery"). Listeners who had previously heard non-canonically produced words distinguished target-competitor pairs better than listeners who had only been exposed to Hungarian accent with canonical forms of lexical stress. Even a short exposure thus allows listeners to tune into speaker-specific realizations of words' suprasegmental make-up, and use this information for word recognition.
Kathleen Pirog Revill; Daniel H. Spieler
The effect of lexical frequency on spoken word recognition in young and older listeners Journal Article
In: Psychology and Aging, vol. 27, no. 1, pp. 80–87, 2012.
When identifying spoken words, older listeners may have difficulty resolving lexical competition or may place greater weight on factors like lexical frequency. To obtain information about age differences in the time course of spoken word recognition, young and older adults' eye movements were monitored as they followed spoken instructions to click on objects displayed on a computer screen. Older listeners were more likely than younger listeners to fixate high-frequency displayed phonological competitors. However, degrading auditory quality for younger listeners did not reproduce this result. These data are most consistent with an increased role for lexical frequency with age.
Helen J. Richards; Valerie Benson; Julie A. Hadwin
The attentional processes underlying impaired inhibition of threat in anxiety: The remote distractor effect Journal Article
In: Cognition and Emotion, vol. 26, no. 5, pp. 934–942, 2012.
The current study explored the proposition that anxiety is associated with impaired inhibition of threat. Using a modified version of the remote distractor paradigm, we considered whether this impairment is related to attentional capture by threat, difficulties disengaging from threat presented within foveal vision, or difficulties orienting to task-relevant stimuli when threat is present in central, parafoveal and peripheral locations in the visual field. Participants were asked to direct their eyes towards and identify a target in the presence and absence of a distractor (an angry, happy or neutral face). Trait anxiety was associated with a delay in initiating eye movements to the target in the presence of central, parafoveal and peripheral threatening distractors. These findings suggest that elevated anxiety is linked to difficulties inhibiting task-irrelevant threat presented across a broad region of the visual field.
Gerulf Rieger; Ritch C. Savin-Williams
The eyes have it: Sex and sexual orientation differences in pupil dilation patterns Journal Article
In: PLoS ONE, vol. 7, no. 8, pp. e40256, 2012.
Recent research suggests profound sex and sexual orientation differences in sexual response. These results, however, are based on measures of genital arousal, which have potential limitations such as volunteer bias and differential measures for the sexes. The present study introduces a measure less affected by these limitations. We assessed the pupil dilation of 325 men and women of various sexual orientations to male and female erotic stimuli. Results supported hypotheses. In general, self-reported sexual orientation corresponded with pupil dilation to men and women. Among men, substantial dilation to both sexes was most common in bisexual-identified men. In contrast, among women, substantial dilation to both sexes was most common in heterosexual-identified women. Possible reasons for these differences are discussed. Because the measure of pupil dilation is less invasive than previous measures of sexual response, it allows for studying diverse age and cultural populations, usually not included in sexuality research.
Hector Rieiro; Susana Martinez-Conde; Andrew P. Danielson; Jose L. Pardo-Vazquez; Nishit Srivastava; Stephen L. Macknik
Optimizing the temporal dynamics of light to human perception Journal Article
In: Proceedings of the National Academy of Sciences, vol. 109, no. 48, pp. 19828–19833, 2012.
No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration on perceived contrast is unclear, due to a contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision, although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings.
Rebecca Rienhoff; Joseph Baker; Lennart Fischer; Bernd Strauss; Jörg Schorer
Field of vision influences sensory-motor control of skilled and less-skilled dart players Journal Article
In: Journal of Sports Science and Medicine, vol. 11, no. 3, pp. 542–550, 2012.
One characteristic of perceptual expertise in sport and other domains is known as 'the quiet eye', which assumes that fixated information is processed during gaze stability and that insufficient spatial information leads to a decrease in performance. The aims of this study were a) replicating inter- and intra-group variability and b) investigating the extent to which the quiet eye supports information pick-up across varying fields of vision (i.e., central versus peripheral), using a specific eye-tracking paradigm to compare different skill levels in a dart throwing task. Differences between skill levels were replicated at baseline, but no significant differences in throwing performance were revealed among the visual occlusion conditions. Findings are generally in line with the association between quiet eye duration and aiming performance, but raise questions regarding the relevance of central vision information pick-up for the quiet eye.
Simon Rigoulot; Marc D. Pell
Seeing emotion with your ears: Emotional prosody implicitly guides visual attention to faces Journal Article
In: PLoS ONE, vol. 7, no. 1, pp. e30740, 2012.
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
Evan F. Risko; Nicola C. Anderson; Sophie Lanthier; Alan Kingstone
Curious eyes: Individual differences in personality predict eye movement behavior in scene-viewing Journal Article
In: Cognition, vol. 122, no. 1, pp. 86–90, 2012.
Visual exploration is driven by two main factors - the stimuli in our environment, and our own individual interests and intentions. Research investigating these two aspects of attentional guidance has focused almost exclusively on factors common across individuals. The present study took a different tack, and examined the role played by individual differences in personality. Our findings reveal that trait curiosity is a robust and reliable predictor of an individual's eye movement behavior in scene-viewing. These findings demonstrate that who a person is relates to how they move their eyes.
Sarah Risse; Reinhold Kliegl
Evidence for delayed parafoveal-on-foveal effects from word n+2 in reading Journal Article
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 4, pp. 1026–1042, 2012.
During reading, information is acquired from word(s) beyond the word that is currently looked at. It is still an open question whether such parafoveal information can influence the current viewing of a word, and if so, whether such parafoveal-on-foveal effects are attributable to distributed processing or to mislocated fixations, which occur when the eyes are directed at a parafoveal word but land on another word instead. In two display-change experiments, we orthogonally manipulated the preview and target difficulty of word n+2 to investigate the role of mislocated fixations on the previous word n+1. When the eyes left word n, an easy or difficult word n+2 preview was replaced by an easy or difficult n+2 target word. In Experiment 1, n+2 processing difficulty was manipulated by means of word frequency (i.e., easy high-frequency vs. difficult low-frequency word n+2). In Experiment 2, we varied the visual familiarity of word n+2 (i.e., easy lower-case vs. difficult alternating-case writing). Fixations on the short word n+1, which were likely to be mislocated, were nevertheless not influenced by the difficulty of the adjacent word n+2, the hypothesized target of the mislocated fixation. Instead word n+1 was influenced by the preview difficulty of word n+2, representing a delayed parafoveal-on-foveal effect. The results challenge the mislocated-fixation hypothesis as an explanation of parafoveal-on-foveal effects and provide new insight into the complex spatial and temporal effect structure of processing inside the perceptual span during reading.
Kay L. Ritchie; Amelia R. Hunt; Arash Sahraie
Trans-saccadic priming in hemianopia: Sighted-field sensitivity is boosted by a blind-field prime Journal Article
In: Neuropsychologia, vol. 50, no. 5, pp. 997–1005, 2012.
We experience visual stability despite shifts of the visual array across the retina produced by eye movements. A process known as remapping is thought to keep track of the spatial locations of objects as they move on the retina. We explored remapping in damaged visual cortex by presenting a stimulus in the blind field of two patients with hemianopia. When they executed a saccadic eye movement that would bring the stimulated location into the sighted field, reported awareness of the stimulus increased, even though the stimulus was removed before the saccade began and so never actually fell in the sighted field. Moreover, when a location was primed by a blind-field stimulus and then brought into the sighted field by a saccade, detection sensitivity for near-threshold targets appearing at this location increased dramatically. The results demonstrate that brain areas supporting conscious vision are not necessary for remapping, and suggest visual stability is maintained for salient objects even when they are not consciously perceived.
Martin Rolfs; Marisa Carrasco
Rapid simultaneous enhancement of visual sensitivity and perceived contrast during saccade preparation Journal Article
In: Journal of Neuroscience, vol. 32, no. 40, pp. 13744–13752, 2012.
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ∼300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength.
Maria C. Romero; Ilse C. Van Dromme; Peter Janssen
Responses to two-dimensional shapes in the macaque anterior intraparietal area Journal Article
In: European Journal of Neuroscience, vol. 36, no. 3, pp. 2324–2334, 2012.
Neurons in the macaque dorsal visual stream respond to the visual presentation of objects in the context of a grasping task and to three-dimensional (3D) surfaces defined by binocular disparity, but little is known about the neural representation of two-dimensional (2D) shape in the dorsal stream. We recorded the activity of single neurons in the macaque anterior intraparietal area (AIP), which is known to be crucial for grasping, during the presentation of images of objects and silhouette, outline and line-drawing versions of these images (contour stimuli). The vast majority of AIP neurons responding selectively to 2D images were also selective for at least one of the contour stimuli with the same boundary shape, suggesting that the boundary is sufficient for the image selectivity of most AIP neurons. Furthermore, a subset of these neurons with foveal receptive fields generally preserved the shape preference across positions, whereas for more than half of the AIP population the center of the receptive field was at a parafoveal location with less tolerance to changes in stimulus position. AIP neurons frequently exhibited shape selectivity across different stimulus sizes. These results demonstrate that AIP neurons encode not only 3D but also 2D shape features.
Benjamin A. Parris; Sarah Bate; Scott D. Brown; Timothy L. Hodgson
Facilitating goal-oriented behaviour in the Stroop task: When executive control is influenced by automatic processing Journal Article
In: PLoS ONE, vol. 7, no. 10, pp. e46994, 2012.
A portion of Stroop interference is thought to arise from a failure to maintain goal-oriented behaviour (or goal neglect). The aim of the present study was to investigate whether goal-relevant primes could enhance goal maintenance and reduce the Stroop interference effect. Here it is shown that primes related to the goal of responding quickly in the Stroop task (e.g. fast, quick, hurry) substantially reduced Stroop interference by reducing reaction times to incongruent trials but increasing reaction times to congruent and neutral trials. No effects of the primes were observed on errors. The effects on incongruent, congruent and neutral trials are explained in terms of the influence of the primes on goal maintenance. The results show that goal priming can facilitate goal-oriented behaviour and indicate that automatic processing can modulate executive control.
Kevin B. Paterson; Victoria A. McGowan; Timothy R. Jordan
Eye movements reveal effects of visual content on eye guidance and lexical access during reading Journal Article
In: PLoS ONE, vol. 7, no. 8, pp. e41766, 2012.
Background: Normal reading requires eye guidance and activation of lexical representations so that words in text can be identified accurately. However, little is known about how the visual content of text supports eye guidance and lexical activation, and thereby enables normal reading to take place. Methods and Findings: To investigate this issue, we examined eye movement performance when reading sentences displayed as normal and when the spatial frequency content of text was filtered to contain just one of 5 types of visual content: very coarse, coarse, medium, fine, and very fine. The effect of each type of visual content specifically on lexical activation was assessed using a target word of either high or low lexical frequency embedded in each sentence. Results: No type of visual content produced normal eye movement performance but eye movement performance was closest to normal for medium and fine visual content. However, effects of lexical frequency emerged early in the eye movement record for coarse, medium, fine, and very fine visual content, and were observed in total reading times for target words for all types of visual content. Conclusion: These findings suggest that while the orchestration of multiple scales of visual content is required for normal eye-guidance during reading, a broad range of visual content can activate processes of word identification independently. Implications for understanding the role of visual content in reading are discussed.
Silke Paulmann; Debra Titone; Marc D. Pell
How emotional prosody guides your way: Evidence from eye movements Journal Article
In: Speech Communication, vol. 54, no. 1, pp. 92–107, 2012.
This study investigated cross-modal effects of emotional voice tone (prosody) on face processing during instructed visual search. Specifically, we evaluated whether emotional prosodic cues in speech have a rapid, mandatory influence on eye movements to an emotionally-related face, and whether these effects persist as semantic information unfolds. Participants viewed an array of six emotional faces while listening to instructions spoken in an emotionally congruent or incongruent prosody (e.g., "Click on the happy face" spoken in a happy or angry voice). The duration and frequency of eye fixations were analyzed when only prosodic cues were emotionally meaningful (pre-emotional label window: "Click on the/..."), and after emotional semantic information was available (post-emotional label window: ".../happy face"). In the pre-emotional label window, results showed that participants made immediate use of emotional prosody, as reflected in significantly longer and more frequent fixations to emotionally congruent versus incongruent faces. However, when explicit semantic information in the instructions became available (post-emotional label window), the influence of prosody on measures of eye gaze was relatively minimal. Our data show that emotional prosody has a rapid impact on gaze behavior during social information processing, but that prosodic meanings can be overridden by semantic cues when linguistic information is task relevant.
Brennan R. Payne; Elizabeth A. L. Stine-Morrow
Aging, parafoveal preview, and semantic integration in sentence processing: Testing the cognitive workload of wrap-up Journal Article
In: Psychology and Aging, vol. 27, no. 3, pp. 638–649, 2012.
The current study investigated the degree to which semantic-integration processes (“wrap-up”) during sentence understanding demand attentional resources by examining the effects of clause and sentence wrap-up on the parafoveal preview benefit (PPB) in younger and older adults. The PPB is defined as facilitation in processing word N + 1, based on information extracted while the eyes are fixated on word N, and is known to be reduced by processing difficulty at word N. Participants read passages in which word N occurred in a sentence-internal, clause-final, or sentence-final position, and a gaze-contingent boundary-change paradigm was used to manipulate the information available in parafoveal vision for word N + 1. Wrap-up effects were found on word N for both younger and older adults. Early-pass measures (first-fixation duration and single-fixation duration) of the PPB on word N + 1 were reduced by clause wrap-up and sentence wrap-up on word N, with similar effects for younger and older adults. However, for intermediate (gaze duration) and later-pass measures (regression-path duration and selective regression-path duration), sentence wrap-up (but not clause wrap-up) on word N differentially reduced the PPB of word N + 1 for older adults. These findings suggest that wrap-up is demanding and may be less efficient with advancing age, resulting in a greater cognitive processing load for older readers.
Revisiting Huey: On the importance of the upper part of words during reading Journal Article
In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1148–1153, 2012.
Recent research has shown that the upper part of words enjoys an advantage over the lower part of words in the recognition of isolated words. The goal of the present article was to examine how removing the upper/lower part of the words influences eye movement control during silent normal reading. The participants' eye movements were monitored when reading intact sentences and when reading sentences in which the upper or the lower portion of the text was deleted. Results showed a greater reading cost (longer fixations) when the upper part of the text was removed than when the lower part of the text was removed (i.e., it influenced when to move the eyes). However, there was little influence on the initial landing position on a target word (i.e., on the decision as to where to move the eyes). In addition, lexical-processing difficulty (as inferred from the magnitude of the word frequency effect on a target word) was affected by text degradation. The implications of these findings for models of visual-word recognition and reading are discussed.
Manuel Perea; Pablo Gomez
Subtle increases in interletter spacing facilitate the encoding of words during normal reading Journal Article
In: PLoS ONE, vol. 7, no. 10, pp. e47568, 2012.
BACKGROUND: Several recent studies have revealed that words presented with a small increase in interletter spacing are identified faster than words presented with the default interletter spacing (i.e., w a t e r faster than water). Modeling work has shown that this advantage occurs at an early encoding level. Given the implications of this finding for the ease of reading in the new digital era, here we examined whether the beneficial effect of small increases in interletter spacing can be generalized to a normal reading situation. METHODOLOGY: We conducted an experiment in which the participant's eyes were monitored when reading sentences varying in interletter spacing: i) sentences presented with the default (0.0) interletter spacing; ii) sentences presented with a +1.0 interletter spacing; and iii) sentences presented with a +1.5 interletter spacing. PRINCIPAL FINDINGS: Results showed shorter fixation durations as an inverse function of interletter spacing (i.e., fixation durations were briefest with +1.5 spacing and longest with the default spacing). CONCLUSIONS: Subtle increases in interletter spacing facilitate the encoding of the fixated word during normal reading. Thus, interletter spacing is a parameter that may affect the ease of reading, and it could be adjustable in future implementations of e-book readers.
Juan A. Pérez; Stefano Passini
Avoiding minorities: Social invisibility Journal Article
In: European Journal of Social Psychology, vol. 42, no. 7, pp. 864–874, 2012.
Three experiments examined how self-consciousness has an impact on the visual exploration of a social field. The main hypothesis was that merely a photograph of people can trigger a dynamic process of social visual interaction such that minority images are avoided when people are in a state of self-reflective consciousness. In all three experiments, pairs of pictures—one with characters of social minorities and one with characters of social majorities—were shown to the participants. By means of eye-tracking technology, the results of Experiment 1 (n = 20) confirmed the hypothesis that in the reflective consciousness condition, people look more at the majority than minority characters. The results of Experiment 2 (n = 89) confirmed the hypothesis that reflective consciousness also induces avoiding reciprocal visual interaction with minorities. Finally, by manipulating the visual interaction (direct vs. non-direct) with the photos of minority and majority characters, the results of Experiment 3 (n = 56) confirmed the hypothesis that direct visual interaction with minority characters is perceived as being longer and more aversive. The overall conclusion is that self-reflective consciousness leads people to avoid visual interaction with social minorities, consigning them to social invisibility.
Carolyn J. Perry; Mazyar Fallah
Color improves speed of processing but not perception in a motion illusion Journal Article
In: Frontiers in Psychology, vol. 3, pp. 92, 2012.
When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences neither improved direction discrimination nor reduced direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored.
Joseph L. Brooks; Sharon Gilaie-Dotan; Geraint Rees; Shlomo Bentin; Jon Driver
Preserved local but disrupted contextual figure-ground influences in an individual with abnormal function of intermediate visual areas Journal Article
In: Neuropsychologia, vol. 50, no. 7, pp. 1393–1407, 2012.
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG.
Susanne Brouwer; Holger Mitterer; Falk Huettig
Speech reductions change the dynamics of competition during spoken word recognition Journal Article
In: Language and Cognitive Processes, vol. 27, no. 4, pp. 539–571, 2012.
Three eye-tracking experiments investigated how phonological reductions (e.g., "puter" for "computer") modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., "computer"), a competitor similar to the canonical form (e.g., "companion"), one similar to the reduced form (e.g., "pupil"), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target's spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the "canonical form" competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system.
Susanne Brouwer; Holger Mitterer; Falk Huettig
Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations Journal Article
In: Quarterly Journal of Experimental Psychology, vol. 65, no. 11, pp. 2193–2220, 2012.
In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a "real" onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.
Beyond common and privileged: Gradient representations of common ground in real-time language use Journal Article
In: Language and Cognitive Processes, vol. 27, no. 1, pp. 62–89, 2012.
The present research tested the hypothesis that on-line language processing is guided by gradient representations of linguistic common ground that reflect details of how common ground was established, including the discourse context and partner feedback. This hypothesis was contrasted with a simpler hypothesis that interpretation processes are only sensitive to simple binary representations of whether a potential discourse referent is or is not common ground. In order to evaluate these hypotheses, participants engaged in a task-based conversation with an experimenter in which some of the participant's game-pieces were hidden from the experimenter. On critical trials, the participant revealed the identity of the hidden game-pieces. Critical utterances contained referring expressions temporarily ambiguous between a visually shared game-piece, and a hidden game-piece. Analysis of participant eye movements during interpretation of these utterances revealed that participants were more likely to consider the hidden game-piece a potential referent if the experimenter had initially asked about its identity; whether the experimenter provided clear feedback that s/he understood its identity modulated this effect somewhat. These results provide key evidence for the richness of common ground representations, and are discussed in terms of the implications for models of the underlying representations of common ground.
Pernille Bruhn; Claus Bundesen
Anticipation of visual form independent of knowing where the form will occur Journal Article
In: Attention, Perception, and Psychophysics, vol. 74, no. 5, pp. 930–941, 2012.
We investigated how selective preparation for specific forms is affected by concurrent preknowledge of location when upcoming visual stimuli are anticipated. In three experiments, participants performed a two-choice response time (RT) task in which they discriminated between standard upright and rotated alphanumeric characters while fixating a central fixation cross. In different conditions, we gave the participants preknowledge of only form, only location, both location and form, or neither location nor form. We found main effects of both preknowledge of form and preknowledge of location, with significantly lower RTs when preknowledge was present than when it was absent. Our main finding was that the two factors had additive effects on RTs. A strong interaction between the two factors, such that preknowledge of form had little or no effect without preknowledge of location, would have supported the hypothesis that form anticipation relies on depictive, perception-like activations in topographically organized parts of the visual cortex. The results provided no support for this hypothesis. On the other hand, by additive-factors logic (Sternberg, Acta Psychologica 30:276–315, 1969), the additivity of our effects suggested that preknowledge of form and location, respectively, affected two functionally independent, serial stages of processing. We suggest that the two stages were, first, direction of attention to the stimulus location and, subsequently, discrimination between upright and rotated stimuli. Presumably, preknowledge of location advanced the point in time at which attention was directed at the stimulus location, whereas preknowledge of form reduced the time subsequently taken for stimulus discrimination.
Aneta Brzezicka; Izabela Krejtz; Ulrich Hecker; Jochen Laubrock
Eye movement evidence for defocused attention in dysphoria: A perceptual span analysis Journal Article
In: International Journal of Psychophysiology, vol. 85, no. 1, pp. 129–133, 2012.
The defocused attention hypothesis (von Hecker and Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews and Thompson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during reading of sentences. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more pronouncedly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of attentional focus.
Julie N. Buchan; Kevin G. Munhall
The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information Journal Article
In: Seeing and Perceiving, vol. 25, no. 1, pp. 87–106, 2012.
Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.