All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024) are listed below. You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2012 |
Nathan Van der Stoep; Tanja C. W. Nijboer; Stefan Van der Stigchel Non-lateralized auditory input enhances averaged vectors in the oculomotor system Journal Article In: Experimental Brain Research, vol. 221, no. 4, pp. 377–384, 2012. @article{VanderStoep2012, The decision about which location should be the goal of the next eye movement is known to be determined by the interaction between auditory and visual input. This interaction can be explained by the vector theory that states that each element (either visual or auditory) in a scene evokes a vector in the oculomotor system. These vectors determine the direction in which the eye movement is initiated. Because auditory input is lateralized and localizable in most studies, it is currently unclear how non-lateralized auditory input interacts with the vectors evoked by visual input. In the current study, we investigated the influence of a non-lateralized auditory non-target on saccade accuracy (saccade angle deviation from the target) and latency in a single-target condition in Experiment 1 and a double-target condition in Experiment 2. The visual targets in Experiment 2 were positioned in such a way that saccades on average landed in between the two targets (i.e., a global effect). There was no effect of the auditory input on saccade accuracy in the single-target condition, but auditory input did influence saccade accuracy in the double-target condition. In both experiments, saccade latency increased when auditory input accompanied the visual target(s). Together, these findings show that non-lateralized auditory input enhances all vectors evoked by visual input. The results will be discussed in terms of their possible neural substrates. |
Amanda E. Lamsweerde; Melissa R. Beck Attention shifts or volatile representations: What causes binding deficits in visual working memory? Journal Article In: Visual Cognition, vol. 20, no. 7, pp. 771–792, 2012. @article{Lamsweerde2012, The current study tested two hypotheses of feature binding memory: The attention hypothesis, which suggests that attention is needed to maintain feature bindings in visual working memory (VWM), and the volatile representation hypothesis, which suggests that feature bindings in memory are volatile and easily overwritten, but do not require sustained attention. Experiment 1 tested the attention hypothesis by measuring shifts of overt attention during the study array of a change detection task; serial shifts of attention did not disrupt feature bindings. Experiments 2 and 3 encouraged encoding of more volatile (Experiment 2) or durable (Experiment 3) representations during the study array. Binding change detection performance was impaired in Experiment 2, but not in Experiment 3, suggesting that binding performance is impaired when encoding supports a less durable memory representation. Together, these results suggest that although feature bindings may be volatile and easily overwritten, attention is not required to maintain feature bindings in VWM. |
Hedderik Rijn; Jelle R. Dalenberg; Jelmer P. Borst; Simone A. Sprenger Pupil Dilation Co-Varies with Memory Strength of Individual Traces in a Delayed Response Paired-Associate Task Journal Article In: PLoS ONE, vol. 7, no. 12, pp. e51134, 2012. @article{Rijn2012, Studies on cognitive effort have shown that pupil dilation is a reliable indicator of memory load. However, it is conceivable that there are other sources of effort involved in memory that also affect pupil dilation. One of these is the ease with which an item can be retrieved from memory. Here, we present the results of an experiment in which we studied the way in which pupil dilation acts as an online marker for memory processing during the retrieval of paired associates while reducing confounds associated with motor responses. Paired associates were categorized into sets containing either 4 or 7 items. After learning the paired associates once, pupil dilation was measured during the presentation of the retrieval cue during four repetitions of each set. Memory strength was operationalized as the number of repetitions (frequency) and set-size, since having more items per set results in a lower average recency. Dilation decreased with increased memory strength, supporting the hypothesis that the amplitude of the evoked pupillary response correlates positively with retrieval effort. Thus, while many studies have shown that "memory load" influences pupil dilation, our results indicate that the task-evoked pupillary response is also sensitive to the experimentally manipulated memory strength of individual items. As these effects were observed well before the response had been given, this study also suggests that pupil dilation can be used to assess an item's memory strength without requiring an overt response. |
Wieske Zoest; Mieke Donk; Stefan Van der Stigchel Stimulus-salience and the time-course of saccade trajectory deviations Journal Article In: Journal of Vision, vol. 12, no. 8, pp. 1–16, 2012. @article{Zoest2012, The deviation of a saccade trajectory is a measure of the oculomotor competition evoked by a distractor. The aim of the present study was to investigate the impact of stimulus-salience on the time-course of saccade trajectory deviations to get a better insight into how stimulus-salience influences oculomotor competition over time. Two experiments were performed in which participants were required to make a vertical saccade to a target presented in an array of nontarget line elements and one additional distractor. The distractor varied in salience, where salience was defined by an orientation contrast relative to the surrounding nontargets. In Experiment 2, target-distractor similarity was additionally manipulated. In both Experiments 1 and 2, the results revealed that the eyes deviated towards the irrelevant distractor and did so more when the distractor was salient compared to when it was not salient. Critically, salience influenced performance only when people were fast to elicit an eye movement and had no effect when saccade latencies were long. Target-distractor similarity did not influence this pattern. These results show that the impact of salience in the visual system is transient. |
Melissa L.-H. Võ; Jeremy M. Wolfe When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, pp. 23–41, 2012. @article{Vo2012, One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search. |
Christian Vorstius; Ralph Radach; Alan R. Lang Effects of acute alcohol intoxication on automated processing: evidence from the double-step paradigm Journal Article In: Journal of Psychopharmacology, vol. 26, no. 2, pp. 262–272, 2012. @article{Vorstius2012, Reflexive and voluntary levels of processing have been studied extensively with respect to possible impairments due to alcohol intoxication. This study examined alcohol effects at the 'automated' level of processing essential to many complex visual processing tasks (e.g., reading, visual search) that involve ongoing modifications or reprogramming of well-practiced routines. Data from 30 participants (16 male) were collected in two counterbalanced sessions (alcohol vs. no-alcohol control; mean breath alcohol concentration = 68 mg/dL vs. 0 mg/dL). Eye movements were recorded during a double-step task where 75% of trials involved two target stimuli in rapid succession (inter-stimulus interval [ISI]=40, 70, or 100 ms) so that they could elicit two distinct saccades or eye movements (double steps). On 25% of trials a single target appeared. Results indicated that saccade latencies were longer under alcohol. In addition, the proportion of single-step responses and the mean saccade amplitude (length) of primary saccades decreased significantly with increasing ISI. The key novel finding, however, was that the reprogramming time needed to cancel the first saccade and adjust saccade amplitude was extended significantly by alcohol. The additional time made available by prolonged latencies due to alcohol was not utilized by the saccade programming system to decrease the number of two-step responses. These results represent the first demonstration of specific alcohol-induced programming deficits at the automated level of oculomotor processing. |
Chin-An Wang; Susan E. Boehnke; Brian J. White; Douglas P. Munoz Microstimulation of the monkey superior colliculus induces pupil dilation without evoking saccades Journal Article In: Journal of Neuroscience, vol. 32, no. 11, pp. 3629–3636, 2012. @article{Wang2012c, The orienting reflex is initiated by a salient stimulus and facilitates quick, appropriate action. It involves a rapid shift of the eyes, head, and attention and other physiological responses such as changes in heart rate and transient pupil dilation. The superior colliculus (SC) is a critical structure in the midbrain that selects incoming stimuli based on saliency and relevance to coordinate orienting behaviors, particularly gaze shifts, but its causal role in pupil dilation remains poorly understood in mammals. Here, we examined the role of the primate SC in the control of pupil dynamics. While requiring monkeys to keep their gaze fixed, we delivered weak electrical microstimulation to the SC, so that saccadic eye movements were not evoked. Pupil size increased transiently after microstimulation of the intermediate SC layers (SCi) and the size of evoked pupil dilation was larger on a dim versus bright background. In contrast, microstimulation of the superficial SC layers did not cause pupil dilation. Thus, the SCi is directly involved not only in shifts of gaze and attention, but also in pupil dilation as part of the orienting reflex, and the function of pupil dilation may be related to increasing visual sensitivity. The shared neural mechanisms suggest that pupil dilation may be associated with covert attention. |
Polina M. Vanyukov; Tessa Warren; Mark E. Wheeler; Erik D. Reichle The emergence of frequency effects in eye movements Journal Article In: Cognition, vol. 123, no. 1, pp. 185–189, 2012. @article{Vanyukov2012, A visual search experiment employed strings of Landolt Cs to examine how the gap size of and frequency of exposure to distractor strings affected eye movements. Increases in gap size were associated with shorter first-fixation durations, gaze durations, and total times, as well as fewer fixations. Importantly, both the number and duration of fixations decreased with repeated exposures. The findings provide evidence for the role of cognition in guiding eye-movements, and a potential explanation for word-frequency effects observed in reading. |
Preeti Verghese Active search for multiple targets is inefficient Journal Article In: Vision Research, vol. 74, pp. 61–71, 2012. @article{Verghese2012, This study examines saccade strategy in a novel task where observers actively search a display to find multiple targets in a limited time. Theory predicts that the relative merit of different saccade strategies depends on the prior probability of the target at a location: when the target prior is low and multiple-target trials are rare, making a saccade to the most likely target location is close to the optimal strategy, but when the target prior is high and multiple-target trials are frequent, selecting uncertain locations is more informative. The prior probability of the target was varied from 0.17 to 0.67 to determine whether observers adjusted their saccade strategies to maximize information. Observers actively searched a noisy display with six potential target locations. Each location had an independent probability of a target, so the number of targets in a trial ranged from 0 to 6. For all target priors ranging from low to high, a trial-by-trial analysis of saccade strategy indicated that observers made saccades to the most likely target location more often than the most uncertain location. Fixating likely locations is efficient only when multiple targets are rare, as in the case of a low target prior, or in the case of the more standard single-target search task. Yet it is the preferred saccade strategy in all our conditions, even when multiple targets are frequent. These findings indicate that humans are far from ideal searchers in multiple-target search. |
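The contrast this abstract draws between fixating the most likely versus the most uncertain location can be illustrated with a toy calculation (a sketch under simplifying assumptions, not the paper's model): for a location whose target probability is p, the expected information gained by inspecting it is the Bernoulli entropy H(p), which peaks at p = 0.5, while the "most likely" strategy simply picks the largest p. The probability values below are hypothetical.

```python
import math

def bernoulli_entropy(p):
    """Expected information (bits) gained by inspecting a location with target probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical target probabilities at six independent locations.
posteriors = [0.17, 0.35, 0.50, 0.62, 0.67, 0.90]

most_likely = max(range(6), key=lambda i: posteriors[i])
most_uncertain = max(range(6), key=lambda i: bernoulli_entropy(posteriors[i]))

print(most_likely)     # index of the highest-probability location
print(most_uncertain)  # index of the location whose probability is closest to 0.5
```

The two strategies pick different locations whenever the highest-probability location is far from p = 0.5, which is exactly the regime (frequent multiple targets) where the abstract says the uncertainty-seeking strategy is more informative.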
Dorine Vergilino-Perez; Alexandra Fayel; Christelle Lemoine; Patrice Senot; Judith Vergne; Karine Doré-Mazars Are there any left-right asymmetries in saccade parameters? Examination of latency, gain, and peak velocity Journal Article In: Investigative Ophthalmology & Visual Science, vol. 53, no. 7, pp. 3340–3348, 2012. @article{VergilinoPerez2012, PURPOSE: Hemispheric specialization in saccadic control is still under debate. Here we examine the latency, gain, and peak velocity of reactive and voluntary leftward and rightward saccades to assess the respective roles of eye and hand dominance. METHODS: Participants with contrasting hand and eye dominance were asked to make saccades toward a target displayed at 5°, 10°, or 15° left or right of the central fixation point. In separate sessions, reactive and voluntary saccades were elicited by Gap-200, Gap-0, Overlap-600, and Antisaccade procedures. RESULTS: Left-right asymmetries were not found in saccade latencies but appeared in saccade gain and peak velocity. Regardless of the dominant hand, saccades directed to the ipsilateral side relative to the dominant eye had larger amplitudes and faster peak velocities. CONCLUSIONS: Left-right asymmetries can be explained by naso-temporal differences for some subjects and by eye dominance for others. Further investigations are needed to examine saccadic parameters more systematically in relation to eye dominance. Indeed, any method that allows one to determine ocular dominance from objective measures based on saccade parameters should greatly benefit clinical applications, such as monovision surgery. |
Dorine Vergilino-Perez; Christelle Lemoine; Eric Siéroff; Anne Marie Ergis; Redha Bouhired; Emilie Rigault; Karine Doré-Mazars The role of saccade preparation in lateralized word recognition: Evidence for the attentional bias theory Journal Article In: Neuropsychologia, vol. 50, no. 12, pp. 2796–2804, 2012. @article{VergilinoPerez2012a, Words presented to the right visual field (RVF) are recognized more readily than those presented to the left visual field (LVF). Whereas the attentional bias theory proposes an explanation in terms of attentional imbalance between visual fields, the attentional advantage theory assumes that words presented to the RVF are processed automatically while LVF words need attention. In this study, we exploited coupling between attention and saccadic eye movements to orient spatial attention to one or the other visual field. The first experiment compared conditions wherein participants had to remain fixated centrally or had to make a saccade to the visual field in which subsequent verbal stimuli were displayed. The orienting of attention by saccade preparation improved performance in a lexical decision task in both the LVF and the RVF. In the second experiment, participants had to make a saccade either to the visual field where verbal stimuli were presented subsequently or to the opposite side. For RVF as well as for LVF presentation, saccade preparation toward the opposite side decreased performance compared to the same side condition. These results are better explained by the attentional bias theory, and are discussed in the light of a new attentional theory dissociating two major components of attention, namely preparation and selection. |
Bram-Ernst Verhoef; Rufin Vogels; Peter Janssen Inferotemporal cortex subserves three-dimensional structure categorization Journal Article In: Neuron, vol. 73, no. 1, pp. 171–182, 2012. @article{Verhoef2012, We perceive real-world objects as three-dimensional (3D), yet it is unknown which brain area underlies our ability to perceive objects in this way. The macaque inferotemporal (IT) cortex contains neurons that respond selectively to 3D structures defined by binocular disparity. To examine the causal role of IT in the categorization of 3D structures, we electrically stimulated clusters of IT neurons with a similar 3D-structure preference while monkeys performed a 3D-structure categorization task. Microstimulation of 3D-structure-selective IT clusters caused monkeys to choose the preferred structure of the 3D-structure-selective neurons considerably more often. Microstimulation in IT also accelerated the monkeys' choice for the preferred structure, while delaying choices corresponding to the nonpreferred structure of a given site. These findings reveal that 3D-structure-selective neurons in IT contribute to the categorization of 3D objects. How and where 3D shape perception arises from the activity of neurons remains an unanswered question. Verhoef et al. find that manipulating the activity of neurons in the temporal lobe can influence the performance of monkeys performing a 3D shape categorization task. |
Petra Vetter; Grace Edwards; Lars Muckli Transfer of predictive signals across saccades Journal Article In: Frontiers in Psychology, vol. 3, pp. 176, 2012. @article{Vetter2012, Predicting visual information facilitates efficient processing of visual signals. Higher visual areas can support the processing of incoming visual information by generating predictive models that are fed back to lower visual areas. Functional brain imaging has previously shown that predictions interact with visual input already at the level of the primary visual cortex (V1; Harrison et al., 2007; Alink et al., 2010). Given that fixation changes up to four times a second in natural viewing conditions, cortical predictions are effective in V1 only if they are fed back in time for the processing of the next stimulus and at the corresponding new retinotopic position. Here, we tested whether spatio-temporal predictions are updated before, during, or shortly after an inter-hemifield saccade is executed, and thus, whether the predictive signal is transferred swiftly across hemifields. Using an apparent motion illusion, we induced an internal motion model that is known to produce a spatio-temporal prediction signal along the apparent motion trace in V1 (Muckli et al., 2005; Alink et al., 2010). We presented participants with both visually predictable and unpredictable targets on the apparent motion trace. During the task, participants saccaded across the illusion whilst detecting the target. As found previously, predictable stimuli were detected more frequently than unpredictable stimuli. Furthermore, we found that the detection advantage of predictable targets is detectable as early as 50-100 ms after saccade offset. This result demonstrates the rapid nature of the transfer of a spatio-temporally precise predictive signal across hemifields, in a paradigm previously shown to modulate V1. |
Benjamin T. Vincent How do we use the past to predict the future in oculomotor search? Journal Article In: Vision Research, vol. 74, pp. 93–101, 2012. @article{Vincent2012, A variety of findings suggest that when conducting visual search, we can exploit cues that are statistically related to a target's location. But is this the result of heuristic mechanisms or an internal model that tracks the statistics of the environment? Here, connections are made between the two explanations, and four models are assessed to probe the mechanisms underlying prediction in search. Participants conducted a simple gaze-contingent search task with five conditions, each of which consists of different combinations of 1st and 2nd order statistics. People's exploration behaviour adapted to the statistical rules governing target behaviour. Behaviour was most consistent with a model that represents transitions from one location to another, and that makes the underlying assumption that the world is dynamic. This assumption that the world is changeable could not be overridden despite task instruction and nearly 1 h of exposure to unchanging statistics. This means that while people may be suboptimal in some experimental contexts, it may be because their internal mental model makes assumptions that are adaptive in a complex, changeable world. |
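A transition-tracking model that assumes a changeable world, as described in this abstract, can be sketched as a learner whose evidence for each location-to-location transition decays over time (a minimal illustration in the spirit of such models, not the paper's actual implementation; the `decay` value and all names are assumptions).

```python
def update(counts, prev_loc, new_loc, decay=0.9):
    """Decay all transition counts, then credit the observed transition."""
    n = len(counts)
    for i in range(n):
        for j in range(n):
            counts[i][j] *= decay      # forgetting: the world may have changed
    counts[prev_loc][new_loc] += 1.0   # fresh evidence for this transition
    return counts

def predict(counts, prev_loc, eps=1e-6):
    """Smoothed probability of each next location, given the previous one."""
    row = counts[prev_loc]
    total = sum(row) + eps * len(row)
    return [(c + eps) / total for c in row]

# Example: after repeatedly seeing the target move from location 0 to 1,
# the learner predicts location 1 next whenever it is currently at 0.
counts = [[0.0] * 3 for _ in range(3)]
for _ in range(20):
    update(counts, 0, 1)
print(predict(counts, 0))
```

Because old counts never stop decaying, the learner's predictions remain revisable even under long runs of unchanging statistics, matching the abstract's point that the changeable-world assumption could not be overridden by an hour of stable exposure.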
Björn N. S. Vlaskamp; Anna Schubö Eye movements during action preparation Journal Article In: Experimental Brain Research, vol. 216, no. 3, pp. 463–472, 2012. @article{Vlaskamp2012, Looking at actions of others activates representations of similar own actions, that is, the action resonates. This may facilitate or interfere with the actions that one intends to make. We asked whether people promote or block those effects by making eye movements to or away from the actions of others. We investigated gaze behavior with a cup-clinking task: An actor shown on a video grabbed a cup and moved it toward the participant who next grabbed his own cup in the 'same' or in a different, 'complementary', way. In the 'same' condition, participants mostly looked at the place where the actor held the cup. In the 'complementary' condition, gaze behavior was similar at the start of the actor's action. To our surprise, as the action reached completion, participants started to look at the cup's site that corresponded to the grabbing instruction for their own action. A second experiment showed that this effect grew with delay of the go-signal. This indicates that a reason for the effect may be to support memorizing the instructed action. The bottom line of the study is that passively viewed scenes (passive in the sense that nothing in the observed scene is manipulated by the viewer) are scanned to support preparation of actions that one intends to make. We discuss how this finding relates to action resonance and how it relates to links between representations of actions and objects. |
H. X. Wang; Jeremy Freeman; Elisha P. Merriam; Uri Hasson; David J. Heeger Temporal eye movement strategies during naturalistic viewing Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 16–16, 2012. @article{Wang2012d, The deployment of eye movements to complex spatiotemporal stimuli likely involves a variety of cognitive factors. However, eye movements to movies are surprisingly reliable both within and across observers. We exploited and manipulated that reliability to characterize observers' temporal viewing strategies while they viewed naturalistic movies. Introducing cuts and scrambling the temporal order of the resulting clips systematically changed eye movement reliability. We developed a computational model that exhibited this behavior and provided an excellent fit to the measured eye movement reliability. The model assumed that observers searched for, found, and tracked a point of interest and that this process reset when there was a cut. The model did not require that eye movements depend on temporal context in any other way, and it managed to describe eye movements consistently across different observers and two movie sequences. Thus, we found no evidence for the integration of information over long time scales (greater than a second). The results are consistent with the idea that observers employ a simple tracking strategy even while viewing complex, engaging naturalistic stimuli. |
Hsueh-Cheng Wang; Marc Pomplun The attraction of visual attention to texts in real-world scenes Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 26–26, 2012. @article{Wang2012a, When we look at real-world scenes, attention seems disproportionately attracted by texts that are embedded in these scenes, for instance, on signs or billboards. The present study was aimed at verifying the existence of this bias and investigating its underlying factors. For this purpose, data from a previous experiment were reanalyzed and four new experiments measuring eye movements during the viewing of real-world scenes were conducted. By pairing text objects with matching control objects and regions, the following main results were obtained: (a) Greater fixation probability and shorter minimum fixation distance of texts confirmed the higher attractiveness of texts; (b) the locations where texts are typically placed contribute partially to this effect; (c) specific visual features of texts, rather than typically salient features (e.g., color, orientation, and contrast), are the main attractors of attention; (d) the meaningfulness of texts does not add to their attentional capture; and (e) the attraction of attention depends to some extent on the observer's familiarity with the writing system and language of a given text. |
Jing Wang; Ruobing Xia; Mingsha Zhang; Yujun Pan Long term retention of saccadic adaptation is induced by a dark environmental context Journal Article In: Brain Research, vol. 1489, pp. 56–65, 2012. @article{Wang2012e, Under many circumstances, motor memory needs to be retained for a long period of time to enable accurate behavior. Since the first introduction of the saccadic adaptation paradigm in the 1960s, saccadic adaptation protocols have been widely used to study the mechanisms of motor learning and motor memory. However, previous studies reported that the effect of saccadic adaptation on the oculomotor system was rather short (minutes to hours) in human and non-human primates. Here we ask whether the fast decay of the effects of saccadic adaptation is due to the influence of environmental context. To test this hypothesis, we asked human subjects to perform a saccadic adaptation task in a very dark environment. Our data showed that saccade gain remained at the post-adaptation level 24-72 h after exposure to the saccadic adaptation task without significant recovery, and that the effect of saccadic adaptation on saccade gain could still be found 2 months later, much longer than previously reported. Our results indicate a vital role for environmental context in the retention of saccadic adaptation. |
Quan Wang; Jantina Bolhuis; Constantin A. Rothkopf; Thorsten Kolling; Monika Knopf; Jochen Triesch Infants in control: Rapid anticipation of action outcomes in a gaze-contingent paradigm Journal Article In: PLoS ONE, vol. 7, no. 2, pp. e30884, 2012. @article{Wang2012f, Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants. |
Shuo Wang; Masaki Fukuchi; Christof Koch; Naotsugu Tsuchiya Spatial attention is attracted in a sustained fashion toward singular points in the optic flow Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e41040, 2012. @article{Wang2012g, While a single approaching object is known to attract spatial attention, it is unknown how attention is directed when the background looms towards the observer as s/he moves forward in a quasi-stationary environment. In Experiment 1, we used a cued speeded discrimination task to quantify where and how spatial attention is directed towards the target superimposed onto a cloud of moving dots. We found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The effects were less pronounced when the motion was contractive. The more ecologically valid the motion features became (e.g., temporal expansion of each dot, spatial depth structure implied by distribution of the size of the dots), the stronger the attentional effects. Further, the attentional effects were sustained over 1000 ms. Experiment 2 quantified these attentional effects using a change detection paradigm by zooming into or out of photographs of natural scenes. Spatial attention was attracted in a sustained manner such that change detection was facilitated or delayed depending on the location of the FOE only when the motion was expansive. Our results suggest that focal attention is strongly attracted towards singular points that signal the direction of forward ego-motion. |
Zhiguo Wang; Raymond M. Klein Focal spatial attention can eliminate inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 3, pp. 462–469, 2012. @article{Wang2012, Inhibition of return (IOR) is an orienting phenomenon characterized by slower responses to spatially cued than to uncued targets. In Experiment 1, a physically small digit that required identification was presented immediately following a peripheral cue. The digit could appear in the cued peripheral box or in the central box, thus guaranteeing a saccadic response to the cue in one condition and maintenance of fixation in the other. An IOR effect was observed when a saccadic response to the cue was required, but IOR was not generated by the peripheral cue when fixation was maintained in order to process the central digit. In Experiment 2, IOR effects were observed when participants were instructed to ignore the digits, whether those digits were presented in the periphery or at fixation. These findings suggest that behaviorally manifested, cue-induced IOR effects can be eliminated by focal spatial attentional control settings. |
Zhiguo Wang; Jason Satel; Matthew D. Hilchey; Raymond M. Klein Averaging saccades are repelled by prior uninformative cues at both short and long intervals Journal Article In: Visual Cognition, vol. 20, no. 7, pp. 825–847, 2012. @article{Wang2012h, When two spatially proximal stimuli are presented simultaneously, a first saccade is often directed to an intermediate location between the stimuli (averaging saccade). In an earlier study, Watanabe (2001) showed that, at a long cue–target onset asynchrony (CTOA; 600 ms), uninformative cues not only slowed saccadic response times (SRTs) to targets presented at the cued location in single target trials (inhibition of return, IOR), but also biased averaging saccades away from the cue in double target trials. The present study replicated Watanabe's experimental task with a short CTOA (50 ms), as well as with mixed short (50 ms) and long (600 ms) CTOAs. In all conditions on double target trials, uninformative cues robustly biased averaging saccades away from cued locations. Although SRTs on single target trials were delayed at previously cued locations at both CTOAs when they were mixed, this delay was not observed in the blocked, short CTOA condition. We suggest that top-down factors, such as expectation and attentional control settings, may have asymmetric effects on the temporal and spatial dynamics of oculomotor processing. |
Zhiguo Wang; Jan Theeuwes Dissociable Spatial and Temporal Effects of Inhibition of Return Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e44290, 2012. @article{Wang2012b, Inhibition of return (IOR) refers to the relative suppression of processing at locations that have recently been attended. It is frequently explored using a spatial cueing paradigm and is characterized by slower responses to cued than to uncued locations. The current study investigates the impact of IOR on overt visual orienting involving saccadic eye movements. Using a spatial cueing paradigm, our experiments have demonstrated that at a cue-target onset asynchrony (CTOA) of 400 ms saccades to the vicinity of cued locations are not only delayed (temporal cost) but also biased away (spatial effect). Both of these effects are basically no longer present at a CTOA of 1200 ms. At a shorter 200 ms CTOA, the spatial effect becomes stronger while the temporal cost is replaced by a temporal benefit. These findings suggest that IOR has a spatial effect that is dissociable from its temporal effect. Simulations using a neural field model of the superior colliculus (SC) revealed that a theory relying on short-term depression (STD) of the input pathway can explain most, but not all, temporal and spatial effects of IOR. |
Paul A. Warren; Simon K. Rushton; Andrew J. Foulkes Does optic flow parsing depend on prior estimation of heading? Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 8–8, 2012. @article{Warren2012, We have recently suggested that neural flow parsing mechanisms act to subtract global optic flow consistent with observer movement to aid in detecting and assessing scene-relative object movement. Here, we examine whether flow parsing can occur independently from heading estimation. To address this question we used stimuli comprising two superimposed optic flow fields composed of limited lifetime dots (one planar and one radial). This stimulus gives rise to the so-called optic flow illusion (OFI) in which perceived heading is biased in the direction of the planar flow field. Observers were asked to report the perceived direction of motion of a probe object placed in the OFI stimulus. If flow parsing depends upon a prior estimate of heading then the perceived trajectory should reflect global subtraction of a field consistent with the heading experienced under the OFI. In Experiment 1 we tested this prediction directly, finding instead that the perceived trajectory was biased markedly in the direction opposite to that predicted under the OFI. In Experiment 2 we demonstrate that the results of Experiment 1 are consistent with a positively weighted vector sum of the effects seen when viewing the probe together with individual radial and planar flow fields. These results suggest that flow parsing is not necessarily dependent on prior estimation of heading direction. We discuss the implications of this finding for our understanding of the mechanisms of flow parsing. |
Annelie Tuinman; Holger Mitterer; Anne Cutler Resolving ambiguity in familiar and unfamiliar casual speech Journal Article In: Journal of Memory and Language, vol. 66, no. 4, pp. 530–544, 2012. @article{Tuinman2012, In British English, the phrase Canada aided can sound like Canada raided if the speaker links the two vowels at the word boundary with an intrusive /r/. There are subtle phonetic differences between an onset /r/ and an intrusive /r/, however. With cross-modal priming and eye-tracking, we examine how native British English listeners and non-native (Dutch) listeners deal with the lexical ambiguity arising from this language-specific connected speech process. Together the results indicate that the presence of /r/ initially activates competing words for both listener groups; however, the native listeners rapidly exploit the phonetic cues and achieve correct lexical selection. In contrast, the Dutch-native advanced L2 listeners of English failed to recover from the /r/-induced competition, and failed to match native performance in either task. The /r/-intrusion process, which adds a phoneme to speech input, thus causes greater difficulty for L2 listeners than connected-speech processes which alter or delete phonemes. |
Marco Turi; David C. Burr Spatiotopic perceptual maps in humans: Evidence from motion adaptation Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 279, no. 1740, pp. 3091–3097, 2012. @article{Turi2012, How our perceptual experience of the world remains stable and continuous despite the frequent repositioning eye movements remains very much a mystery. One possibility is that our brain actively constructs a spatiotopic representation of the world, which is anchored in external (or at least head-centred) coordinates. In this study, we show that the positional motion aftereffect (the change in apparent position after adaptation to motion) is spatially selective in external rather than retinal coordinates, whereas the classic motion aftereffect (the illusion of motion after prolonged inspection of a moving source) is selective in retinotopic coordinates. The results provide clear evidence for a spatiotopic map in humans: one which can be influenced by image motion. |
Yusuke Uchida; Daisuke Kudoh; Akira Murakami; Masaaki Honda; Shigeru Kitazawa Origins of superior dynamic visual acuity in baseball players: Superior eye movements or superior image processing Journal Article In: PLoS ONE, vol. 7, no. 2, pp. e31530, 2012. @article{Uchida2012, Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in athletes than in non-athletes, and the better DVA of athletes has been attributed to a better ability to track moving objects. In the present study, we hypothesized that the better DVA of athletes is partly derived from better perception of moving images on the retina through some kind of perceptual learning. To test this hypothesis, we quantitatively measured DVA in baseball players and non-athletes using moving Landolt rings in two conditions. In the first experiment, the participants were allowed to move their eyes (free-eye-movement conditions), whereas in the second they were required to fixate on a fixation target (fixation conditions). The athletes displayed significantly better DVA than the non-athletes in the free-eye-movement conditions. However, there was no significant difference between the groups in the fixation conditions. These results suggest that the better DVA of athletes is primarily due to an improved ability to track moving targets with their eyes, rather than to improved perception of moving images on the retina. |
Yoshiyuki Ueda; Asuka Komiya Cultural adaptation of visual attention: Calibration of the oculomotor control system in accordance with cultural scenes Journal Article In: PLoS ONE, vol. 7, no. 11, pp. e50282, 2012. @article{Ueda2012a, Previous studies have found that Westerners are more likely than East Asians to attend to central objects (i.e., analytic attention), whereas East Asians are more likely than Westerners to focus on background objects or context (i.e., holistic attention). Recently, it has been proposed that the physical environment of a given culture influences the cultural form of scene cognition, although the underlying mechanism is yet unclear. This study examined whether the physical environment influences oculomotor control. Participants saw culturally neutral stimuli (e.g., a dog in a park) as a baseline, followed by Japanese or United States scenes, and finally culturally neutral stimuli again. The results showed that participants primed with Japanese scenes were more likely to move their eyes within a broader area and they were less likely to fixate on central objects compared with the baseline, whereas there were no significant differences in the eye movements of participants primed with American scenes. These results suggest that culturally specific patterns in eye movements are partly caused by the physical environment. |
Yoshiyuki Ueda; Jun Saiki Characteristics of eye movements in 3-D object learning: Comparison between within-modal and cross-modal object recognition Journal Article In: Perception, vol. 41, no. 11, pp. 1289–1298, 2012. @article{Ueda2012, Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning - visual recognition) and cross-modal learning (eg visual learning - haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances. |
Ryan J. Vaden; Nathan L. Hutcheson; Lesley A. McCollum; Jonathan Kentros; Kristina M. Visscher Older adults, unlike younger adults, do not modulate alpha power to suppress irrelevant information Journal Article In: NeuroImage, vol. 63, no. 3, pp. 1127–1133, 2012. @article{Vaden2012, This study examines the neural mechanisms through which younger and older adults ignore irrelevant information, a process that is necessary to effectively encode new memories. Some age-related memory deficits have been linked to a diminished ability to dynamically gate sensory input, resulting in problems inhibiting the processing of distracting stimuli. Whereas oscillatory power in the alpha band (8–12 Hz) over visual cortical areas is thought to dynamically gate sensory input in younger adults, it is not known whether older adults use the same mechanism to gate out sensory input. Here we identified a task in which both older and younger adults could suppress the processing of irrelevant sensory stimuli, allowing us to use electroencephalography (EEG) to explore the neural activity associated with suppression of visual processing. As expected, we found that the younger adults' suppression of visual processing was correlated with robust modulation of alpha oscillatory power. However, older adults did not modulate alpha power to suppress processing of visual information. These results demonstrate that suppression of alpha power is not necessary to inhibit the processing of distracting stimuli in older adults, suggesting the existence of alternative strategies for suppressing irrelevant, potentially distracting information. |
Araceli Valle; Katherine S. Binder; Caitlin B. Walsh; Carolyn Nemier; Kathryn E. Bangs Eye movements, prosody, and word frequency among average- and high-skilled second-grade readers Journal Article In: School Psychology Review, vol. 42, no. October, pp. 171–190, 2012. @article{Valle2012, The present study explored how average- and high-skilled second-grade readers (as identified by their Woodcock-Johnson III Test of Academic Achievement Broad Reading scores) differed on behavioral measures of reading related to comprehension: eye movements during silent reading and prosody during oral reading. Results from silent reading implicate word processing efficiency: high-skilled readers had fewer fixations and intraword regressions, and shorter first fixation, gaze duration, and total word reading times. Their skipping and regression patterns during silent reading were representative of a more systematic approach to passage reading, suggesting that meta-cognitive or motivational factors may also differentiate the groups. Compared to high-skilled readers, average readers' oral reading was characterized by longer pauses, less differentiation across pause types, and more intrusions. Counter to prior research, aspects of prosody associated with expressivity favored average readers: they had a sharper pitch declination at the end of declarative sentences and used a wider range of pitch within sentences. High- and low-frequency target words yielded frequency effects during both silent and oral reading. Interactions with skill level on the oral reading task are discussed in terms of potential differences in strategic approaches to reading challenges. |
Karli K. Watson; Michael L. Platt Social signals in primate orbitofrontal cortex Journal Article In: Current Biology, vol. 22, no. 23, pp. 2268–2273, 2012. @article{Watson2012, Primate evolution produced an increased capacity to respond flexibly to varying social contexts as well as expansion of the prefrontal cortex [1, 2]. Despite this association, how prefrontal neurons respond to social information remains virtually unknown. People with damage to their orbitofrontal cortex (OFC) struggle to recognize facial expressions [3, 4], make poor social judgments [5, 6], and frequently make social faux pas [7, 8]. Here we test explicitly whether neurons in primate OFC signal social information and, if so, how such signals compare with responses to primary fluid rewards. We find that OFC neurons distinguish images that belong to socially defined categories, such as female perinea and faces, as well as the social dominance of those faces. These modulations signaled both how much monkeys valued these pictures and their interest in continuing to view them. Far more neurons signaled social category than signaled fluid value, despite the stronger impact of fluid reward on monkeys' choices. These findings indicate that OFC represents both the motivational value and attentional priority of other individuals, thus contributing to both the acquisition of information about others and subsequent social decisions. Our results betray a fundamental disconnect between preferences expressed through overt choice, which were primarily driven by the desire for more fluid, and preferential neuronal processing, which favored social computations. |
Matthew David Weaver; Dane Aronsen; Johan Lauwereyns A short-lived face alert during inhibition of return Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 510–520, 2012. @article{Weaver2012, In the present study, we explored the role of faces in oculomotor inhibition of return (IOR) using a tightly controlled spatial cuing paradigm. We measured saccadic response latency to targets following peripheral cues that were either faces or objects of lesser sociobiological salience. A recurring influence from cue content was observed across numerous methodological variations. Faces versus other object cues briefly reduced saccade latencies toward subsequently presented targets, independently of attentional allocation and IOR. The results suggest a short-lived priming effect or social facilitation effect from the mere presence of a face. In the present study, we further showed that saccadic responses were unaffected by face versus nonface objects in double-cue presentations. Our findings indicate that peripheral face cues do not influence attentional orienting processes involved in IOR any differently from other objects in a tightly controlled oculomotor IOR paradigm. |
Andrea Weber; Matthew W. Crocker On the nature of semantic constraints on lexical access Journal Article In: Journal of Psycholinguistic Research, vol. 41, no. 3, pp. 195–214, 2012. @article{Weber2012, We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, 'flower') when instructed to click on low-frequency targets (e.g., Bluse, 'blouse'). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents. |
Sarah J. White; Adrian Staub The distribution of fixation durations during reading: Effects of stimulus quality Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 3, pp. 603–617, 2012. @article{White2012, Participants' eye movements were recorded as they read single sentences presented normally, presented entirely in faint text, or presented normally except for a single faint word. Fixations were longer when the entire sentence was faint than when the sentence was presented normally. In addition, fixations were much longer on a single faint word embedded in normal text, compared to when the entire sentence was faint. The primary aim of the study was to examine the influence of stimulus quality on the distribution of fixation durations. Ex-Gaussian fitting revealed that stimulus quality affected the mean of the Normal component, but in contrast to results from single-word tasks (Plourde & Besner, 1997), stimulus quality did not affect the exponential component, regardless of whether one or all words were faint. The results also contrast with the finding (Staub, White, Drieghe, Hollway, & Rayner, 2010) that the word frequency effect on fixation durations is an effect on both of the critical distributional parameters. These findings are argued to have implications for the interpretation of the role of stimulus quality in word recognition, and for models of eye movement control in reading. |
Yu-Feng Huang; Feng-Yang Kuo How impulsivity affects consumer decision-making in e-commerce Journal Article In: Electronic Commerce Research and Applications, vol. 11, no. 6, pp. 582–590, 2012. @article{Huang2012, This research investigates whether a person's mood can influence impulsivity in online shopping decisions, and how involvement can regulate it. We adopt a process view of impulsivity, and recorded the detailed information search patterns of consumers using an eye-tracker methodology. The results show that incidental moods tend to increase process impulsivity, and this effect may not be restrained by involvement. We also demonstrate that the decision-making process can be separated into two stages - orientation and evaluation. We further find that differences in impulsivity are most evident in the evaluation stage. These results suggest the importance of mood-elicited impulsivity of purchases in e-commerce. |
Lynn Huestegge; Iring Koch Eye movements as a gatekeeper for memorization: Evidence for the persistence of attentional sets in visual memory search Journal Article In: Psychological Research, vol. 76, no. 3, pp. 270–279, 2012. @article{Huestegge2012, Attention is known to serve multiple goals, including the selection of information for further perceptual analysis (selection for perception) and for goal-directed behavior (selection for action). Here, we study the role of overt attention (i.e., eye movements) as a gatekeeper for memorization processes (selection for memorization). Subjects memorized complex multidimensional stimulus displays and subsequently indicated whether a specific (probe) item was present. In Experiment 1 we utilized an incidental learning setting where in the beginning only a subset of display stimuli was relevant, whereas in a transfer block all stimuli were possible probe items. In Experiment 2, we used an explicit learning setting within a between-group design. Response times and gaze patterns indicated that subjects learned to ignore irrelevant stimuli while forming memory representations. The findings suggest that complex feature binding processes in peripheral vision may serve to guide overt selective attention, which eventually contributes to filtering out irrelevant information even in highly complex environments. Gaze patterns suggested that attentional control settings persisted even when they were no longer required. |
Lynn Huestegge; Magali Kreutzfeldt Action effects in saccade control Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 2, pp. 198–203, 2012. @article{Huestegge2012a, According to the ideomotor principle, action preparation involves the activation of associations between actions and their effects. However, there is only sparse research on the role of action effects in saccade control. Here, participants responded to lateralized auditory stimuli with spatially compatible saccades toward peripheral targets (e.g., a rhombus in the left hemifield and a square in the right hemifield). Prior to the imperative auditory stimulus (e.g., a left tone), an irrelevant central visual stimulus was presented that was congruent (e.g., a rhombus), incongruent (e.g., a square), or unrelated (e.g., a circle) to the peripheral saccade target (i.e., the visual effect of the saccade). Saccade targets were present throughout a trial (Experiment 1) or appeared after saccade initiation (Experiment 2). Results showed shorter response times and fewer errors in congruent (vs. incongruent) conditions, suggesting that associations between oculomotor actions and their visual effects play an important role in saccade control. |
Lynn Huestegge; Ralph Radach Visual and memory search in complex environments: Determinants of eye movements and search performance Journal Article In: Ergonomics, vol. 55, no. 9, pp. 1009–1027, 2012. @article{Huestegge2012b, Previous research on visual and memory search revealed various top down and bottom up factors influencing performance. However, utilising abstract stimuli (e.g. geometrical shapes or letters) and focussing on individual factors has often limited the applicability of research findings. Two experiments were designed to analyse which attributes of a product facilitate search in an applied environment. Participants scanned displays containing juice packages while their eye movements were recorded. The familiarity, saliency, and position of search targets were systematically varied. Experiment 1 involved a visual search task, whereas Experiment 2 focussed on memory search. The results showed that bottom up (target saliency) and top down (target familiarity) factors strongly interacted. Overt visual attention was influenced by cultural habits, purposes, and current task demands. The results provide a solid database for assessing the impact and interplay of fundamental top down and bottom up determinants of search processes in applied fields of psychology. |
Stephanie Huette; Bodo Winter; Teenie Matlock; Michael J. Spivey Processing motion implied in language: Eye-movement differences during aspect comprehension Journal Article In: Cognitive Processing, vol. 13, pp. S193–S197, 2012. @article{Huette2012, Previous research on language comprehension has used the eyes as a window into processing. However, these methods are entirely reliant upon using visual or orthographic stimuli that map onto the linguistic stimuli being used. The potential danger of this method is that the pictures used may not perfectly match the internal aspects of language processing. Thus, a method was developed in which participants listened to stories while wearing a head-mounted eyetracker. Preliminary results demonstrate that this method is uniquely suited to measure responses to stimuli in the absence of visual stimulation. |
Clara J. Hungr; Amelia R. Hunt Physical self-similarity enhances the gaze-cueing effect Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 7, pp. 1250–1259, 2012. @article{Hungr2012, Important social information can be gathered from the direction of another person's gaze, such as their intentions and aspects of the environment that are relevant to those intentions. Previous work has examined the effect of gaze on attention through the gaze-cueing effect: an enhancement of performance in detecting targets that appear where another person is looking. The present study investigated whether the physical self-similarity of a face could increase its impact on attention. Self-similarity was manipulated by morphing participants' faces with those of strangers. The effect of gaze direction on target detection was strongest for faces morphed with the participant's face. The results support previous work suggesting that self-similar faces are processed differently from dissimilar faces. The data also demonstrate that a face's similarity to one's own face influences the degree to which that face guides our attention in the environment. |
Kohitij Kar; Bart Krekelberg Transcranial electrical stimulation over visual cortex evokes phosphenes with a retinal origin Journal Article In: Journal of Neurophysiology, vol. 108, no. 8, pp. 2173–2178, 2012. @article{Kar2012, Transcranial electrical stimulation (tES) is a promising therapeutic tool for a range of neurological diseases. Understanding how the small currents used in tES spread across the scalp and penetrate the brain will be important for the rational design of tES therapies. Alternating currents applied transcranially above visual cortex induce the perception of flashes of light (phosphenes). This makes the visual system a useful model to study tES. One hypothesis is that tES generates phosphenes by direct stimulation of the cortex underneath the transcranial electrode. Here, we provide evidence for the alternative hypothesis that phosphenes are generated in the retina by current spread from the occipital electrode. Building on the existing literature, we first confirm that phosphenes are induced at lower currents when electrodes are placed farther away from visual cortex and closer to the eye. Second, we explain the temporal frequency tuning of phosphenes based on the well-known response properties of primate retinal ganglion cells. Third, we show that there is no difference in the time it takes to evoke phosphenes in the retina or by stimulation above visual cortex. Together, these findings suggest that phosphenes induced by tES over visual cortex originate in the retina. From this, we infer that tES currents spread well beyond the area of stimulation and are unlikely to lead to focal neural activation. Novel stimulation protocols that optimize current distributions are needed to overcome these limitations of tES. |
Argyro Katsika; David Braze; Ashwini Deo; Maria Mercedes Piñango Complement Coercion: Distinguishing between type-shifting and pragmatic inferencing Journal Article In: The Mental Lexicon, vol. 7, no. 1, pp. 58–76, 2012. @article{Katsika2012, Although Complement Coercion has been systematically associated with computational cost, there remains a serious confound in the experimental evidence built up in previous studies. The confound arises from the fact that lexico-semantic differences within the set of verbs assumed to involve coercion have not been taken into consideration. From among the set of verbs that have been reported to exhibit complement coercion effects we identified two clear semantic classes — aspectual verbs and psychological verbs. We hypothesize that the semantic difference between the two should result in differing processing profiles. Aspectual predicates (begin) trigger coercion and processing cost while psychological predicates (enjoy) do not. Evidence from an eye-tracking experiment supports our hypothesis. Coercion costs are restricted to aspectual predicates while no such effects are found with psychological predicates. These findings have implications for how these two kinds of predicates might be lexically encoded as well as for whether the observed interpolation of eventive meaning can be attributed to type-shifting (e.g., McElree, Traxler, Pickering, Seely, & Jackendoff, 2001) or to pragmatic-inferential processes (e.g., De Almeida, 2004). |
Albrecht W. Inhoff; Bradley A. Seymour; Ralph Radach Use of colour for language processing during reading Journal Article In: Visual Cognition, vol. 20, no. 10, pp. 1254–1265, 2012. @article{Inhoff2012, The study examined whether literal correspondence is necessary for the use of visual features during word recognition and text comprehension. Eye movements were recorded during reading and used to change the colour of dialogue when it was fixated. In symbolically congruent colour conditions, dialogue of female and male characters was shown in orchid and blue, respectively. The reversed assignment was used in incongruent conditions, and no colouring was applied in a control condition. Analyses of oculomotor activity revealed Stroop-type congruency effects during dialogue reading, with shorter viewing durations in congruent than incongruent conditions. Colour influenced oculomotor measures that index the recognition and integration of words, indicating that it influenced multiple stages of language processing. |
Elias B. Issa; James J. DiCarlo Precedence of the eye region in neural processing of faces Journal Article In: Journal of Neuroscience, vol. 32, no. 47, pp. 16666–16682, 2012. @article{Issa2012, Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of "face-selective" cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face-selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full-face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features-consistent with parts-based models-grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy. |
L. A. Issen; David C. Knill Decoupling eye and hand movement control: Visual short-term memory influences reach planning more than saccade planning Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–13, 2012. @article{Issen2012, When reaching for objects, humans make saccades to fixate the object at or near the time the hand begins to move. In order to address whether the CNS relies on a common representation of target positions to plan both saccades and hand movements, we quantified the contributions of visual short-term memory (VSTM) to hand and eye movements executed during the same coordinated actions. Subjects performed a sequential movement task in which they picked up one of two objects on the right side of a virtual display (the "weapon"), moved it to the left side of the display (to a "reloading station") and then moved it back to the right side to hit the other object (the target). On some trials, the target was perturbed by 1° of visual angle while subjects moved the weapon to the reloading station. Although subjects did not notice the change, the original position of the target, encoded in VSTM, influenced the motor plans for both the hand and the eye back to the target. Memory influenced motor plans for distant targets more than for near targets, indicating that sensorimotor planning is sensitive to the reliability of available information; however, memory had a larger influence on hand movements than on eye movements. This suggests that spatial planning for coordinated saccades and hand movements are dissociated at the level of processing at which online visual information is integrated with information in short-term memory. |
Carrie N. Jackson; Paola E. Dussias; Adelina Hristova Using eye-tracking to study the on-line processing of case-marking information among intermediate L2 learners of German Journal Article In: International Review of Applied Linguistics in Language Teaching, vol. 50, no. 2, pp. 101–133, 2012. @article{Jackson2012, This study uses eye-tracking to examine the processing of case-marking information in ambiguous subject- and object-first wh-questions in German. The position of the lexical verb was also manipulated via verb tense to investigate whether verb location influences how intermediate L2 learners process L2 sentences. Results show that intermediate L2 German learners were sensitive to case-marking information, exhibiting longer processing times on subject-first than object-first sentences, regardless of verb location. German native speakers exhibited the opposite word order preference, with longer processing times on object-first than subject-first sentences, replicating previous findings. These results are discussed in light of current L2 processing research, highlighting how methodological constraints influence researchers' abilities to measure the on-line processing of morphosyntactic information among intermediate L2 learners. |
Stephanie Jainta; Wolfgang Jaschinski Individual differences in binocular coordination are uncovered by directly comparing monocular and binocular reading conditions Journal Article In: Investigative Ophthalmology & Visual Science, vol. 53, no. 9, pp. 5762–5769, 2012. @article{Jainta2012, PURPOSE: We systematically evaluated binocular coordination during a reading task by comparing binocular and monocular reading, and considering the potential effects of individual heterophoria and eye dominance. METHODS: A total of 13 participants (aged 19-29 years, refractive errors -0.5 to 0.125 diopters [D]) read single sentences in a haploscope while eye movements were measured with an EyeLink II eye tracker. RESULTS: When reading monocularly, saccade amplitudes increased by 0.04 degrees and first fixation durations became longer by approximately 10 ms. Furthermore, saccade disconjugacies increased, and compensatory vergence drifts during fixation turned into a divergent drift relative to the viewing distance. The vergence angle adjusted for the actual viewing distance became less convergent during monocular reading by 0.5 degrees. Moreover, in participants who were almost orthophoric, only the first fixation duration became longer (by 20 ms) when the reading conditions changed from binocular to monocular. For exophoric participants, all parameters of binocular coordination changed, and first fixation duration decreased by 20 ms. When reading monocularly, no differences between the dominant right eye and the nondominant left eye were found. CONCLUSIONS: Because of obvious differences in binocular coordination between monocular and binocular reading, some vergence adjustments are driven actively by fusional processes. Furthermore, higher demands on these binocular fusional processes can be uncovered only by a detailed evaluation of monocular reading conditions. |
Andrew F. Jarosz; Jennifer Wiley Why does working memory capacity predict RAPM performance? A possible role of distraction Journal Article In: Intelligence, vol. 40, no. 5, pp. 427–438, 2012. @article{Jarosz2012, Current theories concerning individual differences in working memory capacity (WMC) suggest that WMC reflects the ability to control the focus of attention and resist interference and distraction. The current set of experiments tested whether susceptibility to distraction is partially responsible for the established relationship between performance on complex span tasks and the Raven's Advanced Progressive Matrices (RAPM). This hypothesis was examined by manipulating the level of distraction among the incorrect responses contained in RAPM problems, by varying whether the response bank included the most commonly selected incorrect response. When entered hierarchically into a regression predicting a composite score on span tasks, items with highly distracting incorrect answers significantly improved the predictive power of a model predicting an individual's WMC, compared to the model containing only items with less distracting incorrect responses. Additional analyses were performed examining the types of errors that were made. A second experiment used eye-tracking to demonstrate that these effects seem to be rooted in differences in susceptibility to distraction as well as strategy differences between high and low WMC individuals. Results are discussed in terms of current theories about the role of attentional control in performance on general fluid intelligence tasks. |
Ned Jenkinson; John Stuart Brittain; Stephen L. Hicks; Christopher Kennard; Tipu Z. Aziz On the origin of oscillopsia during pedunculopontine stimulation Journal Article In: Stereotactic and Functional Neurosurgery, vol. 90, no. 2, pp. 124–129, 2012. @article{Jenkinson2012, We report a case of induced oscillopsia caused by deep brain stimulation (DBS) of the pedunculopontine nucleus (PPN). Recent reports have described involuntary oscillopsia during DBS of the PPN that patients have described as trembling vision. Here we substantiate this observation using infra-red eye tracking. It has been suggested that this phenomenon might be used as an indicator of accurate targeting of the PPN with DBS. Our observations suggest that this phenomenon may not be related to a constricted anatomical structure and therefore such practice may be unwise. Scrutiny has led us to believe that the oscillopsia in our patient is not caused by direct stimulation of the oculomotor nerve as suggested in a previous report, but by stimulation of fibres in the uncinate fasciculus of the cerebellum and the superior cerebellar peduncle, which in turn stimulate the saccadic pre-motor neurones in the brainstem. |
Trenton A. Jerde; Elisha P. Merriam; Adam C. Riggall; James H. Hedges; Clayton E. Curtis Prioritized maps of space in human frontoparietal cortex Journal Article In: Journal of Neuroscience, vol. 32, no. 48, pp. 17382–17390, 2012. @article{Jerde2012, Priority maps are theorized to be composed of large populations of neurons organized topographically into a map of gaze-centered space whose activity spatially tags salient and behaviorally relevant information. Here, we identified four priority map candidates along human posterior intraparietal sulcus (IPS0-IPS3) and two along the precentral sulcus (PCS) that contained reliable retinotopically organized maps of contralateral visual space. Persistent activity increased from posterior-to-anterior IPS areas and from inferior-to-superior PCS areas during the maintenance of a working memory representation, the maintenance of covert attention, and the maintenance of a saccade plan. Moreover, decoders trained to predict the locations on one task (e.g., working memory) cross-predicted the locations on other tasks (e.g., attention) in superior PCS and IPS2, suggesting that these patterns of maintenance activity may be interchangeable across the tasks. Such properties make these two areas in frontal and parietal cortex viable priority map candidates. |
Beth P. Johnson; Nicole J. Rinehart; Nicole Papadopoulos; Bruce Tonge; Lynette Millist; Owen B. White; Joanne Fielding A closer look at visually guided saccades in autism and Asperger's disorder Journal Article In: Frontiers in Integrative Neuroscience, vol. 6, pp. 99, 2012. @article{Johnson2012a, Motor impairments have been found to be a significant clinical feature associated with autism and Asperger's disorder (AD) in addition to core symptoms of communication and social cognition deficits. Motor deficits in high-functioning autism (HFA) and AD may differentiate these disorders, particularly with respect to the role of the cerebellum in motor functioning. Current neuroimaging and behavioral evidence suggests greater disruption of the cerebellum in HFA than AD. Investigations of ocular motor functioning have previously been used in clinical populations to assess the integrity of the cerebellar networks, through examination of saccade accuracy and the integrity of saccade dynamics. Previous investigations of visually guided saccades in HFA and AD have only assessed basic saccade metrics, such as latency, amplitude, and gain, as well as peak velocity. We used a simple visually guided saccade paradigm to further characterize the profile of visually guided saccade metrics and dynamics in HFA and AD. It was found that children with HFA, but not AD, were more inaccurate across both small (5°) and large (10°) target amplitudes, and final eye position was hypometric at 10°. These findings suggest greater functional disturbance of the cerebellum in HFA than AD, and suggest fundamental difficulties with visual error monitoring in HFA. |
Rebecca L. Johnson; Morgan E. Eisler The importance of the first and last letter in words during sentence reading Journal Article In: Acta Psychologica, vol. 141, no. 3, pp. 336–351, 2012. @article{Johnson2012, Previous research suggests that the first and last letters of words are more important than the interior letters during reading. A question that has yet to be fully studied is why this is so. The current study reports four experiments in which participants read sentences containing words with transposed letters occurring at the beginning of the word, near the middle of the word, or at the end of the word. Experiments 1 and 2 also included some sentences where the spaces were removed and replaced with hash marks (#) to equate all letters on their degree of lateral interference from adjacent letter positions. In Experiment 3, equating was done by adding an additional space between all of the letters, so that no letter position received lateral interference from any letter. In Experiment 4, readers read sentences from right to left so that word-initial letters were presented furthest into the parafovea. The results indicate that although the first letter of a word has a privileged role over interior letters regardless of the degree of lateral interference it receives or its location in the parafovea (suggesting that it is intrinsically related to how we process, store, or access lexical information), the last letter of a word is more important than interior letters only when it receives less lateral interference or when its parafoveal location was close to the fovea (suggesting that it is privileged only due to low-level visual factors). These findings have important implications for current theories and computational models regarding the roles of various letter positions in reading. |
John L. Jones; Michael P. Kaschak Global statistical learning in a visual search task Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 152–160, 2012. @article{Jones2012, Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials, but with a target location bias (i.e., the target appeared on one half of the display twice as often as the other). Participants quickly learned to make more first saccades to the side more likely to contain the target. With item-by-item search, first saccades to the target were at chance. With a distributed search strategy, first saccades to a target located on the biased side increased above chance. The results confirm that visual search behavior is sensitive to simple global statistics in the absence of trial-to-trial target location repetitions. |
Alex O. Holcombe; Wei-Ying Chen Exhausting attentional tracking resources with a single fast-moving object Journal Article In: Cognition, vol. 123, no. 2, pp. 218–228, 2012. @article{Holcombe2012, Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional pointers available to follow objects. Spatial interference theory proposes that when targets are near each other, their attentional spotlights mutually interfere. Resource theory asserts that a limited resource is divided among targets, and performance reflects the amount available per target. Utilising widely separated objects to avoid spatial interference, the present experiments validated the predictions of resource theory. The fastest target speed at which two targets could be tracked was much slower than the fastest speed at which one target could be tracked. This speed limit for tracking two targets was approximately that predicted if at high speeds, only a single target could be tracked. This result cannot be accommodated by the fixed-limit or interference theories. Evidently a fast target, if it moves fast enough, can exhaust attentional resources. |
Andrew Hollingworth Task specificity and the influence of memory on visual search: Comment on Võ and Wolfe (2012) Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 6, pp. 1596–1603, 2012. @article{Hollingworth2012, Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene facilitated search when the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search. |
Linus Holm; Stephen A. Engel; Paul Schrater Object learning improves feature extraction but does not improve feature selection Journal Article In: PLoS ONE, vol. 7, no. 12, pp. e51325, 2012. @article{Holm2012, A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: 1) select more informative image locations upon which to fixate their eyes, or 2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: The number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice. |
Tien Ho-Phuoc; N. Guyader; F. Landragin; Anne Guerin-Dugué When viewing natural scenes, do abnormal colors impact on spatial or temporal parameters of eye movements? Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–13, 2012. @article{HoPhuoc2012, Since Treisman's theory, it has been generally accepted that color is an elementary feature that guides eye movements when looking at natural scenes. Hence, most computational models of visual attention predict eye movements using color as an important visual feature. In this paper, using experimental data, we show that color does not affect where observers look when viewing natural scene images. Neither colors nor abnormal colors modify observers' fixation locations when compared to the same scenes in grayscale. In the same way, we did not find any significant difference between the scanpaths under grayscale, color, or abnormal color viewing conditions. However, we observed a decrease in fixation duration for color and abnormal color, and this was particularly true at the beginning of scene exploration. Finally, we found that abnormal color modifies saccade amplitude distribution. |
Youyang Hou; Taosheng Liu Neural correlates of object-based attentional selection in human cortex Journal Article In: Neuropsychologia, vol. 50, no. 12, pp. 2916–2925, 2012. @article{Hou2012, Humans can attend to different objects independent of their spatial locations. While selecting an object has been shown to modulate object processing in high-level visual areas in occipitotemporal cortex, where/how behavioral importance (i.e., priority) for objects is represented is unknown. Here we examined the patterns of distributed neural activity during an object-based selection task. We measured brain activity with functional magnetic resonance imaging (fMRI), while participants viewed two superimposed, dynamic objects (left- and right-pointing triangles) and were cued to attend to one of the triangle objects. Enhanced fMRI response was observed for the attention conditions compared to a neutral condition, but no significant difference was found in overall response amplitude between two attention conditions. By using multi-voxel pattern classification (MVPC), however, we were able to distinguish the neural patterns associated with attention to different objects in early visual cortex (V1 to hMT+) and lateral occipital complex (LOC). Furthermore, distinct multi-voxel patterns were also observed in frontal and parietal areas. Our results demonstrate that object-based attention has a wide-spread modulation effect along the visual hierarchy and suggest that object-specific priority information is represented by patterned neural activity in the dorsal frontoparietal network. |
Michael C. Hout; Stephen D. Goldinger Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 90–112, 2012. @article{Hout2012, When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) Under what conditions does such incidental learning occur? (2) What does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. |
I. S. Howard; James N. Ingram; David W. Franklin; Daniel M. Wolpert Gone in 0.6 seconds: The encoding of motor memories depends on recent sensorimotor states Journal Article In: Journal of Neuroscience, vol. 32, no. 37, pp. 12756–12768, 2012. @article{Howard2012, Real-world tasks often require movements that depend on a previous action or on changes in the state of the world. Here we investigate whether motor memories encode the current action in a manner that depends on previous sensorimotor states. Human subjects performed trials in which they made movements in a randomly selected clockwise or counterclockwise velocity-dependent curl force field. Movements during this adaptation phase were preceded by a contextual phase that determined which of the two fields would be experienced on any given trial. As expected from previous research, when static visual cues were presented in the contextual phase, strong interference (resulting in an inability to learn either field) was observed. In contrast, when the contextual phase involved subjects making a movement that was continuous with the adaptation-phase movement, a substantial reduction in interference was seen. As the time between the contextual and adaptation movement increased, so did the interference, reaching a level similar to that seen for static visual cues for delays >600 ms. This contextual effect generalized to purely visual motion, active movement without vision, passive movement, and isometric force generation. Our results show that sensorimotor states that differ in their recent temporal history can engage distinct representations in motor memory, but this effect decays progressively over time and is abolished by ∼600 ms. This suggests that motor memories are encoded not simply as a mapping from current state to motor command but are encoded in terms of the recent history of sensorimotor states. |
Janet H. Hsiao; Tina T. Liu The optimal viewing position in face recognition Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 1–9, 2012. @article{Hsiao2012a, In English word recognition, the best recognition performance is usually obtained when the initial fixation is directed to the left of the center (optimal viewing position, OVP). This effect has been argued to involve an interplay of left hemisphere lateralization for language processing and the perceptual experience of fixating at word beginnings most often. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: People prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center; nevertheless, the right hemisphere lateralization in face processing suggests that the OVP should be to the right of the center in order to project most of the face to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting greater influence from the perceptual experience than hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between presented visual field and location (center vs. periphery), suggesting differential influence from perceptual experience and hemispheric asymmetry in central and peripheral vision. |
Jhih-Yun Hsiao; Yi-Chuan Chen; Charles Spence; Su-Ling Yeh Assessing the effects of audiovisual semantic congruency on the perception of a bistable figure Journal Article In: Consciousness and Cognition, vol. 21, no. 2, pp. 775–787, 2012. @article{Hsiao2012, Bistable figures provide a fascinating window through which to explore human visual awareness. Here we demonstrate for the first time that the semantic context provided by a background auditory soundtrack (the voice of a young or old female) can modulate an observer's predominant percept while watching the bistable "my wife or my mother-in-law" figure (Experiment 1). The possibility of a response-bias account-that participants simply reported the percept that happened to be congruent with the soundtrack that they were listening to-was excluded in Experiment 2. We further demonstrate that this crossmodal semantic effect was additive with the manipulation of participants' visual fixation (Experiment 3), while it interacted with participants' voluntary attention (Experiment 4). These results indicate that audiovisual semantic congruency constrains the visual processing that gives rise to the conscious perception of bistable visual figures. Crossmodal semantic context therefore provides an important mechanism contributing to the emergence of visual awareness. |
Timothy R. Jordan; Victoria A. McGowan; Kevin B. Paterson Reading with a filtered fovea: The influence of visual quality at the point of fixation during reading Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1078–1084, 2012. @article{Jordan2012, Reading relies critically on processing text in foveal vision during brief fixational pauses, and high-quality visual input from foveal text is fundamental to theories of reading. However, the quality of visual input from foveal text that is actually functional for reading and the effects of this input on reading performance are unclear. To investigate these issues, a moving, gaze-contingent foveal filtering technique was developed to display areas of text within foveal vision that provided only coarse, medium, or fine scale visual input during each fixational pause during reading. Normal reading times were unaffected when foveal text up to three characters wide at the point of fixation provided any one visual input (coarse, medium, or fine). Wider areas of coarse visual input lengthened reading times, but reading still occurred, and normal reading times were completely unaffected when only medium or fine visual input extended across the entire fovea. Further analyses revealed that each visual input had no effect on the number of fixations made when normal text was read, that adjusting fixation durations helped preserve reading efficiency for different visual inputs, and that each visual input had virtually no effect on normal saccades. These findings indicate that, despite the resolving power of foveal vision and the emphasis placed on high-quality foveal visual input by theories of reading, normal reading functions with similar success using a range of restricted visual inputs from foveal text, even at the point of fixation. Some implications of these findings for theories of reading are discussed. |
Zoran Josipovic; Ilan Dinstein; Jochen Weber; David J. Heeger Influence of meditation on anti-correlated networks in the brain Journal Article In: Frontiers in Human Neuroscience, vol. 5, pp. 183, 2012. @article{Josipovic2012, Human experiences can be broadly divided into those that are external and related to interaction with the environment, and experiences that are internal and self-related. The cerebral cortex appears to be divided into two corresponding systems: an "extrinsic" system composed of brain areas that respond more to external stimuli and tasks and an "intrinsic" system composed of brain areas that respond less to external stimuli and tasks. These two broad brain systems seem to compete with each other, such that their activity levels over time is usually anti-correlated, even when subjects are "at rest" and not performing any task. This study used meditation as an experimental manipulation to test whether this competition (anti-correlation) can be modulated by cognitive strategy. Participants either fixated without meditation (fixation), or engaged in non-dual awareness (NDA) or focused attention (FA) meditations. We computed inter-area correlations ("functional connectivity") between pairs of brain regions within each system, and between the entire extrinsic and intrinsic systems. Anti-correlation between extrinsic vs. intrinsic systems was stronger during FA meditation and weaker during NDA meditation in comparison to fixation (without mediation). However, correlation between areas within each system did not change across conditions. These results suggest that the anti-correlation found between extrinsic and intrinsic systems is not an immutable property of brain organization and that practicing different forms of meditation can modulate this gross functional organization in profoundly different ways. |
Solène Kalénine; Daniel Mirman; Laurel J. Buxbaum A combination of thematic and similarity-based semantic processes confers resistance to deficit following left hemisphere stroke Journal Article In: Frontiers in Human Neuroscience, vol. 6, pp. 106, 2012. @article{Kalenine2012, Semantic knowledge may be organized in terms of similarity relations based on shared features and/or complementary relations based on co-occurrence in events. Thus, relationships between manipulable objects such as tools may be defined by their functional properties (what the objects are used for) or thematic properties (e.g., what the objects are used with or on). A recent study from our laboratory used eye-tracking to examine incidental activation of semantic relations in a word-picture matching task and found relatively early activation of thematic relations (e.g., broom-dustpan), later activation of general functional relations (e.g., broom-sponge), and an intermediate pattern for specific functional relations (e.g., broom-vacuum cleaner). Combined with other recent studies, these results suggest that there are distinct semantic systems for thematic and similarity-based knowledge and that the "specific function" condition drew on both systems. This predicts that left hemisphere stroke that damages either system (but not both) may spare specific function processing. The present experiment tested these hypotheses using the same experimental paradigm with participants with left hemisphere lesions (N = 17). The results revealed that, compared to neurologically intact controls (N = 12), stroke participants showed later activation of thematic and general function relations, but activation of specific function relations was spared and was significantly earlier for stroke participants than controls. 
Across the stroke participants, activation of thematic and general function relations was negatively correlated, further suggesting that damage tended to affect either one semantic system or the other. These results support the distinction between similarity-based and complementarity-based semantic relations and suggest that relations that draw on both systems are relatively more robust to damage. |
Solène Kalénine; Daniel Mirman; Erica L. Middleton; Laurel J. Buxbaum Temporal dynamics of activation of thematic and functional knowledge during conceptual processing of manipulable artifacts Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 5, pp. 1274–1295, 2012. @article{Kalenine2012a, The current research aimed at specifying the activation time course of different types of semantic information during object conceptual processing and the effect of context on this time course. We distinguished between thematic and functional knowledge and the specificity of functional similarity. Two experiments were conducted with healthy older adults using eye tracking in a word-to-picture matching task. The time course of gaze fixations was used to assess activation of distractor objects during the identification of manipulable artifact targets (e.g., broom). Distractors were (a) thematically related (e.g., dustpan), (b) related by a specific function (e.g., vacuum cleaner), or (c) related by a general function (e.g., sponge). Growth curve analyses were used to assess competition effects when target words were presented in isolation (Experiment 1) and embedded in contextual sentences of different generality levels (Experiment 2). In the absence of context, there was earlier and shorter lasting activation of thematically related as compared to functionally related objects. The time course difference was more pronounced for general functions than specific functions. When contexts were provided, functional similarities that were congruent with context generality level increased in salience with earlier activation of those objects. Context had little impact on thematic activation time course. These data demonstrate that processing a single manipulable artifact concept implicitly activates thematic and functional knowledge with different time courses and that context speeds activation of context-congruent functional similarity. |
Yuki Kamide Learning individual talkers' structural preferences Journal Article In: Cognition, vol. 124, no. 1, pp. 66–71, 2012. @article{Kamide2012, Listeners are often capable of adjusting to the variability contained in individual talkers' (speakers') speech. The vast majority of findings on talker adaptation are concerned with learning the contingency between phonological characteristics and talker identity. In contrast, the present study investigates representations at a more abstract level - the contingency between syntactic attachment style and talker identity. In a 'visual-world' experiment, participants were exposed to semi-realistic scenes depicting several objects (e.g., an adult man, a young girl, a motorbike, a carousel, and other objects) accompanied by a spoken sentence with a structurally ambiguous relative clause (e.g., 'The uncle of the girl who will ride the motorbike/carousel is from France.' In the context of the scene, 'motorbike' suggested the uncle as the agent of the riding, whereas 'carousel' suggested the girl as the agent). For half the experimental items, one version of the sentence was read by one talker, who always uttered sentences that resolved, pragmatically, to the high attachment (the uncle as the agent), and the other by another talker, who always uttered sentences resolving to the low attachment (the girl as the agent). For the other half of the experimental items, both versions were read by a third talker who produced both high and low attachments. It was found that, after exposure to these stimuli, and for new sentences not heard previously, participants learnt to anticipate the 'appropriate' attachment depending on talker identity (with no attachment preference for the talker who produced both attachment types).
The data suggest that listeners can learn the relationship between talker identity and abstract, structural, properties of their speech, and that syntactic attachment decisions in comprehension can reflect sensitivity to talker-specific syntactic style. |
Juan E. Kamienkowski; Matias J. Ison; Rodrigo Quian Quiroga; Mariano Sigman Fixation-related potentials in visual search: A combined EEG and eye tracking study Journal Article In: Journal of Vision, vol. 12, no. 7, pp. 1–20, 2012. @article{Kamienkowski2012, We report a study of concurrent eye movements and electroencephalographic (EEG) recordings while subjects freely explored a search array looking for hidden targets. We describe a sequence of fixation-event related potentials (fERPs) that unfolds during ~400 ms following each fixation. This sequence highly resembles the event-related responses in a replay experiment, in which subjects kept fixation while a sequence of images occurred around the fovea simulating the spatial and temporal patterns during the free viewing experiment. Similar responses were also observed in a second control experiment where the appearance of stimuli was controlled by the experimenters and presented at the center of the screen. We also observed a relatively early component (~150 ms) that distinguished between targets and distractors only in the free-viewing condition. We present a novel approach to match the critical properties of two conditions (targets/distractors), which can be readily adapted to other paradigms to investigate EEG components during free eye-movements. |
Juan E. Kamienkowski; Joaquin Navajas; Mariano Sigman Eye movements blink the attentional blink Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 3, pp. 555–560, 2012. @article{Kamienkowski2012a, When presented with a sequence of visual stimuli in rapid succession, participants often fail to detect a second salient target, a phenomenon referred to as the attentional blink (AB; Raymond, Shapiro, & Arnell, 1992; Shapiro, Raymond, & Arnell, 1997). On the basis of a vast corpus of experiments, several cognitive theories suggest that the blink results from a discrete structuring of attention, sampling information from temporal episodes during which several items can access the encoding process (Wyble, Bowman, & Nieuwenstein, 2009; Wyble, Potter, Bowman, & Nieuwenstein, 2011). The objective of this work is to explore the AB when multiple items are presented at the fovea during ocular movements. The authors reasoned that each fixation may cohesively form an episode and hence expected that the blink may vanish within a single fixation. In turn, they expected saccades to accentuate episodic borders and hence shorten the regime of interference when 2 targets are presented foveally in successive fixations. Evidence is provided in favor of this hypothesis, showing that the blink vanishes when both targets are presented in the core of a single fixation (far from the saccadic boundaries) and that it recovers more rapidly in successive fixations. These studies support current views that episodes should have an effect on the AB and provide evidence that eye movements play an important role in the formation of episodes. |
Marc R. Kamke; Michelle G. Hall; Harley F. Lye; Martin V. Sale; Laura R. Fenlon; Timothy J. Carroll; Stephan Riek; Jason B. Mattingley Visual attentional load influences plasticity in the human motor cortex Journal Article In: Journal of Neuroscience, vol. 32, no. 20, pp. 7001–7008, 2012. @article{Kamke2012, Neural plasticity plays a critical role in learning, memory, and recovery from injury to the nervous system. Although much is known about the physical and physiological determinants of plasticity, little is known about the influence of cognitive factors. In this study, we investigated whether selective attention plays a role in modifying changes in neural excitability reflecting long-term potentiation (LTP)-like plasticity. We induced LTP-like effects in the hand area of the human motor cortex using transcranial magnetic stimulation (TMS). During the induction of plasticity, participants engaged in a visual detection task with either low or high attentional demands. Changes in neural excitability were assessed by measuring motor-evoked potentials in a small hand muscle before and after the TMS procedures. In separate experiments plasticity was induced either by paired associative stimulation (PAS) or intermittent theta-burst stimulation (iTBS). Because these procedures induce different forms of LTP-like effects, they allowed us to investigate the generality of any attentional influence on plasticity. In both experiments reliable changes in motor cortex excitability were evident under low-load conditions, but this effect was eliminated under high-attentional load. In a third experiment we investigated whether the attentional task was associated with ongoing changes in the excitability of motor cortex, but found no difference in evoked potentials across the levels of attentional load. Our findings indicate that in addition to their role in modifying sensory processing, mechanisms of attention can also be a potent modulator of cortical plasticity. |
Marc R. Kamke; Harrison E. Vieth; David Cottrell; Jason B. Mattingley Parietal disruption alters audiovisual binding in the sound-induced flash illusion Journal Article In: NeuroImage, vol. 62, no. 3, pp. 1334–1341, 2012. @article{Kamke2012a, Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion. |
Janis Y. Y. Kan; Ullanda Niel; Michael C. Dorris Evidence for a link between the experiential allocation of saccade preparation and visuospatial attention Journal Article In: Journal of Neurophysiology, vol. 107, no. 5, pp. 1413–1420, 2012. @article{Kan2012, Whether a link exists between the two orienting processes of saccade preparation and visuospatial attention has typically been studied by using either sensory cues or predetermined rules that instruct subjects where to allocate these limited resources. In the real world, explicit instructions are not always available and presumably expectations shaped by previous experience play an important role in the allocation of these processes. Here we examined whether manipulating two experiential factors that clearly influence saccade preparation–the probability and timing of saccadic responses–also influences the allocation of visuospatial attention. Occasionally, a visual probe was presented whose spatial location and time of presentation varied relative to those of the saccade target. The proportion of erroneous saccades directed toward this probe indexed saccade preparation, and the proportion of correct discriminations of probe orientation indexed visuospatial attention. Overall, preparation and attention were significantly correlated to each other across these manipulations of saccade probability and timing. Saccade probability influenced both preparation and attention processes, whereas saccade timing influenced only preparation processes. Unexpectedly, discrimination ability was not improved in those trials in which the probe triggered an erroneous saccade despite particularly heightened levels of saccade preparation. To account for our results, we propose a conceptual dual-purpose threshold model based on neurophysiological considerations that link the processes of saccade preparation and visuospatial attention. 
The threshold acts both as the minimum activity level required for eliciting saccades and a maximum level for which neural activity can provide attentional benefits. |
Ryota Kanai; Neil G. Muggleton; Vincent Walsh Transcranial direct current stimulation of the frontal eye fields during pro- and antisaccade tasks Journal Article In: Frontiers in Psychiatry, vol. 3, pp. 45, 2012. @article{Kanai2012, Transcranial direct current stimulation (tDCS) has been successfully applied to cortical areas such as the motor cortex and visual cortex. In the present study, we examined whether tDCS can reach and selectively modulate the excitability of the frontal eye field (FEF). In order to assess potential effects of tDCS, we measured saccade latency, landing point, and its variability in a simple prosaccade task and in an antisaccade task. In the prosaccade task, we found that anodal tDCS shortened the latency of saccades to a contralateral visual cue. However, cathodal tDCS did not show a significant modulation of saccade latency. In the antisaccade task, on the other hand, we found that the latency for ipsilateral antisaccades was prolonged during the stimulation, whereas anodal stimulation did not modulate the latency of antisaccades. In addition, anodal tDCS reduced the erroneous saccades toward the contralateral visual cue. These results in the antisaccade task suggest that tDCS modulates the function of FEF to suppress reflexive saccades to the contralateral visual cue. Both in the prosaccade and antisaccade tasks, we did not find any effect of tDCS on saccade landing point or its variability. Our present study is the first to show effects of tDCS over FEF and opens the possibility of applying tDCS for studying the functions of FEF in oculomotor and attentional performance. |
Loes T. E. Kessels; Robert A. C. Ruiter Eye movement responses to health messages on cigarette packages Journal Article In: BMC Public Health, vol. 12, no. 1, pp. 1–9, 2012. @article{Kessels2012, BACKGROUND: While the majority of the health messages on cigarette packages contain threatening health information, previous studies indicate that risk information can trigger defensive reactions, especially when the information is self-relevant (i.e., smokers). Providing coping information, information that provides help for quitting smoking, might increase attention to health messages instead of triggering defensive reactions. METHODS: Eye-movement registration can detect attention preferences for different health education messages over a longer period of time during message exposure. In a randomized, experimental study with 23 smoking and 41 non-smoking student volunteers, eye-movements were recorded for sixteen self-created cigarette packages containing health texts that presented either high risk or coping information combined with a high threat or a low threat smoking-related photo. RESULTS: Results of the eye movement data showed that smokers tend to spend more time looking (i.e., more unique fixations and longer dwell time) at the coping information than at the high risk information irrespective of the content of the smoking-related photo. Non-smokers tend to spend more time looking at the high risk information than at the coping information when the information was presented in combination with a high threat smoking photo. When a low threat photo was presented, non-smokers paid more attention to the coping information than to the high risk information. Results for the smoking photos showed more attention allocation for low threat photos that were presented in combination with high risk information than for low threat photos in combination with coping information.
No attention differences were found for the high threat photos. CONCLUSIONS: Non-smokers demonstrated an attention preference for high risk information as opposed to coping information, but only when text information was presented in combination with a high threat photo. For smokers, however, our findings suggest more attention allocation for coping information than for health risk information. This preference for coping information is not reflected in current health messages to motivate smokers to quit smoking. Coping information should be more frequently implemented in health message design to increase attention for these messages and thus contribute to effective persuasion. |
Betty E. Kim; Darryl Seligman; Joseph W. Kable Preference reversals in decision making under risk are accompanied by changes in attention to different attributes Journal Article In: Frontiers in Neuroscience, vol. 6, pp. 109, 2012. @article{Kim2012, Recent work has shown that visual fixations reflect and influence trial-to-trial variability in people's preferences between goods. Here we extend this principle to attribute weights during decision making under risk. We measured eye movements while people chose between two risky gambles or bid on a single gamble. Consistent with previous work, we found that people exhibited systematic preference reversals between choices and bids. For two gambles matched in expected value, people systematically chose the higher probability option but provided a higher bid for the option that offered the greater amount to win. This effect was accompanied by a shift in fixations of the two attributes, with people fixating on probabilities more during choices and on amounts more during bids. Our results suggest that the construction of value during decision making under risk depends on task context partly because the task differentially directs attention at probabilities vs. amounts. Since recent work demonstrates that neural correlates of value vary with visual fixations, our results also suggest testable hypotheses regarding how task context modulates the neural computation of value to generate preference reversals. |
Young-Suk Kim; Ralph Radach; Christian Vorstius Eye movements and parafoveal processing during reading in Korean Journal Article In: Reading and Writing, vol. 25, no. 5, pp. 1053–1078, 2012. @article{Kim2012a, Parafoveal word processing was examined during Korean reading. Twenty-four native speakers of Korean read sentences in two conditions while their eye movements were being monitored. The boundary paradigm (Rayner, 1975) was used to create a mismatch between characters displayed before and after an eye movement contingent display change. In the first condition, the critical previews were correct case markers in terms of syntactic category (e.g., object marker for an object noun) but with a phonologically incorrect form (e.g., using [Korean character omitted] instead of [Korean character omitted] when the preceding noun ends with a consonant). In the second condition, incorrect case markers in terms of syntactic category were used, creating a semantic mismatch between preview and target. Results include a small but significant parafovea-on-fovea effect on the preceding fixation, combined with a large effect on late measures of target word reading when a syntactically incorrect preview was presented. These results indicate that skilled Korean readers are quite sensitive to high-level linguistic information available in the parafovea. |
Daniel L. Kimmel; Dagem Mammo; William T. Newsome Tracking the eye non-invasively: Simultaneous comparison of the scleral search coil and optical tracking techniques in the macaque monkey Journal Article In: Frontiers in Behavioral Neuroscience, vol. 6, pp. 49, 2012. @article{Kimmel2012, From human perception to primate neurophysiology, monitoring eye position is critical to the study of vision, attention, oculomotor control, and behavior. Two principal techniques for the precise measurement of eye position, the long-standing sclera-embedded search coil and more recent optical tracking techniques, are in use in various laboratories, but no published study compares the performance of the two methods simultaneously in the same primates. Here we compare two popular systems, a sclera-embedded search coil from CNC Engineering and the EyeLink 1000 optical system from SR Research, by recording simultaneously from the same eye in the macaque monkey while the animal performed a simple oculomotor task. We found broad agreement between the two systems, particularly in positional accuracy during fixation, measurement of saccade amplitude, detection of fixational saccades, and sensitivity to subtle changes in eye position from trial to trial. Nonetheless, certain discrepancies persist, particularly elevated saccade peak velocities, post-saccadic ringing, influence of luminance change on reported position, and greater sample-to-sample variation in the optical system. Our study shows that optical performance now rivals that of the search coil, rendering optical systems appropriate for many if not most applications. This finding is consequential, especially for animal subjects, because the optical systems do not require invasive surgery for implantation and repair of search coils around the eye. Our data also allow laboratories using the optical system in human subjects to assess the strengths and limitations of the technique for their own applications. |
Thomas Ellenbuerger; Arnaud Boutin; Stefan Panzer; Yannick Blandin; Lennart Fischer; Jörg Schorer; Charles H. Shea Observational training in visual half-fields and the coding of movement sequences Journal Article In: Human Movement Science, vol. 31, no. 6, pp. 1436–1448, 2012. @article{Ellenbuerger2012, An experiment was conducted to determine if gating information to different hemispheres during observational training facilitates the development of a movement representation. Participants were randomly assigned to one of three observation groups that differed in terms of the type of visual half-field presentation during observation (right visual half-field (RVF), left visual half-field (LVF), or in central position (CE)), and a control group (CG). On Day 1, visual stimuli indicating the pattern of movement to be produced were projected on the respective hemisphere. The task participants observed was a 1300 ms spatial-temporal pattern of elbow flexions and extensions. On Day 2, participants physically performed the task in an inter-manual transfer paradigm with a retention test and two contralateral transfer tests: a mirror transfer test which required the same pattern of muscle activation and limb joint angles and a non-mirror transfer test which reinstated the visual-spatial pattern of the sequence. The results demonstrated that participants of the CE, RVF and the LVF groups showed superior retention and transfer performance compared to participants of the CG. Participants of the CE- and LVF-groups demonstrated an advantage when the visual-spatial coordinates were reinstated compared to the motor coordinates, while participants of the RVF-group did not promote specific transfer patterns. These results will be discussed in the context of hemisphere specialization. |
Tim Donovan; Trevor J. Crawford; Damien Litchfield Negative priming for target selection with saccadic eye movements Journal Article In: Experimental Brain Research, vol. 222, no. 4, pp. 483–494, 2012. @article{Donovan2012, We conducted a series of experiments to determine whether negative priming is used in the process of target selection for a saccadic eye movement. The key questions addressed the circumstances in which the negative priming of an object takes place, and the distinction between spatial and object-based effects. Experiment 1 revealed that after fixating a target (cricket ball) amongst an array of semantically related distracters, saccadic eye movements in a subsequent display were faster to the target than to the distracters or new objects, irrespective of location. The main finding was that of the facilitation of a recent target, not the inhibition of a recent distracter or location. Experiment 2 replicated this finding by using silhouettes of objects for selection that is based on feature shape. Error rates were associated with distracters with high target-shape similarity; therefore, Experiment 3 presented silhouettes of animals using distracters with low target-shape similarity. The pattern of results was similar to that of Experiment 2, with clear evidence of target facilitation rather than the inhibition of distracters. Experiments 4 and 5 introduced a distracter together with the target into the probe display, to generate a level of competitive selection in the probe condition. In these circumstances, clear evidence of spatial inhibition at the location of the previous distracters emerged. We discuss the implications for our understanding of selective attention and consider why it is essential to supplement response time data with the analysis of eye movement behaviour in spatial negative priming paradigms. |
Michael Dorr; Eleonora Vig; Erhardt Barth Eye movement prediction and variability on natural video data sets Journal Article In: Visual Cognition, vol. 20, no. 4-5, pp. 495–514, 2012. @article{Dorr2012, We here study the predictability of eye movements when viewing high-resolution natural videos. We use three recently published gaze data sets that contain a wide range of footage, from scenes of almost still-life character to professionally made, fast-paced advertisements and movie trailers. Inter-subject gaze variability differs significantly between data sets, with variability being lowest for the professional movies. We then evaluate three state-of-the-art saliency models on these data sets. A model that is based on the invariants of the structure tensor and that combines very generic, sparse video representations with machine learning techniques outperforms the two reference models; performance is further improved for two data sets when the model is extended to a perceptually inspired colour space. Finally, a combined analysis of gaze variability and predictability shows that eye movements on the professionally made movies are the most coherent (due to implicit gaze-guidance strategies of the movie directors), yet the least predictable (presumably due to the frequent cuts). Our results highlight the need for standardized benchmarks to comparatively evaluate eye movement prediction algorithms. |
Trafton Drew; Corbin Cunningham; Jeremy M. Wolfe When and why might a computer-aided detection (CAD) system interfere with visual search? An eye-tracking study Journal Article In: Academic Radiology, vol. 19, no. 10, pp. 1260–1267, 2012. @article{Drew2012, Rationale and Objectives: Computer-aided detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: Forty-seven naive observers in two studies were asked to search for a target, embedded in 1/f^2.4 noise while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, whereas other observers completed the study without CAD. In experiment 1, the CAD system's primary function was to tell observers where the target might be. In experiment 2, CAD provided information about target identity. Results: In experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22) = 4.74, P < .001), but there was also a substantial cost. Targets that were not marked by the CAD system were missed more frequently than equivalent targets in no-CAD blocks of the experiment (t(22) = 7.02, P < .001). Experiment 2 showed no behavioral benefit from CAD, but also no significant cost on sensitivity to unmarked targets (t(22) = 0.6
Jean Duchesne; Vincent Bouvier; Julien Guillemé; Olivier A. Coubard Maxwellian eye fixation during natural scene perception Journal Article In: The Scientific World Journal, pp. 1–12, 2012. @article{Duchesne2012, When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell’s law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. |
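The Maxwellian claim in the Duchesne et al. entry can be made concrete with a short, self-contained sketch (illustrative only, not the authors' analysis code): a Maxwell-distributed amplitude arises as the norm of three independent zero-mean Gaussian components, and its empirical mean should match the closed-form mean 2a√(2/π). The scale parameter `a` below is an arbitrary assumed value, not one taken from the paper.

```python
import math
import random

def maxwell_pdf(x, a):
    """Maxwell probability density with scale a: sqrt(2/pi) * x^2 * exp(-x^2/(2a^2)) / a^3."""
    return math.sqrt(2 / math.pi) * x**2 * math.exp(-x**2 / (2 * a**2)) / a**3

def sample_maxwell(a, n, rng):
    """Draw Maxwell variates as the Euclidean norm of three iid N(0, a^2) components."""
    return [math.sqrt(sum(rng.gauss(0, a) ** 2 for _ in range(3))) for _ in range(n)]

rng = random.Random(0)
a = 0.25  # hypothetical scale (e.g., degrees of visual angle); an assumption for illustration
samples = sample_maxwell(a, 20000, rng)

# Empirical mean amplitude vs. the Maxwell closed-form mean 2a * sqrt(2/pi)
mean_emp = sum(samples) / len(samples)
mean_theory = 2 * a * math.sqrt(2 / math.pi)
print(mean_emp, mean_theory)
```

A fit to real fixation data would compare the empirical amplitude histogram against `maxwell_pdf` with `a` estimated from the data.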
Sarah L. Eagleman; Valentin Dragoi Image sequence reactivation in awake V4 networks Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 47, pp. 19450–19455, 2012. @article{Eagleman2012, In the absence of sensory input, neuronal networks are far from being silent. Whether spontaneous changes in ongoing activity reflect previous sensory experience or stochastic fluctuations in brain activity is not well understood. Here we describe reactivation of stimulus-evoked activity in awake visual cortical networks. We found that continuous exposure to randomly flashed image sequences induces reactivation in macaque V4 cortical networks in the absence of visual stimulation. This reactivation of previously evoked activity is stimulus-specific, occurs only in the same temporal order as the original response, and strengthens with increased stimulus exposures. Importantly, cells exhibiting significant reactivation carry more information about the stimulus than cells that do not reactivate. These results demonstrate a surprising degree of experience-dependent plasticity in visual cortical networks as a result of repeated exposure to unattended information. We suggest that awake reactivation in visual cortex may underlie perceptual learning by passive stimulus exposure. |
Kurt Debono; Alexander C. Schütz; Karl R. Gegenfurtner Illusory bending of a pursuit target Journal Article In: Vision Research, vol. 57, pp. 51–60, 2012. @article{Debono2012, To pursue a small target moving in front of a drifting background, motion vectors from the target need to be integrated and segmented from those belonging to the background. Smooth pursuit eye movements typically integrate target and background directions initially and after some time shift towards the veridical target direction. The perceived target direction on the other hand is generally stable over time: the target is perceived to move in the same direction as long as the motion information maintains the same properties over time. If illusory target motion is observed, this tends to be shifted away from the background. Here we investigated how initial motion integration and segmentation of such stimuli are modulated by direction cues. We presented a small pursuit target moving along a straight path, in front of a background moving in a different direction. Without a direction cue, initial pursuit was biased towards the background direction before shifting towards the veridical target direction. The target's perceived direction on the other hand was near veridical. A cue in the background direction increased initial pursuit integration but also caused perception to behave in a similar way: the target initially had an illusory motion component in the background direction and after about 200 ms it was perceived to curve towards its veridical direction. This illusion shows that during the initial process of segmenting the direction of a pursuit target from irrelevant background motion, both pursuit and perception can be erroneously influenced by a direction cue and integrate the cued background motion. Both modalities corrected this initial integration error as more information about the target became available. |
Joost C. Dessing; Patrick A. Byrne; Armin Abadeh; J. Douglas Crawford Hand-related rather than goal-related source of gaze-dependent errors in memory-guided reaching Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 1–8, 2012. @article{Dessing2012, Mechanisms for visuospatial cognition are often inferred directly from errors in behavioral reports of remembered target direction. For example, gaze-centered target representations for reach were first inferred from reach overshoots of target location relative to gaze. Here, we report evidence for the hypothesis that these gaze-dependent reach errors stem predominantly from misestimates of hand rather than target position, as was assumed in all previous studies. Subjects showed typical gaze-dependent overshoots in complete darkness, but these errors were entirely suppressed by continuous visual feedback of the finger. This manipulation could not affect target representations, so the suppressed gaze-dependent errors must have come from misestimates of hand position, likely arising in a gaze-dependent transformation of hand position signals into visual coordinates. This finding has broad implications for any task involving localization of visual targets relative to unseen limbs, in both healthy individuals and patient populations, and shows that response-related transformations cannot be ignored when deducing the sources of gaze-related errors. |
Joost C. Dessing; Frédéric P. Rey; Peter J. Beek Gaze fixation improves the stability of expert juggling Journal Article In: Experimental Brain Research, vol. 216, no. 4, pp. 635–644, 2012. @article{Dessing2012a, Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (so-called gaze-through). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers were included (n = 5) for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled significantly less variably when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain. |
Christel Devue; Artem V. Belopolsky; Jan Theeuwes Oculomotor guidance and capture by irrelevant faces Journal Article In: PLoS ONE, vol. 7, no. 4, pp. e34598, 2012. @article{Devue2012, Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning. |
Leandro Luigi Di Stasi; Rebekka Renner; Andrés Catena; José J. Cañas; Boris M. Velichkovsky; Sebastian Pannasch Towards a driver fatigue test based on the saccadic main sequence: A partial validation by subjective report data Journal Article In: Transportation Research Part C: Emerging Technologies, vol. 21, no. 1, pp. 122–133, 2012. @article{DiStasi2012, Developing a valid measurement of mental fatigue remains a big challenge and would be beneficial for various application areas, such as the improvement of road traffic safety. In the present study we examined influences of mental fatigue on the dynamics of saccadic eye movements. Based on previous findings, we propose that among amplitude and duration of saccades, the peak velocity of saccadic eye movements is particularly sensitive to changes in mental fatigue. Ten participants completed a fixation task before and after 2 h of driving in a virtual simulation environment as well as after a rest break of fifteen minutes. Driving and rest break were assumed to directly influence the level of mental fatigue and were evaluated using subjective ratings and eye movement indices. According to the subjective ratings, mental fatigue was highest after driving but decreased after the rest break. The peak velocity of saccadic eye movements decreased after driving while the duration of saccades increased, but no effects of the rest break were observed in the saccade parameters. We conclude that saccadic eye movement parameters, particularly the peak velocity, are sensitive indicators for mental fatigue. According to these findings, the peak velocity analysis represents a valid on-line measure for the detection of mental fatigue, providing the basis for the development of new vigilance screening tools to prevent accidents in several application domains. |
Adele Diederich; Annette Schomburg; Hans Colonius Saccadic reaction times to audiovisual stimuli show effects of oscillatory phase reset Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e44910, 2012. @article{Diederich2012, Initiating an eye movement towards a suddenly appearing visual target is faster when an accessory auditory stimulus occurs in close spatiotemporal vicinity. Such facilitation of saccadic reaction time (SRT) is well-documented, but the exact neural mechanisms underlying the crossmodal effect remain to be elucidated. From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing. Specifically, it is assumed that the phase of an ongoing neural oscillation is shifted due to the occurrence of a sensory stimulus so that, across trials, phase values become highly consistent (phase reset). If one can identify the phase an oscillation is reset to, it is possible to predict when temporal windows of high and low excitability will occur. However, in behavioral experiments the pre-stimulus phase will be different on successive repetitions of the experimental trial, and average performance over many trials will show no signs of the modulation. Here we circumvent this problem by repeatedly presenting an auditory accessory stimulus followed by a visual target stimulus with a temporal delay varied in steps of 2 ms. Performing a discrete time series analysis on SRT as a function of the delay, we provide statistical evidence for the existence of distinct peak spectral components in the power spectrum. These frequencies, although varying across participants, fall within the beta and gamma range (20 to 40 Hz) of neural oscillatory activity observed in neurophysiological studies of multisensory integration. Some evidence for high-theta/alpha activity was found as well. 
Our results are consistent with the phase reset hypothesis and demonstrate that it is amenable to testing by purely psychophysical methods. Thus, any theory of multisensory processes that connects specific brain states with patterns of saccadic responses should be able to account for traces of oscillatory activity in observable behavior. |
Michael D. Dodd; Amanda Balzer; Carly M. Jacobs; Michael W. Gruszczynski; Kevin B. Smith; John R. Hibbing The political left rolls with the good and the political right confronts the bad: Connecting physiology and cognition to preferences Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 367, pp. 640–649, 2012. @article{Dodd2012, We report evidence that individual-level variation in people's physiological and attentional responses to aversive and appetitive stimuli is correlated with broad political orientations. Specifically, we find that greater orientation to aversive stimuli tends to be associated with right-of-centre and greater orientation to appetitive (pleasing) stimuli with left-of-centre political inclinations. These findings are consistent with recent evidence that political views are connected to physiological predispositions but are unique in incorporating findings on variation in directed attention that make it possible to understand additional aspects of the link between the physiological and the political. |
Isabel Dombrowe; Mieke Donk; Hayley Wright; Christian N. L. Olivers; Glyn W. Humphreys The contribution of stimulus-driven and goal-driven mechanisms to feature-based selection in patients with spatial attention deficits Journal Article In: Cognitive Neuropsychology, vol. 29, no. 3, pp. 249–274, 2012. @article{Dombrowe2012, When people search a display for a target defined by a unique feature, fast saccades are predominantly stimulus-driven whereas slower saccades are primarily goal-driven. Here we use this dissociative pattern to assess whether feature-based selection in patients with lateralized spatial attention deficits is impaired in stimulus-driven processing, goal-driven processing, or both. A group of patients suffering from extinction or neglect after parietal damage, and a group of healthy, age-matched controls, were instructed to make a saccade to a uniquely oriented target line which was presented simultaneously with a differently oriented distractor line. We systematically varied the salience of the target and distractor by changing the orientation of background elements, and used a time-based model to extract stimulus-driven (salience) and goal-driven (target set) components of selection. The results show that the patients exhibited reduced stimulus-driven processing only in the contralesional hemifield, while goal-driven processing was reduced across both hemifields. |
Nick Donnelly; Katherine Cornes; Tamaryn Menneer An examination of the processing capacity of features in the Thatcher illusion Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 7, pp. 1475–1487, 2012. @article{Donnelly2012, Detection of the Thatcher illusion (Thompson, Perception, 9:483-484, 1980) is widely upheld as being dependent on configural processing (e.g., Bartlett & Searcy, Cognitive Psychology, 25:281-316, 1993; Boutsen, Humphreys, Praamstra, & Warbrick, NeuroImage, 32:352-367, 2006; Donnelly & Hadwin, Visual Cognition, 10:1001-1017, 2003; Leder & Bruce, Quarterly Journal of Experimental Psychology, 53A:513-536, 2000; Lewis, Perception, 30:769-774, 2001; Maurer, Grand, & Mondloch, Trends in Cognitive Sciences, 6:255-260, 2002; Stürzel & Spillmann, Perception, 29:937-942, 2000). Given that supercapacity processing accompanies configural processing (see Wenger & Townsend, 2001), supercapacity processing should occur in the processing of Thatcherised upright faces. The purpose of this study was to test for evidence that the grotesqueness of upright Thatcherised faces results from supercapacity processing. Two tasks were employed: categorisation of a single face as odd or normal, and a same/different task for sequentially presented faces. The stimuli were typical faces, partially Thatcherised faces (either eyes or mouth inverted) and fully Thatcherised faces. All of the faces were presented upright. The data from both experiments were analysed using mean response times and a number of capacity measures (capacity coefficient, the Miller and Grice inequalities, and the proportional-hazards ratio). The results of both experiments demonstrated some evidence of a redundancy gain for the redundant-target condition over the single-target condition, especially in the response times in Experiment 1. However, there was very limited evidence, in either experiment, that the redundancy gains resulted from supercapacity processing. 
We concluded that the oddity signalled by inversion of eyes and mouths does not arise from positive interdependencies between these features. |
Tom Foulsham; Richard Dewhurst; Marcus Nyström; Halszka Jarodzka; Roger Johansson; Geoffrey Underwood; Kenneth Holmqvist Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach Journal Article In: Journal of Eye Movement Research, vol. 5, no. 3, pp. 1–14, 2012. @article{Foulsham2012, Complex stimuli and tasks elicit particular eye movement sequences. Previous research has focused on comparing between these scanpaths, particularly in memory and imagery research where it has been proposed that observers reproduce their eye movements when recognizing or imagining a stimulus. However, it is not clear whether scanpath similarity is related to memory performance and which particular aspects of the eye movements recur. We therefore compared eye movements in a picture memory task, using a recently proposed comparison method, MultiMatch, which quantifies scanpath similarity across multiple dimensions including shape and fixation duration. Scanpaths were more similar when the same participant's eye movements were compared from two viewings of the same image than between different images or different participants viewing the same image. In addition, fixation durations were similar within a participant and this similarity was associated with memory performance. |
Steven L. Franconeri; Jason M. Scimeca; Jessica C. Roth; Sarah A. Helseth; Lauren E. Kahn Flexible visual processing of spatial relationships Journal Article In: Cognition, vol. 122, no. 2, pp. 210–227, 2012. @article{Franconeri2012, Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly, there is little work on the perceptual mechanisms that allow us to flexibly represent arbitrary spatial relationships, e.g., between objects in a novel room, or the elements within a map, graph or diagram. We describe two classes of mechanism that might allow such judgments. In the simultaneous class, both objects are selected concurrently. In contrast, we propose a sequential class, where objects are selected individually over time. We argue that this latter mechanism is more plausible even though it violates our intuitions. We demonstrate that shifts of selection do occur during spatial relationship judgments that feel simultaneous, by tracking selection with an electrophysiological correlate. We speculate that static structure across space may be encoded as a dynamic sequence across time. Flexible visual spatial relationship processing may serve as a case study of more general visual relation processing beyond space, to other dimensions such as size or numerosity. |
Steven Frisson; Mary Wakefield Psychological essentialist reasoning and perspective taking during reading: A donkey is not a zebra, but a plate can be a clock Journal Article In: Memory & Cognition, vol. 40, no. 2, pp. 297–310, 2012. @article{Frisson2012, In an eyetracking study, we examined whether readers use psychological essentialist reasoning and perspective taking online. Stories were presented in which an animal or an artifact was transformed into another animal (e.g., a donkey into a zebra) or artifact (e.g., a plate into a clock). According to psychological essentialism, the essence of the animal did not change in these stories, while the transformed artifact would be thought to have changed categories. We found evidence that readers use this kind of reasoning online: When reference was made to the transformed animal, the nontransformed term ("donkey") was preferred, but the opposite held for the transformed artifact ("clock" was read faster than "plate"). The immediacy of the effect suggests that this kind of reasoning is employed automatically. Perspective taking was examined within the same stories by the introduction of a novel story character. This character, who was naïve about the transformation, commented on the transformed animal or artifact. If the reader were to take this character's perspective immediately and exclusively for reference solving, then only the transformed term ("zebra" or "clock") would be felicitous. However, the results suggested that while this character's perspective could be taken into account, it seems difficult to completely discard one's own perspective at the same time. |
Nathan Faivre; Vincent Berthet; Sid Kouider Nonconscious influences from emotional faces: A comparison of visual crowding, masking, and continuous flash suppression Journal Article In: Frontiers in Psychology, vol. 3, pp. 129, 2012. @article{Faivre2012, In the study of nonconscious processing, different methods have been used in order to render stimuli invisible. While their properties are well described, the level at which they disrupt nonconscious processing remains unclear. Yet, such accurate estimation of the depth of nonconscious processes is crucial for a clear differentiation between conscious and nonconscious cognition. Here, we compared the processing of facial expressions rendered invisible through gaze-contingent crowding (GCC), masking, and continuous flash suppression (CFS), three techniques relying on different properties of the visual system. We found that both pictures and videos of happy faces suppressed from awareness by GCC were processed such as to bias subsequent preference judgments. The same stimuli manipulated with visual masking and CFS did not significantly bias preference judgments, although they were processed such as to elicit perceptual priming. A significant difference in preference bias was found between GCC and CFS, but not between GCC and masking. These results provide new insights regarding the nonconscious impact of emotional features, and highlight the need for rigorous comparisons between the different methods employed to prevent perceptual awareness. |
Nathan Faivre; Sylvain Charron; Paul Roux; Stephane Lehericy; Sid Kouider Nonconscious emotional processing involves distinct neural pathways for pictures and videos Journal Article In: Neuropsychologia, vol. 50, pp. 3736–3744, 2012. @article{Faivre2012a, Facial expressions are known to impact observers' behavior, even when they are not consciously identifiable. Relying on visual crowding, a perceptual phenomenon whereby peripheral faces become undiscriminable, we show that participants exposed to happy vs. neutral crowded faces rated the pleasantness of subsequent neutral targets accordingly to the facial expression's valence. Using functional magnetic resonance imaging (fMRI) along with psychophysiological interaction analysis, we investigated the neural determinants of this nonconscious preference bias, either induced by static (i.e., pictures) or dynamic (i.e., videos) facial expressions. We found that while static expressions activated primarily the ventral visual pathway (including task-related functional connectivity between the fusiform face area and the amygdala), dynamic expressions triggered the dorsal visual pathway (i.e., posterior parietal cortex) and the substantia innominata, a structure that is contiguous with the dorsal amygdala. As temporal cues are known to improve the processing of visible facial expressions, the absence of ventral activation we observed with crowded videos questions the capacity to integrate facial features and facial motions without awareness. Nevertheless, both static and dynamic facial expressions activated the hippocampus and the orbitofrontal cortex, suggesting that nonconscious preference judgments may arise from the evaluation of emotional context and the computation of aesthetic evaluation. |
Claudia Felser; Ian Cunnings Processing reflexives in a second language: The timing of structural and discourse-level constraints Journal Article In: Applied Psycholinguistics, vol. 33, no. 3, pp. 571–603, 2012. @article{Felser2012, We report the results from two eye-movement monitoring experiments examining the processing of reflexive pronouns by proficient German-speaking learners of second language (L2) English. Our results show that the nonnative speakers initially tried to link English argument reflexives to a discourse-prominent but structurally inaccessible antecedent, thereby violating binding condition A. Our native speaker controls, in contrast, showed evidence of applying condition A immediately during processing. Together, our findings show that L2 learners' initial focusing on a structurally inaccessible antecedent cannot be due to first language influence and is also independent of whether the inaccessible antecedent c-commands the reflexive. This suggests that unlike native speakers, nonnative speakers of English initially attempt to interpret reflexives through discourse-based coreference assignment rather than syntactic binding. |
Claudia Felser; Ian Cunnings; Claire Batterham; Harald Clahsen The timing of island effects in nonnative sentence processing Journal Article In: Studies in Second Language Acquisition, vol. 34, no. 1, pp. 67–98, 2012. @article{Felser2012a, Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension. |