All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2016 |
Jeroen Atsma; Femke Maij; Mathieu Koppen; David E. Irwin; W. Pieter Medendorp Causal inference for spatial constancy across saccades Journal Article In: PLoS Computational Biology, vol. 12, no. 3, pp. e1004766, 2016. @article{Atsma2016, Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements and shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using a SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target or a peripheral non-foveated target that was displaced parallel or orthogonal during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors of the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. |
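The Bayesian causal-inference mixing described in this abstract — integrating presaccadic memory with postsaccadic vision when both likely share one cause, and falling back on memory alone otherwise — can be sketched as follows. This is a minimal illustration, not the authors' fitted model; the function names, the uniform segregation prior, and the Gaussian noise assumptions are illustrative assumptions only.

```python
import math

def gaussian(x, mu, sigma):
    """Probability density of a normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def causal_inference_estimate(x_mem, sigma_mem, x_vis, sigma_vis,
                              p_common=0.5, prior_range=20.0):
    """Model-averaged localization of a presaccadic target.

    Integration (common cause): precision-weighted fusion of the memory
    and visual signals.  Segregation (separate causes): rely on the memory
    signal alone.  The two strategies are mixed by the posterior
    probability that both signals arose from one cause.
    """
    # Likelihood of the observed discrepancy under a common cause:
    # the two signals differ only by their combined noise.
    sigma_diff = math.sqrt(sigma_mem ** 2 + sigma_vis ** 2)
    like_common = gaussian(x_vis - x_mem, 0.0, sigma_diff)
    # Under separate causes, the visual signal could fall anywhere
    # in the prior range (a flat prior, assumed here for simplicity).
    like_separate = 1.0 / prior_range
    post_common = (p_common * like_common) / (
        p_common * like_common + (1 - p_common) * like_separate)

    # Integration: reliability-weighted average of the two signals.
    w_vis = sigma_vis ** -2 / (sigma_mem ** -2 + sigma_vis ** -2)
    x_integrated = (1 - w_vis) * x_mem + w_vis * x_vis

    # Mix the two strategies by the posterior over causal structures.
    return post_common * x_integrated + (1 - post_common) * x_mem
```

With this structure, a small target displacement pulls the reported location toward the postsaccadic target (high posterior on a common cause), while a large displacement leaves the memory-based estimate nearly untouched — the qualitative signature of saccadic suppression of displacement.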
Nada Attar; Matthew H. Schneps; Marc Pomplun Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm Journal Article In: Memory & Cognition, vol. 44, no. 7, pp. 1038–1049, 2016. @article{Attar2016, An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process. |
Janice Attard-Johnson; Markus Bindemann; Caoilte Ó Ciardha Pupillary response as an age-specific measure of sexual interest Journal Article In: Archives of Sexual Behavior, vol. 45, no. 4, pp. 855–870, 2016. @article{AttardJohnson2016, In the visual processing of sexual content, pupil dilation is an indicator of arousal that has been linked to observers' sexual orientation. This study investigated whether this measure can be extended to determine age-specific sexual interest. In two experiments, the pupillary responses of heterosexual adults to images of males and females of different ages were related to self-reported sexual interest, sexual appeal to the stimuli, and a child molestation proclivity scale. In both experiments, the pupils of male observers dilated to photographs of women but not men, children, or neutral stimuli. These pupillary responses corresponded with observers' self-reported sexual interests and their sexual appeal ratings of the stimuli. Female observers showed pupil dilation to photographs of men and women but not children. In women, pupillary responses also correlated poorly with sexual appeal ratings of the stimuli. These experiments provide initial evidence that eye-tracking could be used as a measure of sex-specific interest in male observers, and as an age-specific index in male and female observers. |
Bobby Azarian; Elizabeth G. Esser; Matthew S. Peterson Watch out! Directional threat-related postures cue attention and the eyes Journal Article In: Cognition and Emotion, vol. 30, no. 3, pp. 561–569, 2016. @article{Azarian2016a, Previous work indicates that threatening facial expressions with averted eye gaze can act as a signal of imminent danger, enhancing attentional orienting in the gazed-at direction. However, this threat-related gaze-cueing effect is only present in individuals reporting high levels of anxiety. The present study used eye tracking to investigate whether additional directional social cues, such as averted angry and fearful human body postures, not only cue attention, but also the eyes. The data show that although body direction did not predict target location, anxious individuals made faster eye movements when fearful or angry postures were facing towards (congruent condition) rather than away (incongruent condition) from peripheral targets. Our results provide evidence for attentional cueing in response to threat-related directional body postures in those with anxiety. This suggests that for such individuals, attention is guided by threatening social stimuli in ways that can influence and bias eye movement behaviour. |
Bobby Azarian; Elizabeth G. Esser; Matthew S. Peterson Evidence from the eyes: Threatening postures hold attention Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 3, pp. 764–770, 2016. @article{Azarian2016, Efficient detection of threat provides obvious survival advantages and has resulted in a fast and accurate threat-detection system. Although beneficial under normal circumstances, this system may become hypersensitive and cause threat-processing abnormalities. Past research has shown that anxious individuals have difficulty disengaging attention from threatening faces, but it is unknown whether other forms of threatening social stimuli also influence attentional orienting. Much like faces, human body postures are salient social stimuli, because they are informative of one's emotional state and next likely action. Additionally, postures can convey such information in situations in which another's facial expression is not easily visible. Here we investigated whether there is a threat-specific effect for high-anxious individuals, by measuring the time that it takes the eyes to leave the attended stimulus, a task-irrelevant body posture. The results showed that relative to nonthreatening postures, threat-related postures hold attention in anxious individuals, providing further evidence of an anxiety-related attentional bias for threatening information. This is the first study to demonstrate that attentional disengagement from threatening postures is affected by emotional valence in those reporting anxiety. |
Mariana Babo-Rebelo; Craig G. Richter; Catherine Tallon-Baudry Neural responses to heartbeats in the default network encode the self in spontaneous thoughts Journal Article In: Journal of Neuroscience, vol. 36, no. 30, pp. 7829–7840, 2016. @article{BaboRebelo2016, The default network (DN) has been consistently associated with self-related cognition, but also with bodily state monitoring and autonomic regulation. We hypothesized that these two seemingly disparate functional roles of the DN are functionally coupled, in line with theories proposing that selfhood is grounded in the neural monitoring of internal organs, such as the heart. We measured with magnetoencephalography neural responses evoked by heartbeats while human participants freely mind-wandered. When interrupted by a visual stimulus at random intervals, participants scored the self-relatedness of the interrupted thought. They evaluated their involvement as the first-person perspective subject or agent in the thought ("I"), and on another scale to what degree they were thinking about themselves ("Me"). During the interrupted thought, neural responses to heartbeats in two regions of the DN, the ventral precuneus and the ventromedial prefrontal cortex, covaried, respectively, with the "I" and the "Me" dimensions of the self, even at the single-trial level. No covariation between self-relatedness and peripheral autonomic measures (heart rate, heart rate variability, pupil diameter, electrodermal activity, respiration rate, and phase) or alpha power was observed. Our results reveal a direct link between selfhood and neural responses to heartbeats in the DN and thus directly support theories grounding selfhood in the neural monitoring of visceral inputs. 
More generally, the tight functional coupling between self-related processing and cardiac monitoring observed here implies that, even in the absence of measured changes in peripheral bodily measures, physiological and cognitive functions have to be considered jointly in the DN. |
Tessel Blom; Sebastiaan Mathôt; Christian N. L. Olivers; Stefan Van der Stigchel The pupillary light response reflects encoding, but not maintenance, in visual working memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 11, pp. 1716–1723, 2016. @article{Blom2016, The pupillary light response has been shown not to be a purely reflexive mechanism but to be sensitive to higher order perceptual processes, such as covert visual attention. In the present study we examined whether the pupillary light response is modulated by stimuli that are not physically present but are maintained in visual working memory. In all conditions, displays contained both bright and dark stimuli. Participants were instructed to covertly attend and encode either the bright or the dark stimuli, which then had to be maintained in visual working memory for a subsequent change-detection task. The pupil was smaller in response to encoding bright stimuli compared to dark stimuli. However, this effect did not sustain during the maintenance phase. This was the case even when brightness was directly relevant for the working memory task. These results reveal that the encoding of task-relevant and physically present information in visual working memory is reflected in the pupil. In contrast, the pupil is not sensitive to the maintenance of task-relevant but no longer visible stimuli. One interpretation of our results is that the pupil optimizes its size for perception of stimuli during encoding; however, once stimuli are no longer visible (during maintenance), an “optimal” pupil size no longer serves a purpose, and the pupil may therefore cease to reflect the brightness of the memorized stimuli. |
Indu P. Bodala; Junhua Li; Nitish V. Thakor; Hasan Al-Nashash EEG and eye tracking demonstrate vigilance enhancement with challenge integration Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 273, 2016. @article{Bodala2016, Maintaining vigilance is possibly the first requirement for surveillance tasks where personnel are faced with monotonous yet intensive monitoring tasks. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life and system failure. In this paper, we investigate the possibility to enhance vigilance or sustained attention using 'challenge integration', a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and frontal theta to parietal alpha power ratio, which are used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression on the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity and blink rate obtained from eye tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). 
From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement along with frontal midline theta and frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when challenge stimulus appears in the task suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge integrated phase (RT mean = 3.65 ± 1.4 secs) compared to initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 secs). Our work shows that vigilance level, as assessed by response of these vital signs, is enhanced by challenge integration. |
Jonathan F. G. Boisvert; Neil D. B. Bruce Predicting task from eye movements: On the importance of spatial distribution, dynamics, and image features Journal Article In: Neurocomputing, vol. 207, pp. 653–668, 2016. @article{Boisvert2016, Yarbus' pioneering work in eye tracking has been influential to methodology and in demonstrating the apparent importance of task in eliciting different fixation patterns. There has been renewed interest in Yarbus' assertions on the importance of task in recent years, driven in part by a greater capability to apply quantitative methods to fixation data analysis. A number of recent research efforts have examined the extent to which an observer's task may be predicted from recorded fixation data. This body of recent work has raised a number of interesting questions, with some investigations calling for closer examination of the validity of Yarbus' claims, and subsequent efforts revealing some of the nuances involved in carrying out this type of analysis including both methodological, and data related considerations. In this paper, we present an overview of prior efforts in task prediction, and assess different types of statistics drawn from fixation data, or images in their ability to predict task from gaze. We also examine the extent to which relatively general task definitions (free-viewing, object-search, saliency-viewing, explicit saliency) may be predicted by spatial positioning of fixations, features co-located with fixation points, fixation dynamics and scene structure. This is accomplished in considering the data of Koehler et al. (2014) [30] affording a larger scale, and qualitatively different corpus of data for task prediction relative to existing efforts. Based on this analysis, we demonstrate that both spatial position, as well as local features are of value in distinguishing general task categories. 
The methods proposed provide a general framework for highlighting features that distinguish behavioural differences observed across visual tasks, and we relate new task prediction results in this paper to the body of prior work in this domain. Finally, we also comment on the value of task prediction and classification models in general in understanding facets of gaze behaviour. |
Jantina Bolhuis; Thorsten Kolling; Monika Knopf Looking in the eyes to discriminate: Linking infants' habituation speed to looking behaviour using faces Journal Article In: International Journal of Behavioral Development, vol. 40, no. 3, pp. 243–252, 2016. @article{Bolhuis2016, Studies showed that individual differences in encoding speed as well as looking behaviour during the encoding of facial stimuli can relate to differences in subsequent face discrimination. Nevertheless, a direct linkage between encoding speed and looking behaviour during the encoding of facial stimuli and the role of these encoding characteristics for subsequent discrimination has not been investigated yet. In the present habituation study, an eye-tracker was used to investigate how individual differences in encoding speed (number of habituation trials) relate to individual differences in looking behaviour on faces and the internal facial features (eyes, nose, and mouth) during encoding as well as discrimination. Forty infants habituated to a photograph of a female face. In a subsequent dishabituation phase, a new face was followed by the familiar one. As expected, the results showed that most of the infants were able to habituate to the face and that they managed to discriminate between the new and the familiar face. Furthermore, correlations and analyses of variance showed that individual differences in encoding during habituation related to differences in looking behaviour during habituation as well as dishabituation. Slower-habituating infants could better discriminate between the new and the familiar face and showed a higher interest in the eyes during habituation as well as dishabituation than faster-habituating infants. These data underline that individual differences in encoding speed relate to individual differences in looking behaviour and that increased looking behaviour to important social cues might help subsequent discrimination. |
Sabrina Boll; Marie Bartholomaeus; Ulrike Peter; Ulrike Lupke; Matthias Gamer Attentional mechanisms of social perception are biased in social phobia Journal Article In: Journal of Anxiety Disorders, vol. 40, pp. 83–93, 2016. @article{Boll2016a, Previous studies of social phobia have reported an increased vigilance to social threat cues but also an avoidance of socially relevant stimuli such as eye gaze. The primary aim of this study was to examine attentional mechanisms relevant for perceiving social cues by means of abnormalities in scanning of facial features in patients with social phobia. In two novel experimental paradigms, patients with social phobia and healthy controls matched on age, gender and education were compared regarding their gazing behavior towards facial cues. The first experiment was an emotion classification paradigm which allowed for differentiating reflexive attentional shifts from sustained attention towards diagnostically relevant facial features. In the second experiment, attentional orienting by gaze direction was assessed in a gaze-cueing paradigm in which non-predictive gaze cues shifted attention towards or away from subsequently presented targets. We found that patients as compared to controls reflexively oriented their attention more frequently towards the eyes of emotional faces in the emotion classification paradigm. This initial hypervigilance for the eye region was observed at very early attentional stages when faces were presented for 150 ms, and persisted when facial stimuli were shown for 3 s. Moreover, a delayed attentional orienting into the direction of eye gaze was observed in individuals with social phobia suggesting a differential time course of eye gaze processing in patients and controls. Our findings suggest that basic mechanisms of early attentional exploration of social cues are biased in social phobia and might contribute to the development and maintenance of the disorder. |
Sabrina Boll; Matthias Gamer Psychopathic traits affect the visual exploration of facial expressions Journal Article In: Biological Psychology, vol. 117, pp. 194–201, 2016. @article{Boll2016, Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. |
Yoram S. Bonneh; Yael Adini; Uri Polat Contrast sensitivity revealed by spontaneous eyeblinks: Evidence for a common mechanism of oculomotor inhibition Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–15, 2016. @article{Bonneh2016, Spontaneous eyeblinks are known to serve important physiological functions, and recent evidence shows that they are also linked to cognitive processes. It is yet unclear whether this link reflects a crude rate modulation or, alternatively, an automatic and precise process, tightly linked to the low-level properties of sensory stimuli. We have recently reported (Y. S. Bonneh, Adini, & Polat, 2015) that, for microsaccades, the onset and release from inhibition in response to transient stimuli depend systematically on the low-level stimulus parameters. Here we reanalyzed our previous data for both microsaccades and eyeblinks for observers with sufficient blinking (>10% of trials, 18 of 23 observers tested) who watched and silently counted sequences of Gabor patches at 1 Hz with varied contrast and spatial frequency. We found that spontaneous eyeblinks, although less frequent, were similar to microsaccades in their modulation pattern in response to transient stimuli, demonstrating inhibition and rebound, which were dependent on the contrast and spatial frequency of the stimuli. The average blink response time, measured as the latency of the first blink following its release from inhibition, was longer for lower contrast and higher spatial frequency. Importantly, it was highly correlated with a similar measure for microsaccades as well as with psychophysical measures of contrast sensitivity. These results suggest that both eyeblinks and microsaccades are linked to the same inhibitory mechanism that presumably turns off oculomotor events while processing previous events and generates a rebound effect upon its release. 
The onset of both eyeblinks and microsaccades may thus reflect the time course of this mechanism and the associated cognitive process. |
Paul J. Boon; Artem V. Belopolsky; Jan Theeuwes The role of the oculomotor system in updating visual-spatial working memory across saccades Journal Article In: PLoS ONE, vol. 11, no. 9, pp. e0161829, 2016. @article{Boon2016, Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or making another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and saccade target revealed that target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. 
Taken together, these results suggest that we rely exclusively neither on extraretinal nor on retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends on both the time available to integrate these signals and the distance between the saccade target and the remembered location. |
Ali Borji; James Tanner Reconciling saliency and object center-bias hypotheses in explaining free-viewing fixations Journal Article In: IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 6, pp. 1214–1226, 2016. @article{Borji2016, Predicting where people look in natural scenes has attracted a lot of interest in computer vision and computational neuroscience over the past two decades. Two seemingly contrasting categories of cues have been proposed to influence where people look: low-level image saliency and high-level semantic information. Our first contribution is to take a detailed look at these cues to confirm the hypothesis proposed by Henderson (1993) and Nuthmann & Henderson (2010) that observers tend to look at the center of objects. We analyzed fixation data for scene free-viewing over 17 observers on 60 fully annotated images with various types of objects. Images contained different types of scenes, such as natural scenes, line drawings, and 3D rendered scenes. Our second contribution is to propose a simple combined model of low-level saliency and object center-bias that outperforms each individual component significantly over our data, as well as on the OSIE dataset by Xu et al. (2014). The results reconcile saliency with object center-bias hypotheses and highlight that both types of cues are important in guiding fixations. Our work opens new directions to understand strategies that humans use in observing scenes and objects, and demonstrates the construction of combined models of low-level saliency and high-level object-based information. |
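A combined fixation-prediction model of the kind this abstract describes — a low-level saliency map blended with an object center-bias map — can be sketched as below. This is a minimal illustration under stated assumptions (Gaussian blobs at annotated object centers, a single mixing weight `w`), not the authors' actual model; all function names are hypothetical.

```python
import math

def object_center_map(objects, width, height, sigma=10.0):
    """Center-bias map: a Gaussian blob at each annotated object center."""
    grid = [[0.0] * width for _ in range(height)]
    for (cx, cy) in objects:
        for y in range(height):
            for x in range(width):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return grid

def combine(saliency, center_bias, w=0.5):
    """Pixel-wise weighted blend of the two maps, renormalized to sum to 1
    so the result can be read as a fixation probability map."""
    h, wid = len(saliency), len(saliency[0])
    combined = [[w * saliency[y][x] + (1 - w) * center_bias[y][x]
                 for x in range(wid)] for y in range(h)]
    total = sum(sum(row) for row in combined) or 1.0
    return [[v / total for v in row] for row in combined]
```

In practice the saliency map would come from any standard low-level model, and `w` would be fit to fixation data; the point of the sketch is that the two cue types enter as separate, additively combined channels.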
Sabine Born; Hannah M. Krüger; Eckart Zimmermann; Patrick Cavanagh Compression of space for low visibility probes Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 1–13, 2016. @article{Born2016, Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross, Morrone, & Burr, 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann, Born, Fink, & Cavanagh, 2014). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli. |
Arielle Borovsky; Erica M. Ellis; Julia L. Evans; Jeffrey L. Elman Semantic structure in vocabulary knowledge interacts with lexical and sentence processing in infancy Journal Article In: Child Development, vol. 87, no. 6, pp. 1893–1908, 2016. @article{Borovsky2016, Although the size of a child's vocabulary associates with language-processing skills, little is understood regarding how this relation emerges. This investigation asks whether and how the structure of vocabulary knowledge affects language processing in English-learning 24-month-old children (N = 32; 18 F, 14 M). Parental vocabulary report was used to calculate semantic density in several early-acquired semantic categories. Performance on two language-processing tasks (lexical recognition and sentence processing) was compared as a function of semantic density. In both tasks, real-time comprehension was facilitated for higher density items, whereas lower density items experienced more interference. The findings indicate that language-processing skills develop heterogeneously and are influenced by the semantic network surrounding a known word. |
Arielle Borovsky; Erica M. Ellis; Julia L. Evans; Jeffrey L. Elman Lexical leverage: Category knowledge boosts real-time novel word recognition in 2-year-olds Journal Article In: Developmental Science, vol. 19, no. 6, pp. 918–932, 2016. @article{Borovsky2016a, Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked if their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. |
Chadwick B. Boulay; Florian Pieper; Matthew L. Leavitt; Julio C. Martinez-Trujillo; Adam J. Sachs Single-trial decoding of intended eye movement goals from lateral prefrontal cortex neural ensembles Journal Article In: Journal of Neurophysiology, vol. 115, no. 1, pp. 486–499, 2016. @article{Boulay2016, Neurons in the lateral prefrontal cortex (LPFC) encode sensory and cognitive signals, as well as commands for goal-directed actions. Therefore, the LPFC might be a good signal source for a goal-selection brain-computer interface (BCI) that decodes the intended goal of a motor action previous to its execution. As a first step in the development of a goal-selection BCI, we set out to determine if we could decode simple behavioral intentions to direct gaze to eight different locations in space from single-trial LPFC neural activity. We recorded neuronal spiking activity from microelectrode arrays implanted in area 8A of the LPFC of two adult macaques while they made visually guided saccades to one of eight targets in a center-out task. Neuronal activity encoded target location immediately after target presentation, during a delay epoch, during the execution of the saccade, and every combination thereof. Many (40%) of the neurons that encoded target location during multiple epochs preferred different locations during different epochs. Despite heterogeneous and dynamic responses, the neuronal feature set that best predicted target location was the averaged firing rates from the entire trial and it was best classified using linear discriminant analysis (63.6–96.9% in 12 sessions, mean 80.3%; information transfer rate: 21–59, mean 32.8 bits/min). Our results demonstrate that it is possible to decode intended saccade target location from single-trial LPFC activity and suggest that the LPFC is a suitable signal source for a goal-selection cognitive BCI. |
Jeanne Bovet; Junpeng Lao; Océane Bartholomée; Roberto Caldara; Michel Raymond Mapping female bodily features of attractiveness Journal Article In: Scientific Reports, vol. 6, pp. 18551, 2016. @article{Bovet2016, "Beauty is bought by judgment of the eye" (Shakespeare, Love's Labour's Lost), but the bodily features governing this critical biological choice are still debated. Eye movement studies have demonstrated that males sample coarse body regions expanding from the face, the breasts and the midriff, while making female attractiveness judgements with natural vision. However, the visual system ubiquitously extracts diagnostic extra-foveal information in natural conditions, thus the visual information actually used by men is still unknown. We thus used a parametric gaze-contingent design while males rated attractiveness of female front- and back-view bodies. Males used extra-foveal information when available. Critically, when bodily features were only visible through restricted apertures, fixations strongly shifted to the hips, to potentially extract hip-width and curvature, then the breast and face. Our hierarchical mapping suggests that the visual system primarily uses hip information to compute the waist-to-hip ratio and the body mass index, the crucial factors in determining sexual attractiveness and mate selection. |
Jeffrey S. Bowers; Ivan I. Vankov; Casimir J. H. Ludwig The visual system supports online translation invariance for object identification Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 432–438, 2016. @article{Bowers2016, The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved "online," such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision. |
Johannes Brand; Marco Piccirelli; Marie Claude Hepp-Reymond; Manfred Morari; Lars Michels; Kynan Eng Virtual hand feedback reduces reaction time in an interactive finger reaching task Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0154807, 2016. @article{Brand2016, Computer interaction via visually guided hand or finger movements is a ubiquitous part of daily computer usage in work or gaming. Surprisingly, however, little is known about the performance effects of using virtual limb representations versus simpler cursors. In this study 26 healthy right-handed adults performed cued index finger flexion-extension movements towards an on-screen target while wearing a data glove. They received each of four different types of real-time visual feedback: a simple circular cursor, a point light pattern indicating finger joint positions, a cartoon hand and a fully shaded virtual hand. We found that participants initiated the movements faster when receiving feedback in the form of a hand than when receiving circular cursor or point light feedback. This overall difference was robust for three out of four hand versus circle pairwise comparisons. The faster movement initiation for hand feedback was accompanied by a larger movement amplitude and a larger movement error. We suggest that the observed effect may be related to priming of hand information during action perception and execution affecting motor planning and execution. The results may have applications in the use of body representations in virtual reality applications. |
Doris I. Braun; Karl R. Gegenfurtner Dynamics of oculomotor direction discrimination Journal Article In: Journal of Vision, vol. 16, no. 13, pp. 1–26, 2016. @article{Braun2016, Successful foveation of a dynamic target depends on good predictions of its movement direction and speed. We measured and compared the temporal dynamics of directional precision of both saccades and smooth pursuit and their interactions. We also compared the directional precision of both eye movements to psychophysical direction discrimination thresholds. Directional thresholds of pure pursuit responses improved rapidly and reached asymptotic values of 1.5°-3° within 300 ms after target motion onset, both for trained and untrained observers and irrespective of the speed of the stimuli. Psychophysical thresholds were in the same range. Directional thresholds for saccades in the ramp paradigm were just slightly higher, but these occurred significantly earlier in time at around 200 ms after target motion onset. At the equivalent time during pure pursuit initiation, thresholds were typically higher by 2°-3°. The rise in directional precision (or decrease in thresholds) over time was more pronounced for trials with longer latencies. As an effect, precision depended mainly on time since stimulus motion onset rather than pursuit onset. Directional precision for saccades to static targets was slightly better than to moving targets, at even shorter latencies. We conclude that directional precision is higher for the saccadic system at saccade onset than for the pursuit system, presumably due to additional position signals that are not available to the pursuit system at that point in time. The pursuit response improves rapidly due to refined sensory processing and motor planning. The combination of initial saccades and pursuit to track moving targets is a good strategy for the oculomotor system to reduce directional errors during the phase of initiation. The target speed has very little effect on the directional precision of both eye movements. |
Scott L. Brincat; Earl K. Miller Prefrontal cortex networks shift from external to internal modes during learning Journal Article In: Journal of Neuroscience, vol. 36, no. 37, pp. 9739–9754, 2016. @article{Brincat2016, As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information changes as it becomes laden with "internal" memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC)-regions critical for sensory associations-of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11-27 Hz) oscillatory power and synchrony associated with "top-down" or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. |
James A. Brissenden; Emily J. Levin; David E. Osher; Mark A. Halko; David C. Somers Functional evidence for a cerebellar node of the dorsal attention network Journal Article In: Journal of Neuroscience, vol. 36, no. 22, pp. 6083–6096, 2016. @article{Brissenden2016, The "dorsal attention network" or "frontoparietal network" refers to a network of cortical regions that support sustained attention and working memory. Recent work has demonstrated that cortical nodes of the dorsal attention network possess intrinsic functional connections with a region in ventral cerebellum, in the vicinity of lobules VII/VIII. Here, we performed a series of task-based and resting-state fMRI experiments to investigate cerebellar participation in the dorsal attention network in humans. We observed that visual working memory and visual attention tasks robustly recruit cerebellar lobules VIIb and VIIIa, in addition to canonical cortical dorsal attention network regions. Across the cerebellum, resting-state functional connectivity with the cortical dorsal attention network strongly predicted the level of activation produced by attention and working memory tasks. Critically, cerebellar voxels that were most strongly connected with the dorsal attention network selectively exhibited load-dependent activity, a hallmark of the neural structures that support visual working memory. Finally, we examined intrinsic functional connectivity between task-responsive portions of cerebellar lobules VIIb/VIIIa and cortex. Cerebellum-to-cortex functional connectivity strongly predicted the pattern of cortical activation during task performance. Moreover, resting-state connectivity patterns revealed that cerebellar lobules VIIb/VIIIa group with cortical nodes of the dorsal attention network. This evidence leads us to conclude that the conceptualization of the dorsal attention network should be expanded to include cerebellar lobules VIIb/VIIIa. |
Andreas Brocher; Stephani Foraker; Jean Pierre Koenig Processing of irregular polysemes in sentence reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 11, pp. 1798–1813, 2016. @article{Brocher2016a, The degree to which meanings are related in memory affects ambiguous word processing. We examined irregular polysemes, which have related senses based on similar or shared features rather than a relational rule, like regular polysemy. We tested to what degree the related meanings of irregular polysemes ("wire") are represented with shared semantic information versus unshared information represented separately, like homonyms ("bank"). Monitoring eye fixations, we found that later context supporting the less frequent meaning of an irregular polyseme did not slow down reading compared with control conditions, whereas for homonyms it did. This indicates that in the absence of preceding biasing context, readers access a shared component of an irregular polyseme's representation. Additionally, when the same context words preceded the ambiguous word, both irregular polysemes and homonyms initially elicited longer reading times, but the observed reading slow-down was weaker and less persistent for irregular polysemes than homonyms, indicating less competition between meaning components. We interpret these results as evidence of a shared features representation for irregular polysemes, which additionally incorporates unshared portions of meaning that can compete. When preceding, biasing context is available, readers activate shared and unshared components of the senses, producing a more fully instantiated meaning. |
Andreas Brocher; Tim Graf Pupil old/new effects reflect stimulus encoding and decoding in short-term memory Journal Article In: Psychophysiology, vol. 53, no. 12, pp. 1823–1835, 2016. @article{Brocher2016, We conducted five pupil old/new experiments to examine whether pupil old/new effects can be linked to familiarity and/or recollection processes of recognition memory. In Experiments 1–3, we elicited robust pupil old/new effects for legal words and pseudowords (Experiment 1), positive and negative words (Experiment 2), and low-frequency and high-frequency words (Experiment 3). Importantly, unlike for old/new effects in ERPs, we failed to find any effects of long-term memory representations on pupil old/new effects. In Experiment 4, using the words and pseudowords from Experiment 1, participants made lexical decisions instead of old/new decisions. Pupil old/new effects were restricted to legal words. Additionally requiring participants to make speeded responses (Experiment 5) led to a complete absence of old/new effects. Taken together, these data suggest that pupil old/new effects do not map onto familiarity and recollection processes of recognition memory. They rather seem to reflect strength of memory traces in short-term memory, with little influence of long-term memory representations. Crucially, weakening the memory trace through manipulations in the experimental task significantly reduces pupil old/new effects. |
Andrea Albonico; Manuela Malaspina; Emanuela Bricolo; Marialuisa Martelli; Roberta Daini Temporal dissociation between the focal and orientation components of spatial attention in central and peripheral vision Journal Article In: Acta Psychologica, vol. 171, pp. 85–92, 2016. @article{Albonico2016, Selective attention, i.e. the ability to concentrate one's limited processing resources on one aspect of the environment, is a multifaceted concept that includes different processes like spatial attention and its subcomponents of orienting and focusing. Several studies, indeed, have shown that performance in visual tasks is positively influenced not only by attracting attention to the target location (orientation component), but also by the adjustment of the size of the attentional window according to task demands (focal component). Nevertheless, the relative weight of the two components in central and peripheral vision has never been studied. We conducted two experiments to explore whether different components of spatial attention have different effects in central and peripheral vision. In order to do so, participants underwent either a detection (Experiment 1) or a discrimination (Experiment 2) task where different types of cues elicited different components of spatial attention: a red dot, a small square and a big square (an optimal stimulus for the orientation component, an optimal and a sub-optimal stimulus for the focal component respectively). Response times and cue-size effects indicated a stronger effect of the small square or of the dot in different conditions, suggesting the existence of a dissociation in terms of mechanisms between the focal and the orientation components of spatial attention. Specifically, we found that the orientation component was stronger in periphery, while the focal component was noticeable only in central vision and characterized by an exogenous nature. |
Micah Allen; Darya Frank; D. Samuel Schwarzkopf; Francesca Fardo; Joel S. Winston; Tobias U. Hauser; Geraint Rees Unexpected arousal modulates the influence of sensory noise on confidence Journal Article In: eLife, vol. 5, pp. 1–17, 2016. @article{Allen2016, Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. |
Hosam Al-Samarraie; Samer Muthana Sarsam; Hans Guesgen Predicting user preferences of environment design: A perceptual mechanism of user interface customisation Journal Article In: Behaviour & Information Technology, vol. 35, no. 8, pp. 644–653, 2016. @article{AlSamarraie2016, It is a well-known fact that users vary in their preferences and needs. Therefore, it is very crucial to provide the customisation or personalisation for users in certain usage conditions that are more associated with their preferences. With the current limitation in adopting perceptual processing into user interface personalisation, we introduced the possibility of inferring interface design preferences from the user's eye-movement behaviour. We firstly captured the user's preferences of graphic design elements using an eye-tracker. Then we diagnosed these preferences towards the region of interests to build a prediction model for interface customisation. The prediction models from eye-movement behaviour showed a high potential for predicting users' preferences of interface design based on the paralleled relation between their fixation and saccadic movement. This mechanism provides a novel way of user interface design customisation and opens the door for new research in the areas of human–computer interaction and decision-making. |
Agnès Alsius; Rachel V. Wayne; Martin Paré; Kevin G. Munhall High visual resolution matters in audiovisual speech perception, but only for some Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 5, pp. 1472–1487, 2016. @article{Alsius2016, The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect. |
Tatiana A. Amor; Saulo D. S. Reis; Daniel Campos; Hans J. Herrmann; José S. Andrade Persistence in eye movement during visual search Journal Article In: Scientific Reports, vol. 6, pp. 20815, 2016. @article{Amor2016, As any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. |
Nicola C. Anderson; Mieke Donk; Martijn Meeter The influence of a scene preview on eye movement behavior in natural scenes Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1794–1801, 2016. @article{Anderson2016, Rich contextual and semantic information can be extracted from only a brief presentation of a natural scene. This is presumed to be activated quickly enough to guide initial eye movements into a scene. However, early, short-latency eye movements in natural scenes have been shown to be dependent on the salience distribution across the image (Anderson, Ort, Kruijne, Meeter, & Donk, 2015). In the present work, we manipulated the salience distribution across a natural scene by changing the global contrast. We showed participants a brief real or nonsense preview of the scene and examined the time-course of eye movement guidance. A real preview decreased the latency and increased the amplitude of initial saccades into the image, suggesting that the preview allowed observers to obtain additional contextual information that would otherwise not be available. However, the preview did not completely override the initial tendency for short-latency saccades to be guided by the underlying salience distribution of the image. We discuss these findings in the context of oculomotor selection based on the integration of contextual information and low-level features in a natural scene. |
Bernhard Angele; Timothy J. Slattery; Keith Rayner Two stages of parafoveal processing during reading: Evidence from a display change detection task Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 4, pp. 1241–1249, 2016. @article{Angele2016, We used a display change detection paradigm (Slattery, Angele, & Rayner, Human Perception and Performance, 37, 1924–1938, 2011) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage. |
Evan G. Antzoulatos; Earl K. Miller Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations Journal Article In: eLife, vol. 5, pp. 1–22, 2016. @article{Antzoulatos2016, Categorization has been associated with distributed networks of the primate brain, including the prefrontal (PFC) and posterior parietal cortices (PPC). Although category-selective spiking in PFC and PPC has been established, the frequency-dependent dynamic interactions of frontoparietal networks are largely unexplored. We trained monkeys to perform a delayed-match-to-spatial-category task while recording spikes and local field potentials from the PFC and PPC with multiple electrodes. We found category-selective beta- and delta-band synchrony between and within the areas. However, in addition to the categories, delta synchrony and spiking activity also reflected irrelevant stimulus dimensions. By contrast, beta synchrony only conveyed information about the task-relevant categories. Further, category-selective PFC neurons were synchronized with PPC beta oscillations, while neurons that carried irrelevant information were not. These results suggest that long-range beta-band synchrony could act as a filter that only supports neural representations of the variables relevant to the task at hand. |
Paul L. Aparicio; Elias B. Issa; James J. DiCarlo Neurophysiological organization of the middle face patch in macaque inferior temporal cortex Journal Article In: Journal of Neuroscience, vol. 36, no. 50, pp. 12729–12745, 2016. @article{Aparicio2016, While early cortical visual areas contain fine scale spatial organization of neuronal properties such as orientation preference, the spatial organization of higher-level visual areas is less well understood. The fMRI demonstration of face preferring regions in human ventral cortex (FFA, OFA) and monkey inferior temporal cortex ("face patches") raises the question of how neural selectivity for faces is organized. Here, we targeted hundreds of spatially registered neural recordings to the largest fMRI-identified face selective region in monkeys, the middle face patch (MFP) and show that the MFP contains a graded enrichment of face preferring neurons. At its center, as much as 93% of the sites we sampled responded twice as strongly to faces than to non-face objects. We estimate the maximum neurophysiological size of the MFP to be ∼6 mm in diameter, consistent with its previously reported size under fMRI. Importantly, face selectivity in the MFP varied strongly even between neighboring sites. Additionally, extremely face selective sites were ∼50x more likely to be present inside the MFP than outside. These results provide the first direct quantification of the size and neural composition of the MFP by showing that the cortical tissue localized to the fMRI defined region consists of a very high fraction of face preferring sites near its center, and a monotonic decrease in that fraction along any radial spatial axis. |
Jasper H. Fabius; Alessio Fracasso; Stefan Van Der Stigchel Spatiotopic updating facilitates perception immediately after saccades Journal Article In: Scientific Reports, vol. 6, pp. 34488, 2016. @article{Fabius2016, As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates, and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as large rotational jumps in the direction opposite to rotation direction (backward jumps). We first show that accumulated motion information is updated spatiotopically across saccades. Then, we show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception and are in support of a forward model of transsaccadic perception, aiming at anticipating the consequences of eye movements and operating within the narrow perisaccadic time window. |
Jasper H. Fabius; Martijn J. Schut; Stefan Van der Stigchel Spatial inhibition of return as a function of fixation history, task, and spatial references Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 6, pp. 1633–1641, 2016. @article{Fabius2016a, In oculomotor selection, each saccade is thought to be automatically biased toward uninspected locations, inhibiting the inefficient behavior of repeatedly refixating the same objects. This automatic bias is related to inhibition of return (IOR). Although IOR seems an appealing property that increases efficiency in visual search, such a mechanism would not be efficient in other tasks. Indeed, evidence for additional, more flexible control over refixations has been provided. Here, we investigated whether task demands implicitly affect the rate of refixations. We measured the probability of refixations after series of six binary saccadic decisions under two conditions: visual search and free viewing. The rate of refixations seems influenced by two effects. One effect is related to the rate of intervening fixations, specifically, more refixations were observed with more intervening fixations. In addition, we observed an effect of task set, with fewer refixations in visual search than in free viewing. Importantly, the history-related effect was more pronounced when sufficient spatial references were provided, suggesting that this effect is dependent on spatiotopic encoding of previously fixated locations. This known history-related bias in gaze direction is not the primary influence on the refixation rate. Instead, multiple factors, such as task set and spatial references, assert strong influences as well. |
Laura Fademrecht; Isabelle Bülthoff; Stephan de la Rosa Action recognition in the visual periphery Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–14, 2016. @article{Fademrecht2016, Recognizing whether the gestures of somebody mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much of the previous research on action recognition has primarily focused on central vision. Here our goal is to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first level and second level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen. In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and followed a nonlinear decline with increasing eccentricities. |
Xiaoxu Fan; Lan Wang; Hanyu Shao; Daniel Kersten; Sheng He Temporally flexible feedback signal to foveal cortex for peripheral object recognition Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11627–11632, 2016. @article{Fan2016, Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise in the fovea at different object-noise stimulus onset asynchronies (SOAs) while subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects' spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at a stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image- and object-category-relevant information about peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery. |
Gerardo Fernández; Salvador Guinjoan; Marcelo Sapognikoff; David Orozco; Osvaldo Agamennoni Contextual predictability enhances reading performance in patients with schizophrenia Journal Article In: Psychiatry Research, vol. 241, pp. 333–339, 2016. @article{Fernandez2016, In the present work we analyzed fixation duration in 40 healthy individuals and 18 patients with chronic, stable schizophrenia (SZ) during reading of regular sentences and proverbs. While they read, their eye movements were recorded. We used linear mixed models to analyze fixation durations. The predictability of words N-1, N, and N+1 exerted a strong influence on controls and SZ patients. The influence of the predictabilities of preceding, current, and upcoming words on SZ patients was clearly reduced for proverbs in comparison to regular sentences. Both controls and SZ readers were able to use highly predictable fixated words for easier reading. Our results suggest that SZ readers might compensate for attentional and working memory deficiencies by using stored information from familiar texts to enhance their reading performance. The predictabilities of words in proverbs serve as task-appropriate cues that are used by SZ readers. To the best of our knowledge, this is the first study using eyetracking to measure how patients with SZ process well-defined words embedded in regular sentences and proverbs. Evaluation of the resulting changes in fixation durations might provide a useful tool for understanding how SZ patients could enhance their reading performance. |
Gerardo Fernández; Facundo Manes; Luis E. Politi; David Orozco; Marcela Schumacher; Liliana Castro; Osvaldo Agamennoni; Nora P. Rotstein Patients with mild Alzheimer's disease fail when using their working memory: Evidence from the eye tracking technique Journal Article In: Journal of Alzheimer's Disease, vol. 50, no. 3, pp. 827–838, 2016. @article{Fernandez2016a, Patients with Alzheimer's disease (AD) develop progressive language, visuoperceptual, attentional, and oculomotor changes that can have an impact on their reading comprehension. However, few studies have examined reading behavior in AD, and none have examined the contribution of predictive cueing to reading performance. For this purpose we analyzed the eye movement behavior of 35 healthy readers (Controls) and 35 patients with probable AD during reading of regular and high-predictable sentences. The cloze predictability of words N-1 and N+1 exerted an influence on the reader's gaze duration. The predictabilities of preceding words in high-predictable sentences served as task-appropriate cues that were used by Control readers. In contrast, these effects were not present in AD patients. In Controls, changes in predictability significantly affected fixation duration along the sentence; notably, these changes did not affect fixation durations in AD patients. Hence, only in healthy readers did predictability of upcoming words influence fixation durations via memory retrieval. Our results suggest that Controls used stored information from familiar texts to enhance their reading performance and imply that contextual word predictability, whose processing is proposed to require memory retrieval, only affected reading behavior in healthy subjects. In AD patients, this loss reveals impairments in brain areas such as those corresponding to working memory and memory retrieval. These findings might be relevant for expanding the options for early detection and monitoring in the early stages of AD. 
Furthermore, evaluation of eye movements during reading could provide a new tool for measuring drug impact on patients' behavior. |
Gerardo Fernández; Marcelo Sapognikoff; Salvador Guinjoan; David Orozco; Osvaldo Agamennoni Word processing during reading sentences in patients with schizophrenia: Evidences from the eyetracking technique Journal Article In: Comprehensive Psychiatry, vol. 68, pp. 193–200, 2016. @article{Fernandez2016b, Purpose: The current study analyzed the effects of word properties (i.e., word length, word frequency and word predictability) on the eye movement behavior of patients with schizophrenia (SZ) compared to age-matched controls. Method: 18 SZ patients and 40 age-matched controls participated in the study. Eye movements were recorded during reading of regular sentences using the eyetracking technique. Eye movement analyses were performed using linear mixed models. Findings: Analysis of eye movements revealed that patients with SZ made fewer single fixations and more second-pass fixations compared with healthy individuals (Controls). In addition, SZ patients showed an increase in gaze duration compared to Controls. Interestingly, the effects of current word frequency and current word length were similar in Controls and SZ patients. The high rate of second-pass fixations and the low rate of single fixations might reveal impairments in working memory when integrating neighboring words. In contrast, word frequency and length processing might require less complex mechanisms, which were functioning in SZ patients. Conclusion: To the best of our knowledge, this is the first study measuring dynamically how patients with SZ process well-defined words embedded in regular sentences. The findings suggest that evaluation of the resulting changes in eye movement behavior may supplement current symptom-based diagnosis. |
Aline Ferreira; John Wayne Schwieter; Alexandra Gottardo; Jefferey Jones Cognitive effort in direct and inverse translation performance: Insight from eye-tracking technology Journal Article In: Cadernos de Tradução, vol. 36, no. 3, pp. 60–80, 2016. @article{Ferreira2016, This case study examined the translation performance of four professional translators with the aim of exploring the cognitive effort involved in direct and inverse translation. Four professional translators translated two comparable texts from English into Spanish and from Spanish into English. Eye-tracking technology was used to analyze the total time spent on each task, fixation time, and average fixation time. Fixation counts were measured in three areas of interest: source text, target text, and a browser used as an external support. Results suggested that although total time and fixation count were indicators of cognitive effort during the tasks, fixation counts in the areas of interest showed that more effort was directed toward the source text in both tasks. Overall, this study demonstrates that while more traditional measures of translation difficulty (e.g., total time) indicate more effort in the inverse translation task, eye-tracking data indicate that differences in the effort applied in the two directions must be carefully analyzed, particularly with regard to the areas of interest. |
Jamie Ferri; Joseph Schmidt; Greg Hajcak; Turhan Canli Emotion regulation and amygdala-precuneus connectivity: Focusing on attentional deployment Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 16, no. 6, pp. 991–1002, 2016. @article{Ferri2016, Attentional deployment is an emotion regulation strategy that involves shifting attentional focus. Deploying attention to non-arousing, compared to arousing, regions of unpleasant images has been associated with reduced negative affect, reduced amygdala activation, and increased activity in fronto-parietal control networks. The current study examined neural correlates and functional connectivity associated with using attentional deployment to increase negative affect (deploying attention towards arousing unpleasant information) or to decrease negative affect (deploying attention away from arousing unpleasant information), compared to naturally viewing unpleasant images, in 42 individuals while concurrently monitoring eye movements. Directing attention to both arousing and non-arousing regions resulted in enhanced fronto-parietal activation compared to natural viewing, but only directing attention to non-arousing regions was associated with changes in amygdala activation. There were no significant differences in connectivity between naturally viewing unpleasant images and focusing on arousing regions. However, naturally viewing unpleasant images, relative to focusing on non-arousing regions, was associated with increased connectivity between the amygdala and visual cortex, while focusing on non-arousing regions of unpleasant images, compared to natural viewing, was associated with increased connectivity between the amygdala and the precuneus. Amygdala-precuneus connectivity correlated positively with eye-tracking measures of attentional deployment success and with trait reappraisal. 
Deploying attention away from arousing unpleasant information, then, may depend upon functional relationships between the amygdala and parietal regions implicated in attentional control. Furthermore, these relationships might relate to the ability to successfully implement attentional deployment, and the predisposition to utilize adaptive emotion regulation strategies. |
Nonie J. Finlayson; Julie D. Golomb Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth Journal Article In: Vision Research, vol. 127, pp. 49–56, 2016. @article{Finlayson2016, A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information – not position-in-depth – seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location. |
Petra Fischer; José P. Ossandón; Johannes Keyser; Alessandro Gulberti; Niklas Wilming; Wolfgang Hamel; Johannes Köppen; Carsten Buhmann; Manfred Westphal; Christian Gerloff; Christian K. E. Moll; Andreas K. Engel; Peter König STN-DBS reduces saccadic hypometria but not visuospatial bias in Parkinson's disease patients Journal Article In: Frontiers in Behavioral Neuroscience, vol. 10, pp. 85, 2016. @article{Fischer2016, In contrast to its well-established role in alleviating skeleto-motor symptoms in Parkinson's disease, little is known about the impact of deep brain stimulation (DBS) of the subthalamic nucleus (STN) on oculomotor control and attention. Eye-tracking data of 17 patients with left-hemibody symptom onset were compared with data from 17 age-matched control subjects. Free viewing of natural images was assessed without stimulation as a baseline and during bilateral DBS. To examine the involvement of ventral STN territories in oculomotion and spatial attention, we employed unilateral stimulation via the left and right ventralmost contacts, respectively. When DBS was off, patients showed shorter saccades and a rightward viewing bias compared with controls. Bilateral stimulation in therapeutic settings improved saccadic hypometria but not the visuospatial bias. At a group level, unilateral ventral stimulation yielded no consistent effects. However, the evaluation of electrode position within normalized MNI coordinate space revealed that the extent of early exploration bias correlated with the precise stimulation site within the left subthalamic area. These results suggest that oculomotor impairments, but not higher-level exploration patterns, are effectively ameliorable by DBS in therapeutic settings. Our findings highlight the relevance of STN topography in selecting contacts for chronic stimulation, especially when visuospatial attention deficits appear. |
Phillip D. Fletcher; Jennifer M. Nicholas; Laura E. Downey; Hannah L. Golden; Camilla N. Clark; Carolina Pires; Jennifer L. Agustus; Catherine J. Mummery; Jonathan M. Schott; Jonathan D. Rohrer; Sebastian J. Crutch; Jason D. Warren A physiological signature of sound meaning in dementia Journal Article In: Cortex, vol. 77, pp. 13–23, 2016. @article{Fletcher2016, The meaning of sensory objects is often behaviourally and biologically salient and decoding of semantic salience is potentially vulnerable in dementia. However, it remains unclear how sensory semantic processing is linked to physiological mechanisms for coding object salience and how that linkage is affected by neurodegenerative diseases. Here we addressed this issue using the paradigm of complex sounds. We used pupillometry to compare physiological responses to real versus synthetic nonverbal sounds in patients with canonical dementia syndromes (behavioural variant frontotemporal dementia - bvFTD, semantic dementia - SD; progressive nonfluent aphasia - PNFA; typical Alzheimer's disease - AD) relative to healthy older individuals. Nonverbal auditory semantic competence was assessed using a novel within-modality sound classification task and neuroanatomical associations of pupillary responses were assessed using voxel-based morphometry (VBM) of patients' brain MR images. After taking affective stimulus factors into account, patients with SD and AD showed significantly increased pupil responses to real versus synthetic sounds relative to healthy controls. The bvFTD, SD and AD groups had a nonverbal auditory semantic deficit relative to healthy controls and nonverbal auditory semantic performance was inversely correlated with the magnitude of the enhanced pupil response to real versus synthetic sounds across the patient cohort. 
A region of interest analysis demonstrated neuroanatomical associations of overall pupil reactivity and differential pupil reactivity to sound semantic content in superior colliculus and left anterior temporal cortex respectively. Our findings suggest that autonomic coding of auditory semantic ambiguity in the setting of a damaged semantic system may constitute a novel physiological signature of neurodegenerative diseases. |
Meg Fluharty; Ines Jentzsch; Manuel Spitschan; Dhanraj Vishwanath Eye fixation during multiple object attention is based on a representation of discrete spatial foci Journal Article In: Scientific Reports, vol. 6, pp. 31832, 2016. @article{Fluharty2016, We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, simple 2-foci division of attention or a circular spotlight. Moreover, attentional performance was incompatible with serial selection. Together with previous studies, our findings are compatible with a view that attentional selection and fixation rely on shared spatial representations and suggest a more nuanced definition of overt vs. covert attention. |
Rebecca M. Foerster Task-irrelevant expectation violations in sequential manual actions: Evidence for a "check-after-surprise" mode of visual attention and eye-hand decoupling Journal Article In: Frontiers in Psychology, vol. 7, pp. 1845, 2016. @article{Foerster2016, When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task, determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that violations of such expectations that are task-relevant (e.g., a target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated with a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as a proxy for covert attention. The targets' visual features and locations stayed constant for 65 prechange trials, allowing participants to practice the manual action sequence. Subsequently, a task-irrelevant expectation violation occurred and remained for 20 change trials. Specifically, action target number 4 appeared in a different font. In 15 reversion trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower and eye scanpaths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor paths remained completely unaffected. Effects lasted for 2–3 change trials and did not reappear during reversion. 
In conclusion, an unexpected task-irrelevant change on a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a “check-after-surprise” mode of attentional selection. |
Tomas Folke; Catrine Jacobsen; Stephen M. Fleming; Benedetto De Martino Explicit representation of confidence informs future value-based decisions Journal Article In: Nature Human Behaviour, vol. 1, pp. 0002, 2016. @article{Folke2016, Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants' eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions. |
Tom Foulsham; Dean Wybrow; Neil Cohn Reading without words: Eye movements in the comprehension of comic strips Journal Article In: Applied Cognitive Psychology, vol. 30, no. 4, pp. 566–579, 2016. @article{Foulsham2016, The study of attention in pictures is mostly limited to individual images. When we ‘read' a visual narrative (e.g., a comic strip), the pictures have a coherent sequence, but it is not known how this affects attention. In two experiments, we eyetracked participants in order to investigate how disrupting the visual sequence of a comic strip would affect attention. Both when panels were presented one at a time (Experiment 1) and when a sequence was presented all together (Experiment 2), pictures were understood more quickly and with fewer fixations when in their original order. When order was randomised, the same pictures required more attention and additional ‘regressions'. Fixation distributions also differed when the narrative was intact, showing that context affects where we look. This reveals the role of top-down structures when we attend to pictorial information, as well as providing a springboard for applied research into attention within image sequences. |
Alessio Fracasso; Yvonne Koenraads; Giorgio L. Porro; Serge O. Dumoulin Bilateral population receptive fields in congenital hemihydranencephaly Journal Article In: Ophthalmic and Physiological Optics, vol. 36, no. 3, pp. 324–334, 2016. @article{Fracasso2016a, PURPOSE: Congenital hemihydranencephaly (HH) is a very rare disorder characterised by prenatal near-complete unilateral loss of the cerebral cortex. We investigated a patient affected by congenital right HH whose visual field extended significantly into both visual hemifields, suggesting a reorganisation of the remaining left visual hemisphere. We examined the early visual cortex reorganisation using functional MRI (7T) and population receptive field (pRF) modelling. METHODS: Data were acquired by means of 7T MRI while the patient affected by HH viewed conventional population receptive field mapping stimuli. Two possible pRF reorganisation schemes were evaluated, in which every cortical location processed information from either (i) a single region of the visual field or (ii) two bilateral regions of the visual field. RESULTS: In the patient affected by HH, bilateral pRFs in single cortical locations of the remaining hemisphere were found. In addition, using this specific pRF reorganisation scheme, the biologically known relationship between pRF size and eccentricity was found. CONCLUSIONS: Bilateral pRFs were found in the remaining left hemisphere of the patient affected by HH, indicating reorganisation of intra-cortical wiring of the early visual cortex and confirming brain plasticity and reorganisation after early cerebral damage in humans. |
Alessio Fracasso; David Melcher Saccades influence the visibility of targets in rapid stimulus sequences: The roles of mislocalization, retinal distance and remapping Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 58, 2016. @article{Fracasso2016, Briefly presented targets around the time of a saccade are mislocalized towards the saccadic landing point. This has been taken as evidence for a remapping mechanism that accompanies each eye movement, helping maintain visual stability across large retinal shifts. Previous studies have shown that spatial mislocalization is greatly diminished when trains of brief stimuli are presented at a high frequency rate, which might help to explain why mislocalization is rarely perceived in everyday viewing. Studies in the laboratory have shown that mislocalization can reduce metacontrast masking by causing target stimuli in a masking sequence to be perceived as shifted in space towards the saccadic target and thus more easily discriminated. We investigated the influence of saccades on target discrimination when target and masks were presented in a rapid serial visual presentation (RSVP), as well as with forward masking and with backward masking. In a series of experiments, we found that performance was influenced by the retinal displacement caused by the saccade itself but that an additional component of un-masking occurred even when the retinal location of target and mask was matched. These results speak in favor of a remapping mechanism that begins before the eyes start moving and continues well beyond saccadic termination. |
Stefan Frässle; Sören Krach; Frieder M. Paulus; Andreas Jansen Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing Journal Article In: Scientific Reports, vol. 6, pp. 27153, 2016. @article{Fraessle2016, While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness- related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. |
Stefan Frässle; Frieder M. Paulus; Sören Krach; Stefan Robert Schweinberger; Klaas Enno Stephan; Andreas Jansen Mechanisms of hemispheric lateralization: Asymmetric interhemispheric recruitment in the face perception network Journal Article In: NeuroImage, vol. 124, pp. 977–988, 2016. @article{Fraessle2016a, Perceiving human faces constitutes a fundamental ability of the human mind, integrating a wealth of information essential for social interactions in everyday life. Neuroimaging studies have unveiled a distributed neural network consisting of multiple brain regions in both hemispheres. Whereas the individual regions in the face perception network and the right-hemispheric dominance for face processing have been subject to intensive research, the functional integration among these regions and hemispheres has received considerably less attention. Using dynamic causal modeling (DCM) for fMRI, we analyzed the effective connectivity between the core regions in the face perception network of healthy humans to unveil the mechanisms underlying both intra- and interhemispheric integration. Our results suggest that the right-hemispheric lateralization of the network is due to an asymmetric face-specific interhemispheric recruitment at an early processing stage - that is, at the level of the occipital face area (OFA) but not the fusiform face area (FFA). As a structural correlate, we found that OFA gray matter volume was correlated with this asymmetric interhemispheric recruitment. Furthermore, exploratory analyses revealed that interhemispheric connection asymmetries were correlated with the strength of pupil constriction in response to faces, a measure with potential sensitivity to holistic (as opposed to feature-based) processing of faces. 
Overall, our findings thus provide a mechanistic description for lateralized processes in the core face perception network, point to a decisive role of interhemispheric integration at an early stage of face processing among bilateral OFA, and tentatively indicate a relation to individual variability in processing strategies for faces. These findings provide a promising avenue for systematic investigations of the potential role of interhemispheric integration in future studies. |
Mallory Frayn; Christopher R. Sears; Kristin M. von Ranson A sad mood increases attention to unhealthy food images in women with food addiction Journal Article In: Appetite, vol. 100, pp. 55–63, 2016. @article{Frayn2016, Food addiction and emotional eating both influence eating and weight, but little is known of how negative mood affects the attentional processes that may contribute to food addiction. The purpose of this study was to compare attention to food images in adult women (N = 66) with versus without food addiction, before and after a sad mood induction (MI). Participants' eye fixations were tracked and recorded throughout 8-s presentations of displays with healthy food, unhealthy food, and non-food images. Food addiction was self-reported using the Yale Food Addiction Scale. The sad MI involved watching an 8-min video about a young child who passed away from cancer. It was predicted that: (1) participants in the food addiction group would attend to unhealthy food significantly more than participants in the control group, and (2) participants in the food addiction group would increase their attention to unhealthy food images following the sad MI, due to increased emotional reactivity and poorer emotional regulation. As predicted, the sad MI had a different effect for those with versus without food addiction: for participants with food addiction, attention to unhealthy images increased following the sad MI and attention to healthy images decreased, whereas for participants without food addiction the sad MI did not alter attention to food. These findings contribute to researchers' understanding of the cognitive factors underlying food addiction. |
Susannah F. Freebody; Gustav Kuhn Own-age biases in adults' and children's joint attention: Biased face prioritization, but not gaze following! Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 2, pp. 372–379, 2016. @article{Freebody2016, Previous studies have reported own-age biases in younger and older adults in gaze following. We investigated own-age biases in social attentional processes between adults and children by focusing on two aspects of the joint attention process: the extent to which people attend towards an individual's face, and the extent to which they fixate objects that are looked at by this person (i.e., gaze following). Participants viewed images that always contained a child and an adult who either looked towards each other or each looked at objects located to their side. Observers consistently and rapidly fixated the actors' faces, though the children were faster to fixate the child's face than the adult's face, whilst the adults were faster to fixate the adult's face than the child's face. The children also spent significantly more time fixating the child's face than the adult's face, and the opposite pattern of results was found for the adults. Whilst both adults and children prioritized objects when they were looked at by the actor, both groups showed equivalent levels of gaze following, and there was no own-age bias for gaze following. Our results show an own-age bias for prioritizing faces, but not for gaze following. |
Melinda Fricke; Judith F. Kroll; Paola E. Dussias Phonetic variation in bilingual speech: A lens for studying the production-comprehension link Journal Article In: Journal of Memory and Language, vol. 89, pp. 110–137, 2016. @article{Fricke2016, We exploit the unique phonetic properties of bilingual speech to ask how processes occurring during planning affect speech articulation, and whether listeners can use the phonetic modulations that occur in anticipation of a codeswitch to help restrict their lexical search to the appropriate language. An analysis of spontaneous bilingual codeswitching in the Bangor Miami Corpus (Deuchar, Davies, Herring, Parafita Couto, & Carter, 2014) reveals that in anticipation of switching languages, Spanish-English bilinguals produce slowed speech rate and cross-language phonological influence on consonant voice onset time. A study of speech comprehension using the visual world paradigm demonstrates that bilingual listeners can indeed exploit these low-level phonetic cues to anticipate that a codeswitch is coming and to suppress activation of the non-target language. We discuss the implications of these results for current theories of bilingual language regulation, and situate them in terms of recent proposals relating the coupling of the production and comprehension systems more generally. |
Benjamin Gagl Blue hypertext is a good design decision: No perceptual disadvantage in reading and successful highlighting of relevant information Journal Article In: PeerJ, vol. 4, pp. 1–11, 2016. @article{Gagl2016, BACKGROUND: Highlighted text on the Internet (i.e., hypertext) is predominantly blue and underlined. The perceptibility of these hypertext characteristics has been heavily questioned by applied research, and empirical tests have yielded inconclusive results. The ability to recognize blue text in foveal and parafoveal vision may be constrained by the low number of foveally centered blue-light-sensitive retinal cells. The present study investigates whether the foveal and parafoveal perceptibility of blue hypertext is reduced in comparison to normal black text during reading. METHODS: A silent-sentence reading study was conducted with simultaneous eye movement recordings and the invisible boundary paradigm, which allows foveal and parafoveal perceptibility to be investigated separately (comparing fixation times after degraded vs. un-degraded parafoveal previews). Target words in sentences were presented either in black or blue and either underlined or normal. RESULTS: No effect of color or underlining, but a preview benefit, could be detected for first-pass reading measures. Fixation time measures that included re-reading, e.g., total viewing times, showed, in addition to a preview effect, reduced fixation times for non-highlighted (black, not underlined) target words in contrast to highlighted target words (blue, underlined, or both). DISCUSSION: The present pattern reflects no detectable perceptual disadvantage of hyperlink stimuli, but rather an increased attraction of attentional resources through highlighting after first-pass reading. Blue or underlined text allows readers to easily perceive hypertext, and at the same time readers re-visited highlighted words longer. On the basis of the present evidence, blue hypertext can be safely recommended to web designers for future use. |
Caroline Ego; Lucie Bonhomme; Jean-Jacques Orban de Xivry; David Da Fonseca; Philippe Lefèvre; Guillaume S. Masson; Christine Deruelle Behavioral characterization of prediction and internal models in adolescents with autistic spectrum disorders Journal Article In: Neuropsychologia, vol. 91, pp. 335–345, 2016. @article{Ego2016, Autism has been considered as a deficit in prediction of the upcoming event or of the sensory consequences of our own movements. To test this hypothesis, we recorded eye movements from high-functioning autistic adolescents and from age-matched controls during a blanking paradigm. In this paradigm, adolescents were instructed to follow a moving target with their eyes even during its transient disappearance. Given the absence of visual information during the blanking period, eye movements during this period are solely controlled on the basis of the prediction of the ongoing target motion. Typical markers of predictive eye movements, such as the number and accuracy of predictive saccades and the predictive reacceleration before target reappearance, were identical in the two populations. In addition, the synergy of predictive saccades and smooth pursuit observed during the blanking periods, which is a marker for the quality of internal models of target/eye motion, was comparable between the two populations. These results suggest that, in our large population of high-functioning autistic adolescents, both predictive abilities and internal models are left intact, at least for low-level sensorimotor transformations. |
Caroline Ego; Demet Yüksel; Jean-Jacques Orban de Xivry; Philippe Lefèvre Development of internal models and predictive abilities for visual tracking during childhood Journal Article In: Journal of Neurophysiology, vol. 115, no. 1, pp. 301–309, 2016. @article{Ego2016a, The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, these improvements might reflect both the development of internal models and of feedback control. In contrast, visual tracking of a temporarily invisible target provides specific markers of prediction and internal models for eye movements. Therefore, we recorded eye movements in 50 children (aged 5-19 yr) and in 10 adults, who were asked to pursue a visual target that was temporarily blanked. Results show that the youngest children (5-7 yr) exhibit overall oculomotor behavior in this task that is qualitatively similar to that observed in adults. However, the performance of older subjects, in terms of accuracy at target reappearance and variability of behavior, was much better than that of the youngest children. This late maturation of predictive mechanisms was reflected in the development with age of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades that relies on internal models of eye and target displacement is related to the continuous maturation of the cerebellum. |
Krista A. Ehinger; Ruth Rosenholtz A general account of peripheral encoding also predicts scene perception performance Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–19, 2016. @article{Ehinger2016, People are good at rapidly extracting the "gist" of a scene at a glance, meaning with a single fixation. It is generally presumed that this performance cannot be mediated by the same encoding that underlies tasks such as visual search, for which researchers have suggested that selective attention may be necessary to bind features from multiple preattentively computed feature maps. This has led to the suggestion that scenes might be special, perhaps utilizing an unlimited-capacity channel, perhaps due to brain regions dedicated to this processing. Here we test whether a single encoding might instead underlie all of these tasks. In our study, participants performed various navigation-relevant scene perception tasks while fixating photographs of outdoor scenes. Participants answered questions about scene category, spatial layout, geographic location, or the presence of objects. We then asked whether an encoding model previously shown to predict performance in crowded object recognition and visual search might also underlie performance on those tasks. We show that this model does a reasonably good job of predicting performance on these scene tasks, suggesting that scene tasks may not be so special; they may rely on the same underlying encoding as search and crowded object recognition. We also demonstrate that a number of alternative "models" of the information available in the periphery also do a reasonable job of predicting performance at the scene tasks, suggesting that scene tasks alone may not be ideal for distinguishing between models. |
Wolfgang Einhäuser; Antje Nuthmann Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–17, 2016. @article{Einhaeuser2016, During natural scene viewing, humans typically attend and fixate selected locations for about 200–400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher-level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations. |
Michelle L. Eisenberg; Jeffrey M. Zacks Ambient and focal visual processing of naturalistic activity Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–12, 2016. @article{Eisenberg2016, When people inspect a picture, they progress through two distinct phases of visual processing: an ambient, or exploratory, phase that emphasizes input from peripheral vision and rapid acquisition of low-frequency information, followed by a focal phase that emphasizes central vision, salient objects, and high-frequency information. Does this qualitative shift occur during dynamic scene viewing? If so, when? One possibility is that shifts to exploratory processing are triggered at subjective event boundaries. This shift would be adaptive, because event boundaries typically occur when activity features change and when activity becomes unpredictable. Here, we used a perceptual event segmentation task, in which people identified boundaries between meaningful units of activity, to test this hypothesis. In two studies, an eye tracker recorded eye movements and pupil size while participants first watched movies of actors engaged in everyday activities and then segmented them into meaningful events. Saccade amplitudes and fixation durations during the initial viewings suggest that event boundaries function much like the onset of a new picture during static picture presentation: Viewers initiate an ambient processing phase and then progress to focal viewing as the event progresses. These studies suggest that this shift in processing mode could play a role in the formation of mental representations of the current environment. |
Yasmine El-Shamayleh; Anitha Pasupathy Contour curvature as an invariant code for objects in visual area V4 Journal Article In: Journal of Neuroscience, vol. 36, no. 20, pp. 5532–5543, 2016. @article{ElShamayleh2016, Size-invariant object recognition, the ability to recognize objects across transformations of scale, is a fundamental feature of biological and artificial vision. To investigate its basis in the primate cerebral cortex, we measured single-neuron responses to stimuli of varying size in visual area V4, a cornerstone of the object-processing pathway, in rhesus monkeys (Macaca mulatta). Leveraging two competing models for how neuronal selectivity for the bounding contours of objects may depend on stimulus size, we show that most V4 neurons (∼70%) encode objects in a size-invariant manner, consistent with selectivity for a size-independent parameter of boundary form: for these neurons, "normalized" curvature, rather than "absolute" curvature, provided a better account of responses. Our results demonstrate the suitability of contour curvature as a basis for size-invariant object representation in the visual cortex, and posit V4 as a foundation for behaviorally relevant object codes. |
Joris A. Elshout; Freekje Asten; Carel B. Hoyng; Douwe P. Bergsma; Albert V. Van den Berg Visual rehabilitation in chronic cerebral blindness: A randomized controlled crossover study Journal Article In: Frontiers in Neurology, vol. 7, pp. 92, 2016. @article{Elshout2016, The treatment of patients suffering from cerebral blindness following stroke is a topic of much recent interest. Several types of treatment are under investigation, such as substitution with prisms and compensatory saccade training. A third approach, aimed at vision restitution, is controversial, as a properly controlled study design has been missing. In the current study, 27 chronic stroke patients with homonymous visual field defects were trained at home with a visual training device. We used a discrimination task for two types of stimuli: a static point stimulus and a new optic-flow-discontinuity stimulus. Using a randomized controlled crossover design, each patient received two successive training rounds: one with high-contrast stimuli in their affected hemifield (test) and one with low-contrast stimuli in their intact hemifield (control). Goldmann and Humphrey perimetry were performed at the start of the study and following each training round. In addition, reading performance was measured. Goldmann perimetry revealed a statistically significant reduction of the visual field defect after the test training, but not after the control training or after no intervention. For both training rounds combined, Humphrey perimetry revealed that the effect of a directed training (sensitivity change in the trained hemifield) exceeded that of an undirected training (sensitivity change in the untrained hemifield). The interaction between trained and tested hemifield was just above the threshold of significance (p = 0.058). Interestingly, reduction of the field defect assessed by Goldmann perimetry increased with the difference between defect size as measured by Humphrey and Goldmann perimetry prior to training. Moreover, improvement of visual sensitivity measured by Humphrey perimetry increased with the fraction of non-responsive elements (i.e., more relative field loss) in Humphrey perimetry prior to training. Reading speed showed a significant improvement after training. Our findings demonstrate that our training can result in a reduction of the visual field defect. Improved reading performance after defect training further supports the significance of our training for improvement in daily life activities. |
Masaki Emoto; Hideki Fukuda Correlation between peak velocity of saccades and susceptibility to motion blur Journal Article In: Journal of Display Technology, vol. 12, no. 9, pp. 976–981, 2016. @article{Emoto2016, A major problem in the subjective evaluation of TV image quality is individual variability among viewers. If observers are not carefully selected for viewing studies, large individual differences in the susceptibility to image blurring result in imprecise evaluations and loss of power to detect statistically significant differences between experimental conditions. In assessments of the picture quality of traditional television, which has a narrow field of view (FOV), observers' visual acuity (VA) should be screened before the subjective evaluations. For emerging TV systems with wide FOV (Ultra-High-Definition TV: UHDTV), in which objects move quickly relative to the display frame, it is unclear whether screening viewers' VA is sufficient for selecting viewers to subjectively evaluate moving picture quality or sharpness. Here, we evaluated saccadic eye movement parameters to identify adequate methods to screen participants for studies evaluating UHDTV motion image quality. Each participant's evaluations of two moving pictures were highly correlated, suggesting that participants evaluated sharpness consistently. A significant correlation was observed between the average subjective evaluation score and the peak saccade velocity, but not the VA, of each participant. We conclude that each participant has a certain susceptibility to image blur when evaluating moving pictures, and that this susceptibility correlates with the participant's peak saccade velocity. Thus, the objective measure of peak saccade velocity can be used to screen participants for motion picture evaluation studies. |
Ian M. Erkelens; Benjamin Thompson; William R. Bobier Unmasking the linear behaviour of slow motor adaptation to prolonged convergence Journal Article In: European Journal of Neuroscience, vol. 43, no. 12, pp. 1553–1560, 2016. @article{Erkelens2016, Adaptation to changing environmental demands is central to maintaining optimal motor system function. Current theories suggest that adaptation in both the skeletal-motor and oculomotor systems involves a combination of fast (reflexive) and slow (recalibration) mechanisms. Here we used the oculomotor vergence system as a model to investigate the mechanisms underlying slow motor adaptation. Unlike reaching with the upper limbs, vergence is less susceptible to changes in cognitive strategy that can affect the behaviour of motor adaptation. We tested the hypothesis that mechanisms of slow motor adaptation reflect early neural processing by assessing the linearity of adaptive responses over a large range of stimuli. Using varied disparity stimuli in conflict with accommodation, the slow adaptation of tonic vergence was found to exhibit a linear response, whereby the rate (R² = 0.85, p < 0.0001) and amplitude (R² = 0.65, p < 0.0001) of the adaptive effects increased proportionally with stimulus amplitude. These results suggest that this slow adaptive mechanism is an early neural process, implying a fundamental physiological nature potentially dominated by subcortical and cerebellar substrates. |
Leandro L. Di Stasi; Michael B. McCamy; Susana Martinez-Conde; Ellis Gayles; Chad Hoare; Michael Foster; Andrés Catena; Stephen L. Macknik Effects of long and short simulated flights on the saccadic eye movement velocity of aviators Journal Article In: Physiology and Behavior, vol. 153, pp. 91–96, 2016. @article{DiStasi2016, Aircrew fatigue is a major contributor to operational errors in civil and military aviation. Objective detection of pilot fatigue is thus critical to prevent aviation catastrophes. Previous work has linked fatigue to changes in oculomotor dynamics, but few studies have studied this relationship in critical safety environments. Here we measured the eye movements of US Marine Corps combat helicopter pilots before and after simulated flight missions of different durations. We found a decrease in saccadic velocities after long simulated flights compared to short simulated flights. These results suggest that saccadic velocity could serve as a biomarker of aviator fatigue. |
Carolina Diaz-Piedra; Héctor Rieiro; Juan Suárez; Francisco Rios-Tejada; Andrés Catena; Leandro Luigi Di Stasi Fatigue in the military: Towards a fatigue detection test based on the saccadic velocity Journal Article In: Physiological Measurement, vol. 37, no. 9, pp. N62–N75, 2016. @article{DiazPiedra2016, Fatigue is a major contributing factor to operational errors. Therefore, the validation of objective and sensitive indices to detect fatigue is critical to prevent accidents and catastrophes. Whereas tests based on saccadic velocity (SV) have become popular, their sensitivity in the military is not yet clear, since most research has been conducted in laboratory settings using instruments that are not fully validated. Field studies remain scarce, especially in extreme conditions such as real flights. Here, we investigated the effects of real, long flights on SV. We assessed five newly commissioned military helicopter pilots during their aviation training. Pilots flew Sikorsky S-76C helicopters, under instrument flight rules, for more than 2 h (ca. 150 min). Eye movements were recorded before and after the flight with an eye tracker using a standard guided-saccade task. We also collected subjective ratings of fatigue. SV significantly decreased from the Pre-Flight to the Post-Flight session in all pilots by around 3% (range: 1-4%). Subjective ratings showed the same tendency. We provide conclusive evidence of the high sensitivity of fatigue tests based on SV in real flight conditions, even in small samples. This result might offer military medical departments a valid and useful biomarker of warfighter physiological state. |
Adele Diederich; Hans Colonius; Farid I. Kandil Prior knowledge of spatiotemporal configuration facilitates crossmodal saccadic response: A TWIN analysis Journal Article In: Experimental Brain Research, vol. 234, no. 7, pp. 2059–2076, 2016. @article{Diederich2016, Saccadic reaction times from a focused-attention task with a visual target and an acoustic nontarget support the hypothesis that the amount of saccadic facilitation in the presence of a nontarget increases with the prior knowledge of alignment with the target across different blocks of trials. The time-window-of-integration model can account for the size of the effect by having window size depend on the prior knowledge of alignment. Some efforts to identify the neural correlates of the effect are discussed. |
Pia Dietze; Eric D. Knowles Social class and the motivational relevance of other human beings: Evidence from visual attention Journal Article In: Psychological Science, vol. 27, no. 11, pp. 1517–1527, 2016. @article{Dietze2016, We theorize that people's social class affects their appraisals of others' motivational relevance—the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In Study 1, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. In Studies 2a and 2b, participants' eye movements were tracked while they viewed street scenes; higher class was associated with reduced attention to people in the images. In Study 3, a change-detection procedure assessed the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed. |
Gregory J. Digirolamo; Neha Patel; Clare L. Blaukopf Arousal facilitates involuntary eye movements Journal Article In: Experimental Brain Research, vol. 234, pp. 1967–1976, 2016. @article{Digirolamo2016, Attention plays a critical role in action selection. However, the role of attention in eye movements is complicated, as these movements can be either voluntary or involuntary, and in some circumstances (antisaccades) these two actions compete with each other for execution. But attending to the location of an impending eye movement is only one facet of attention that may play a role in eye movement selection. In two experiments, we investigated the effect of arousal on voluntary eye movements (antisaccades) and involuntary eye movements (prosaccadic errors) in an antisaccade task. Arousal, as caused by brief loud sounds and indexed by changes in pupil diameter, had a facilitating effect on involuntary eye movements. Involuntary eye movements were both significantly more likely to be executed and significantly faster under arousal conditions (Experiments 1 and 2), and the influence of arousal had a specific time course (Experiment 2). Arousal, one form of attention, can produce significant costs for human movement selection, as potent but unplanned actions benefit more than planned ones. |
Gregory J. DiGirolamo; Ellen J. Sophis; Jennifer L. Daffron; Gerardo Gonzalez; Mauricio Romero-Gonzalez; Sean A. Gillespie Breakdowns of eye movement control toward smoking cues in young adult light smokers Journal Article In: Addictive Behaviors, vol. 52, pp. 98–102, 2016. @article{DiGirolamo2016b, Background: Many studies suggest that dependent smokers have a preference or attentional bias toward smoking cues. The purpose of this study was to test the ability of infrequent non-dependent light smokers to control their eye movements by looking away from smoking cues. Poor control in the lightest of smokers would suggest nicotine cue-elicited behavior occurring even prior to nicotine dependency as measured by daily smoking. Methods: 17 infrequent non-dependent light smokers and 17 lifetime non-smokers performed an antisaccade task (look away from a suddenly appearing cue) on smoking, alcohol, neutral, and dot cues. Results: The light smokers, who were confirmed to be light and non-dependent (mean Fagerström dependence score = 0.35), were significantly worse at controlling their eye movements to smoking cues relative to both neutral cues (p < .04) and alcohol cues (p < .02). Light smokers made significantly more errors to smoking cues than non-smokers (p < .004). Conclusions: These data suggest that prior to developing clinical symptoms of severe dependence or progressing to heavier smoking (e.g., daily smoking), the lightest of smokers show a specific deficit in control of nicotine cue-elicited behavior. |
Yun Ding; Tao He; Jason Satel; Zhiguo Wang Inhibitory cueing effects following manual and saccadic responses to arrow cues Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 4, pp. 1020–1029, 2016. @article{Ding2016, With two cueing tasks, in the present study we examined output-based inhibitory cueing effects (ICEs) with manual responses to arrow targets following manual or saccadic responses to arrow cues. In all experiments, ICEs were observed when manual localization responses were required to both the cues and targets, but only when the cue-target onset asynchrony (CTOA) was 2,000 ms or longer. In contrast, when saccadic responses were made in response to the cues, ICEs were only observed with CTOAs of 2,000 ms or less, and only when an auditory cue-back signal was used. The present study also showed that the magnitude of ICEs following saccadic responses to arrow cues decreased with time, much like traditional inhibition-of-return effects. The magnitude of ICEs following manual responses to arrow cues, however, appeared later in time and showed no sign of decreasing even 3 s after cue onset. These findings suggest that ICEs linked to skeletomotor activation do exist and that the ICEs evoked by oculomotor activation can carry over to the skeletomotor system. |
Yun Ding; Jing Zhao; Tao He; Yufei Tan; Lingshuang Zheng; Zhiguo Wang Selective impairments in covert shifts of attention in Chinese dyslexic children Journal Article In: Dyslexia, vol. 22, no. 4, pp. 362–378, 2016. @article{Ding2016a, Reading depends heavily on the efficient shift of attention. Mounting evidence has suggested that dyslexics have deficits in covert attentional shift. However, it remains unclear whether dyslexics also have deficits in overt attentional shift. With the majority of relevant studies carried out in alphabetic writing systems, it is also unknown whether the attentional deficits observed in dyslexics are restricted to a particular writing system. The present study examined inhibition of return (IOR)-a major driving force of attentional shifts-in dyslexic children learning to read a logographic script (i.e., Chinese). Robust IOR effects were observed in both covert and overt attentional tasks in two groups of typically developing children, who were age- or reading ability-matched to the dyslexic children. In contrast, the dyslexic children showed IOR in the overt but not in the covert attentional task. We conclude that covert attentional shift is selectively impaired in dyslexic children. This impairment is not restricted to alphabetic writing systems, and it could be a significant contributor to the difficulties encountered by children learning to read. |
Pascasie L. Dombert; Gereon R. Fink; Simone Vossel The impact of probabilistic feature cueing depends on the level of cue abstraction Journal Article In: Experimental Brain Research, vol. 234, no. 3, pp. 685–694, 2016. @article{Dombert2016, Allocation of attentional resources rests on predictions about the likelihood of events. While this effect has been extensively studied in the spatial attention domain where the location of a target stimulus is pre-cued, less is known about the cueing of stimulus features such as the color of a behaviorally relevant target. Moreover, there is disagreement about which types of color cues are effective for biasing attention. Here we investigated the effects of probabilistic context (percentage of cue validity, %CV) for different levels of cue abstraction to elucidate how feature-based search information is processed and used to direct attention. The color of a target was cued by presenting the perceptual color, the color word, or two-letter abbreviations. %CV, i.e., the probability that the cue indicated the color correctly, changed unpredictably between 50, 70, and 90%. Response times (RTs) for valid and invalid trials in each %CV condition were recorded in 60 datasets and analyzed with analyses of variance. The results showed that all cues were associated with comparable RT costs after invalid cueing. The modulation of RT costs by probabilities, however, depended upon the level of cue abstraction and time on task: While a strong, immediate impact of %CV was found for two-letter cueing, the effect was solely observed in the second half of the experiment for perceptual and word cues. These results demonstrate that probabilistic feature-based information is processed differently for different levels of cue abstraction. Moreover, the modulatory effect of the environmental statistics differentially depends on the time on task for different feature cues. |
Pascasie L. Dombert; Anna B. Kuhns; Paola Mengotti; Gereon R. Fink; Simone Vossel Functional mechanisms of probabilistic inference in feature- and space-based attentional systems Journal Article In: NeuroImage, vol. 142, pp. 553–564, 2016. @article{Dombert2016a, Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. |
Jan Drewes; G. Goren; W. Zhu; J. H. Elder Recurrent processing in the formation of shape percepts Journal Article In: Journal of Neuroscience, vol. 36, no. 1, pp. 185–192, 2016. @article{Drewes2016, The human visual system must extract reliable object information from cluttered visual scenes several times per second, and this temporal constraint has been taken as evidence that the underlying cortical processing must be strictly feedforward. Here we use a novel rapid reinforcement paradigm to probe the temporal dynamics of the neural circuit underlying rapid object shape perception and thus test this feedforward assumption. Our results show that two shape stimuli are optimally reinforcing when separated in time by approximately 60 ms, suggesting an underlying recurrent circuit with a time constant (feedforward + feedback) of 60 ms. A control experiment demonstrates that this is not an attentional cueing effect. Instead, it appears to reflect the time course of feedback processing underlying the rapid perceptual organization of shape. |
Ilse C. Van Dromme; Elsie Premereur; Bram-Ernst Verhoef; Wim Vanduffel Posterior parietal cortex drives inferotemporal activations during three-dimensional object vision Journal Article In: PLoS Biology, vol. 14, no. 4, pp. e1002445, 2016. @article{Dromme2016, The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. |
Zhaohui Duan; Fuxing Wang; Jianzhong Hong Culture shapes how we look: Comparison between Chinese and African university students Journal Article In: Journal of Eye Movement Research, vol. 9, no. 6, pp. 1–10, 2016. @article{Duan2016, Previous cross-cultural studies have found that cultures can shape eye movement during scene perception, but that research has been limited to the West. This study recruited Chinese and African students to document cultural effects on two phases of scene perception. In the free-viewing phase, Africans fixated more on the focal objects than Chinese, while Chinese paid more attention to the backgrounds than Africans, especially on the fourth and fifth fixations. In the recognition phase, there was no cultural difference in perception, but Chinese recognized more objects than Africans. We conclude that cultural differences exist in scene perception when there is no explicit task and more clearly in its later period, and that some differences may be hidden in deeper processes (e.g., memory) during an explicit task. |
Emmanuel Ducrocq; Mark Wilson; Sam Vine; Nazanin Derakshan Training attentional control improves cognitive and motor task performance Journal Article In: Journal of Sport and Exercise Psychology, vol. 38, no. 5, pp. 521–533, 2016. @article{Ducrocq2016, Attentional control is a necessary function for the regulation of goal-directed behavior. In three experiments we investigated whether training inhibitory control using a visual search task could improve task specific measures of attentional control and performance. In experiment 1 results revealed that training elicited a near-transfer effect; improving performance on a cognitive (antisaccade) task assessing inhibitory control. In Experiment 2 an initial far-transfer effect of training was observed on an index of attentional control validated for tennis. The principal aim of Experiment 3 was to expand on these findings by assessing objective, gaze measures of inhibitory control during the performance of a tennis task. Training improved inhibitory control and performance when pressure was elevated, confirming the mechanisms by which cognitive anxiety impacts upon performance. These results suggest that attentional control training can improve inhibition and reduce task-specific distractibility with promise of transfer to more efficient sporting performance in competitive contexts. |
Jacob Duijnhouwer; Bart Krekelberg Evidence and counterevidence in motion perception Journal Article In: Cerebral Cortex, vol. 26, pp. 4602–4612, 2016. @article{Duijnhouwer2016, Sensory neurons gather evidence in favor of the specific stimuli to which they are tuned, but they could improve their sensitivity by also taking counterevidence into account. The Bours–Lankheet model for motion detection uses counterevidence that relies on a specific combination of the ON and OFF channels in the early visual system. Specifically, the model detects pairs of flashes that occur separated in space and time. If the flashes have the same contrast polarity, they are interpreted as evidence in favor of the corresponding motion. But if they have opposite contrasts, they are interpreted as evidence against it. This mechanism provides an explanation for reverse-phi (the perceived reversal of an apparent motion stimulus due to periodic contrast-inversions) that is a conceptual departure from the standard explanations of the effect. Here, we investigate this counterevidence mechanism by measuring directional tuning curves of neurons in the primary visual and middle temporal cortex areas of awake, behaving macaques using constant-contrast and inverting-contrast moving dot stimuli. Our electrophysiological data support the Bours–Lankheet model and suggest that the counterevidence computation occurs at an early stage of neural processing not captured by the standard models. |
Carolyn M. Dunifon; Samuel Rivera; Christopher W. Robinson Auditory stimuli automatically grab attention: Evidence from eye tracking and attentional manipulations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 12, pp. 1947–1958, 2016. @article{Dunifon2016, Simultaneously presenting auditory and visual stimuli can hinder performance for one modality while the other dominates. For approximately 40 years, research with adults has primarily indicated visual dominance, while recent research with infants and young children has revealed auditory dominance. The current study further investigates modality dominance with adults, finding evidence for both auditory and visual dominance across 3 experiments. Using a simple discrimination task, Experiment 1 revealed that cross-modal presentation attenuated discrimination of auditory input, while at the same time, also slowed down visual processing. Even when participants were instructed to only pay attention to the visual stimuli, both spoken nonsense words and nonlinguistic sounds slowed down visual processing (Experiment 2). Experiment 3 used a similar discrimination task while utilizing an eye tracker to examine how auditory input affects visual fixations. Cross-modal presentation attenuated auditory discrimination; however, it also slowed down visual response times. In addition, adults also made longer fixations and were slower to make their first fixation when images were paired with sounds. The latter finding is novel and consistent with a proposed mechanism of auditory dominance: auditory stimuli automatically engage attention and attenuate or delay visual processing. |
Marianne Duyck; Thérèse Collins; Mark Wexler Masking the saccadic smear Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–13, 2016. @article{Duyck2016, Static visual stimuli are smeared across the retina during saccades, but in normal conditions this smear is not perceived. Instead, we perceive the visual scene as static and sharp. However, retinal smear is perceived if stimuli are shown only intrasaccadically, but not if the stimulus is additionally shown before a saccade begins, or after the saccade ends (Campbell & Wurtz, 1978). This inhibition has been compared to forward and backward metacontrast masking, but with spatial relations between stimulus and mask that are different from ordinary metacontrast during fixation. Previous studies of smear masking have used subjective measures of smear perception. Here we develop a new, objective technique for measuring smear masking, based on the spatial localization of a gap in the smear created by very quickly blanking the stimulus at various points during the saccade. We apply this technique to show that smear masking survives dichoptic presentation (suggesting that it is therefore cortical in origin), as well as separations of as much as 6° between smear and mask. |
Muriel Dysli; Mathias Abegg Nystagmus does not limit reading ability in albinism Journal Article In: PLoS ONE, vol. 11, no. 7, pp. e0158815, 2016. @article{Dysli2016, PURPOSE: Subjects with albinism usually suffer from nystagmus and reduced visual acuity, which may impair reading performance. The contribution of nystagmus to decreased reading ability is not known. Low vision and nystagmus may have an additive effect. We aimed to address this question by motion compensation of the nystagmus in affected subjects and by simulating nystagmus in healthy controls. METHODS: Reading speed and eye movements were assessed in 9 subjects with nystagmus associated with albinism and in 12 healthy controls. We compared the reading ability with steady word presentation and with words presented on a gaze contingent display where words move in parallel to the nystagmus and thus correct for the nystagmus. As the control, healthy subjects were asked to read words and texts in steady reading conditions as well as text passages that moved in a pattern similar to nystagmus. RESULTS: Correcting nystagmus with a gaze contingent display neither improved nor reduced the reading speed for single words. Subjects with nystagmus and healthy participants achieved comparable reading speed when reading steady texts. However, movement of text in healthy controls caused a significantly reduced reading speed and more regressive saccades. CONCLUSIONS: Our results argue against nystagmus as the rate limiting factor for reading speed when words were presented in high enough magnification and support the notion that other sensory visual impairments associated with albinism (for example reduced visual acuity) might be the primary causes for reading impairment. |
Muriel Dysli; Mathias Abegg Gaze-dependent phoria and vergence adaptation Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–12, 2016. @article{Dysli2016a, Incomitance is a condition with gaze-dependent deviations of ocular alignment and is common in strabismus patients. The physiological mechanisms that maintain equal horizontal ocular alignment in all gaze directions (concomitance) in healthy individuals are poorly explored. We investigate adaptive processes in the vergence system that are induced by horizontal incomitant vergence stimuli (stimuli that require a gaze-dependent vergence response in order to re-establish binocular single vision). We measured horizontal vergence responses elicited after healthy subjects shifted their gaze from a position that required no vergence to a position that required convergence. Repetitive saccades into a position with a convergence stimulus rapidly decreased phoria (defined as the deviation of ocular alignment in the absence of a binocular stimulus). This change of phoria was present in all viewing directions (from 0° to 0.86° ± 0.40°, p < 0.001) but was more pronounced in the gaze direction with a convergence stimulus (from 0.26° ± 0.13° to 1.39° ± 0.33°, p < 0.001). We also found that vergence velocity rapidly increased (p = 0.015) and vergence latency promptly decreased (p < 0.001). We found gaze-dependent modulation of phoria in combined saccade–vergence eye movements and also in pursuit–vergence eye movements. Thus, acute horizontal, gaze-dependent changes of vergence, such as may be encountered in new onset strabismus due to paralysis, can rapidly increase vergence velocity and decrease latency. Gaze-specific (incomitant) and gaze-independent (concomitant) phoria levels will adapt. These early adaptive processes increase the efficacy of binocular vision and maintain good ocular alignment in all directions of gaze. |
Archy O. Berker; Robb B. Rutledge; Christoph Mathys; Louise Marshall; Gemma F. Cross; Raymond J. Dolan; Sven Bestmann Computations of uncertainty mediate acute stress responses in humans Journal Article In: Nature Communications, vol. 7, pp. 10996, 2016. @article{Berker2016, The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function. |
Archy O. Berker; Margot Tirole; Robb B. Rutledge; Gemma F. Cross; Raymond J. Dolan; Sven Bestmann Acute stress selectively impairs learning to act Journal Article In: Scientific Reports, vol. 6, pp. 29816, 2016. @article{Berker2016a, Stress interferes with instrumental learning. However, choice is also influenced by non-instrumental factors, most strikingly by biases arising from Pavlovian associations that facilitate action in pursuit of rewards and inaction in the face of punishment. Whether stress impacts on instrumental learning via these Pavlovian associations is unknown. Here, in a task where valence (reward or punishment) and action (go or no-go) were orthogonalised, we asked whether the impact of stress on learning was action or valence specific. We exposed 60 human participants either to stress (socially-evaluated cold pressor test) or a control condition (room temperature water). We contrasted two hypotheses: that stress would lead to a non-selective increase in the expression of Pavlovian biases; or that stress, as an aversive state, might specifically impact action production due to the Pavlovian linkage between inaction and aversive states. We found support for the second of these hypotheses. Stress specifically impaired learning to produce an action, irrespective of the valence of the outcome, an effect consistent with a Pavlovian linkage between punishment and inaction. This deficit in action-learning was also reflected in pupillary responses; stressed individuals showed attenuated pupillary responses to action, hinting at a noradrenergic contribution to impaired action-learning under stress. |
Anouk J. Brouwer; Eli Brenner; Jeroen B. J. Smeets Keeping a target in memory does not increase the effect of the Müller-Lyer illusion on saccades Journal Article In: Experimental Brain Research, vol. 234, no. 4, pp. 977–983, 2016. @article{Brouwer2016a, The effects of visual contextual illusions on motor behaviour vary largely between experimental conditions. Whereas it has often been reported that the effects of illusions on pointing and grasping are largest when the movement is performed some time after the stimulus has disappeared, the effect of a delay has hardly been studied for saccadic eye movements. In this experiment, participants viewed a briefly presented Müller-Lyer illusion with a target at its endpoint and made a saccade to the remembered position of this target after a delay of 0, 0.6, 1.2 or 1.8 s. We found that horizontal saccade amplitudes were shorter for the perceptually shorter than for the perceptually longer configuration of the illusion. Most importantly, although the delay clearly affected saccade amplitude, resulting in shorter saccades for longer delays, the illusion effect did not depend on the duration of the delay. We argue that visually guided and memory-guided saccades are likely based on a common visual representation. |
Anouk J. Brouwer; W. Pieter Medendorp; Jeroen B. J. Smeets Contributions of gaze-centered and object-centered coding in a double-step saccade task Journal Article In: Journal of Vision, vol. 16, no. 14, pp. 1–12, 2016. @article{Brouwer2016b, The position of a saccade target can be encoded in gaze-centered coordinates, that is, relative to the current gaze position, or in object-centered coordinates, that is, relative to an object in the environment. We tested the role of gaze-centered and object-centered coding in a double-step saccade task involving the Brentano version of the Müller-Lyer illusion. The two visual targets were presented either sequentially, requiring gaze-centered coding of the second saccade target, or simultaneously, thereby providing additional object-centered information about the location of the second target relative to the first. We found that the endpoint of the second saccade was affected by the illusion, irrespective of whether the targets were presented sequentially or simultaneously, suggesting that participants used a gaze-centered updating strategy. We found that variability in saccade endpoints was reduced when object-centered information was consistently available but not when its presence varied from trial to trial. Our results suggest that gaze-centered coding is dominant in the planning of sequential saccades, whereas object-centered information plays a relatively small role. |
Alex Carvalho; Isabelle Dautriche; Anne Christophe Preschoolers use phrasal prosody online to constrain syntactic analysis Journal Article In: Developmental Science, vol. 19, no. 2, pp. 235–250, 2016. @article{Carvalho2016, Two experiments were conducted to investigate whether young children are able to take into account phrasal prosody when computing the syntactic structure of a sentence. Pairs of French noun/verb homophones were selected to create locally ambiguous sentences ([la petite ferme] [est très jolie] ‘the small farm is very nice' vs. [la petite] [ferme la fenêtre] ‘the little girl closes the window' – brackets indicate prosodic boundaries). Although these sentences start with the same three words, ferme is a noun (farm) in the former but a verb (to close) in the latter case. The only difference between these sentence beginnings is the prosodic structure, that reflects the syntactic structure (with a prosodic boundary just before the critical word when it is a verb, and just after it when it is a noun). Crucially, all words following the homophone were masked, such that prosodic cues were the only disambiguating information. Children successfully exploited prosodic information to assign the appropriate syntactic category to the target word, in both an oral completion task (4.5-year-olds, Experiment 1) and in a preferential looking paradigm with an eye-tracker (3.5-year-olds and 4.5-year-olds, Experiment 2). These results show that both groups of children exploit the position of a word within the prosodic structure when computing its syntactic category. In other words, even younger children of 3.5 years old exploit phrasal prosody online to constrain their syntactic analysis. This ability to exploit phrasal prosody to compute syntactic structure may help children parse sentences containing unknown words, and facilitate the acquisition of word meanings. |
Olivier Condappa; Jan M. Wiener Human place and response learning: navigation strategy selection, pupil size and gaze behavior Journal Article In: Psychological Research, vol. 80, no. 1, pp. 82–93, 2016. @article{Condappa2016, In this study, we examined the cognitive processes and ocular behavior associated with on-going navigation strategy choice using a route learning paradigm that distinguishes between three different wayfinding strategies: an allocentric place strategy, and the egocentric associative cue and beacon response strategies. Participants approached intersections of a known route from a variety of directions, and were asked to indicate the direction in which the original route continued. Their responses in a subset of these test trials allowed the assessment of strategy choice over the course of six experimental blocks. The behavioral data revealed an initial maladaptive bias for a beacon response strategy, with shifts in favor of the optimal configuration place strategy occurring over the course of the experiment. Response time analysis suggests that the configuration strategy relied on spatial transformations applied to a viewpoint-dependent spatial representation, rather than direct access to an allocentric representation. Furthermore, pupillary measures reflected the employment of place and response strategies throughout the experiment, with increasing use of the more cognitively demanding configuration strategy associated with increases in pupil dilation. During test trials in which known intersections were approached from different directions, visual attention was directed to the landmark encoded during learning as well as the intended movement direction. Interestingly, the encoded landmark did not differ between the three navigation strategies, which is discussed in the context of initial strategy choice and the parallel acquisition of place and response knowledge. |
Julian De Freitas; Nicholas E. Myers; Anna C. Nobre Tracking the changing feature of a moving object Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–21, 2016. @article{DeFreitas2016, The mind can track not only the changing locations of moving objects, but also their changing features, which are often meaningful for guiding action. How does the mind track such features? Using a task in which observers tracked the changing orientation of a rolling wheel's spoke, we found that this ability is enabled by a highly feature-specific process which continuously tracks the orientation feature itself—even during occlusion, when the feature is completely invisible. This suggests that the mental representation of a changing orientation feature and its moving object are continuously transformed and updated, akin to studies showing continuous tracking of an object's boundaries alone. We also found a systematic error in performance, whereby the orientation was reliably perceived to be further ahead than it truly was. This effect appears to occur because during occlusion the mental representation of the feature is transformed beyond the veridical position, perhaps in order to conservatively anticipate future feature states. |
Floor Groot; Falk Huettig; Christian N. L. Olivers When meaning matters: The temporal dynamics of semantic influences on visual attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 2, pp. 180–196, 2016. @article{Groot2016, An important question is, to what extent is visual attention driven by the semantics of individual objects, rather than by their visual appearance? This study investigates the hypothesis that timing is a crucial factor in the occurrence and strength of semantic influences on visual orienting. To assess the dynamics of such influences, the authors presented the target instruction either before or after visual stimulus onset, while eye movements were continuously recorded throughout the search. The results show a substantial but delayed bias in orienting toward semantically related objects compared with visually related objects when target instruction is presented before visual stimulus onset. However, this delay can be completely undone by presenting the visual information before the target instruction (Experiment 1). Moreover, the absence or presence of visual competition does not change the temporal dynamics of the semantic bias (Experiment 2). Visual orienting is thus driven by priority settings that dynamically shift between visual and semantic representations, with each of these types of bias operating largely independently. The findings bridge the divide between the visual attention and the psycholinguistic literature. |
Floor Groot; Falk Huettig; Christian N. L. Olivers Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search Journal Article In: Visual Cognition, vol. 24, no. 3, pp. 226–245, 2016. @article{Groot2016a, When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than looking at unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present. |
Jelmer P. De Vries; R. Azadi; Mark R. Harwood The saccadic size-latency phenomenon explored: Proximal target size is a determining factor in the saccade latency Journal Article In: Vision Research, vol. 129, pp. 87–97, 2016. @article{DeVries2016a, Saccade latencies are known to increase for targets presented close to fixation. Recently, it was shown that not only target eccentricity, but the size of a proximal saccade target also plays a crucial role: latencies increase rapidly with increasing target size. Interestingly, these latency increases are greater than those typically found for other supra-threshold manipulations of target properties. Here we evaluate to what extent this phenomenon is distinct from known delays in saccade initiation and whether the phenomenon is truly related to the size of a proximal target. In Experiment 1 we focus on the importance of the required amplitude. Employing a saccade adaptation paradigm we find that the required amplitude is not a determining factor. Focusing on the role of size, in Experiment 2, we find that while latency increases are strongest for targets elongated in the direction of the fovea, elongations perpendicular to this direction also lead to an increase in latencies. Finally, in Experiment 3 we verify that the latency increases are driven by the properties of the saccade target rather than visual input in general. Together these experiments provide converging evidence that the current phenomenon is both novel and a consequence of the relation between proximal target size and its eccentricity. |
Jelmer P. De Vries; Britta K. Ischebeck; L. P. Voogt; Malou Janssen; Maarten A. Frens; Gert Jan Kleinrensink; Josef N. Geest Cervico-ocular reflex is increased in people with nonspecific neck pain Journal Article In: Physical Therapy, vol. 96, no. 8, pp. 1190–1195, 2016. @article{DeVries2016, Background: Neck pain is a widespread complaint. People experiencing neck pain often present an altered timing in contraction of cervical muscles. This altered afferent information elicits the cervico-ocular reflex (COR), which stabilizes the eye in response to trunk-to-head movements. The vestibulo-ocular reflex (VOR) elicited by the vestibulum is thought to be unaffected by afferent information from the cervical spine. Objective: The aim of the study was to measure the COR and VOR in people with nonspecific neck pain. Design: This study utilized a cross-sectional design in accordance with the STROBE statement. Methods: An infrared eye-tracking device was used to record the COR and the VOR while the participant was sitting on a rotating chair in darkness. Eye velocity was calculated by taking the derivative of the horizontal eye position. Parametric statistics were performed. Results: The mean COR gain in the control group (n=30) was 0.26 (SD=0.15) compared with 0.38 (SD=0.16) in the nonspecific neck pain group (n=37). Analyses of covariance were performed to analyze differences in COR and VOR gains, with age and sex as covariates. Analyses of covariance showed a significantly increased COR in participants with neck pain. The VOR between the control group, with a mean VOR of 0.67 (SD=0.17), and the nonspecific neck pain group, with a mean VOR of 0.66 (SD=0.22), was not significantly different. Limitations: Measuring eye movements while the participant is sitting on a rotating chair in complete darkness is technically complicated. Conclusions: This study suggests that people with nonspecific neck pain have an increased COR and an unaltered VOR. The COR is an objective, nonvoluntary eye reflex. This study shows that an increased COR is not restricted to patients with traumatic neck pain. |
Jelmer P. De Vries; Stefan Van der Stigchel; Ignace T. C. Hooge; Frans A. J. Verstraten Revisiting the global effect and inhibition of return Journal Article In: Experimental Brain Research, vol. 234, no. 10, pp. 2999–3009, 2016. @article{DeVries2016b, Saccades toward previously cued locations have longer latencies than saccades toward other locations, a phenomenon known as inhibition of return (IOR). Watanabe (Exp Brain Res 138:330–342. doi:10.1007/s002210100709, 2001) combined IOR with the global effect (where saccade landing points fall in between neighboring objects) to investigate whether IOR can also have a spatial component. When one of two neighboring targets was cued, there was a clear bias away from the cued location. In a condition where both targets were cued, it appeared that the global effect magnitude was similar to the condition without any cues. However, as the latencies in the double cue condition were shorter compared to the no cue condition, it is still an open question whether these results are representative for IOR. Considering the double cue condition can provide valuable insight into the interaction of the mechanisms underlying the two phenomena, here, we revisit this condition in an adapted paradigm. Our paradigm does result in longer latencies for the cued locations, and we find that the magnitude of the global effect is reduced significantly. Unexpectedly, this holds even when only including saccades with the same latencies for both conditions. Thus, the increased latencies associated with IOR cannot directly explain the reduction in global effect. The global effect reduction can likely best be seen as either a result of short-term depression of exogenous visual signals or a result of IOR established at the center of gravity of cues. |