All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (plus some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014
Eckart Zimmermann; S. Born; Gereon R. Fink; P. Cavanagh Masking produces compression of space and time in the absence of eye movements Journal Article In: Journal of Neurophysiology, vol. 112, no. 12, pp. 3066–3076, 2014. @article{Zimmermann2014a, Whenever the visual stream is abruptly disturbed by eye movements, blinks, masks, or flashes of light, the visual system needs to retrieve the new locations of current targets and to reconstruct the timing of events to straddle the interruption. This process may introduce position and timing errors. We here report that very similar errors are seen in human subjects across three different paradigms when disturbances are caused by either eye movements, as is well known, or, as we now show, masking. We suggest that the characteristic effects of eye movements on position and time, spatial and temporal compression and saccadic suppression of displacement, are consequences of the interruption and the subsequent reconnection and are seen also when visual input is masked without any eye movements. Our data show that compression and suppression effects are not solely a product of ocular motor activity but instead can be properties of a correspondence process that links the targets of interest across interruptions in visual input, no matter what their source. |
A. Zenon; M. Sidibe; Etienne Olivier Pupil size variations correlate with physical effort perception Journal Article In: Frontiers in Behavioral Neuroscience, vol. 8, pp. 286, 2014. @article{Zenon2014, It has long been established that the pupil diameter increases during mental activities in proportion to the difficulty of the task at hand. However, it is still unclear whether this relationship between the pupil size and effort applies also to physical effort. In order to address this issue, we asked healthy volunteers to perform a power grip task, at varied intensity, while evaluating their effort both implicitly and explicitly, and while concurrently monitoring their pupil size. Each trial started with a contraction of imposed intensity, under the control of a continuous visual feedback. Upon completion of the contraction, participants had to choose whether to replicate, without feedback, the first contraction for a variable monetary reward, or whether to skip this step and go directly to the next trial. The rate of acceptance of effort replication and the amount of force exerted during the replication were used as implicit measures of the perception of the effort exerted during the first contraction. In addition, the participants were asked to rate, on an analog scale, their explicit perception of the effort for each intensity condition. We found that pupil diameter increased during physical effort and that the magnitude of this response reflected not only the actual intensity of the contraction but also the subjects' perception of the effort. This finding indicates that the pupil size signals the level of effort invested in a task, irrespective of whether it is physical or mental. It also helps narrow down the potential brain circuits involved, since the results of the current study imply a convergence of mental and physical effort information at some level along this pathway. |
Alexandre Zénon; Brian D. Corneil; Andrea Alamia; Nabil Filali-Sadouk; Etienne Olivier Counterproductive effect of saccadic suppression during attention shifts Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e86633, 2014. @article{Zenon2014a, During saccadic eye movements, the processing of visual information is transiently interrupted by a mechanism known as "saccadic suppression" [1] that is thought to ensure perceptual stability [2]. If, as proposed in the premotor theory of attention [3], covert shifts of attention rely on sub-threshold recruitment of oculomotor circuits, then saccadic suppression should also occur during covert shifts. In order to test this prediction, we designed two experiments in which participants had to orient towards a cued letter, with or without saccades. We analyzed the time course of letter identification score in an "attention" task performed without saccades, using the saccadic latencies measured in the "saccade" task as a marker of covert saccadic preparation. Visual conditions were identical in all tasks. In the "attention" task, we found a drop in perceptual performance around the predicted onset time of saccades that were never performed. Importantly, this decrease in letter identification score cannot be explained by any known mechanism aligned on cue onset such as inhibition of return, masking, or microsaccades. These results show that attentional allocation triggers the same suppression mechanisms as during saccades, which is relevant during eye movements but detrimental in the context of covert orienting. |
Luming Zhang; Yue Gao; Rongrong Ji; Yingjie Xia; Qionghai Dai; Xuelong Li Actively learning human gaze shifting paths for semantics-aware photo cropping Journal Article In: IEEE Transactions on Image Processing, vol. 23, no. 5, pp. 2235–2245, 2014. @article{Zhang2014, Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. 
Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons. |
Caitlin A. Wright; Keith S. Dobson; Christopher R. Sears Does a high working memory capacity attenuate the negative impact of trait anxiety on attentional control? Evidence from the antisaccade task Journal Article In: Journal of Cognitive Psychology, vol. 26, no. 4, pp. 400–412, 2014. @article{Wright2014, According to attentional control theory, high trait anxious individuals experience reduced attentional control as compared to low trait anxious individuals due to the imbalance between goal-directed and stimulus-driven attentional systems. One consequence is that high trait anxious individuals have difficulty resisting distraction, as compared to low trait anxious individuals. A separate line of research on individual differences in working memory capacity (WMC) has shown that individuals with higher WMC have better attentional control and thus are better able to resist distraction. The present study investigated the hypothesis that high WMC compensates for high trait anxiety in a task that evaluates the ability to resist distraction, the antisaccade task. Participants completed the State-Trait Anxiety Inventory to measure trait anxiety and the Operation Span and Reading Span tasks to measure WMC. As hypothesised, individuals who were high trait anxious exhibited increased attentional control on the antisaccade task when they had high WMC. Theoretical implications and directions for future research are discussed. |
Jessica M. Wright; Bart Krekelberg Transcranial direct current stimulation over posterior parietal cortex modulates visuospatial localization Journal Article In: Journal of Vision, vol. 14, no. 9, pp. 5–5, 2014. @article{Wright2014a, Visual localization is based on the complex interplay of bottom-up and top-down processing. Based on previous work, the posterior parietal cortex (PPC) is assumed to play an essential role in this interplay. In this study, we investigated the causal role of the PPC in visual localization. Specifically, our goal was to determine whether modulation of the PPC via transcranial direct current stimulation (tDCS) could induce visual mislocalization similar to that induced by an exogenous attentional cue (Wright, Morris, & Krekelberg, 2011). We placed one stimulation electrode over the right PPC and the other over the left PPC (dual tDCS) and varied the polarity of the stimulation. We found that this manipulation altered visual localization; this supports the causal involvement of the PPC in visual localization. Notably, mislocalization was more rightward when the cathode was placed over the right PPC than when the anode was placed over the right PPC. This mislocalization was found within a few minutes of stimulation onset, it dissipated during stimulation, but then resurfaced after stimulation offset and lasted for another 10-15 min. On the assumption that excitability is reduced beneath the cathode and increased beneath the anode, these findings support the view that each hemisphere biases processing to the contralateral hemifield and that the balance of activation between the hemispheres contributes to position perception (Kinsbourne, 1977; Szczepanski, Konen, & Kastner, 2010). |
Chia-Chien Wu; Hsueh-Cheng Wang; Marc Pomplun The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes Journal Article In: Vision Research, vol. 105, pp. 10–20, 2014. @article{Wu2014, A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. |
David W. -L. Wu; Nicola C. Anderson; Walter F. Bischof; Alan Kingstone Temporal dynamics of eye movements are related to differences in scene complexity and clutter Journal Article In: Journal of Vision, vol. 14, no. 9, pp. 8–8, 2014. @article{Wu2014a, Recent research has begun to explore not just the spatial distribution of eye fixations but also the temporal dynamics of how we look at the world. In this investigation, we assess how scene characteristics contribute to these fixation dynamics. In a free-viewing task, participants viewed three scene types: fractal, landscape, and social scenes. We used a relatively new method, recurrence quantification analysis (RQA), to quantify eye movement dynamics. RQA revealed that eye movement dynamics were dependent on the scene type viewed. To understand the underlying cause for these differences we applied a technique known as fractal analysis and discovered that complexity and clutter are two scene characteristics that affect fixation dynamics, but only in scenes with meaningful content. Critically, scene primitives—revealed by saliency analysis—had no impact on performance. In addition, we explored how RQA differs from the first half of the trial to the second half, as well as the potential to investigate the precision of fixation targeting by changing RQA radius values. Collectively, our results suggest that eye movement dynamics result from top-down viewing strategies that vary according to the meaning of a scene and its associated visual complexity and clutter. |
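Recurrence quantification analysis of scan paths, as used in the Wu et al. study above, rests on a simple idea: two fixations are "recurrent" when they land within a chosen radius of each other. The sketch below (not the authors' code; the coordinates, radius, and toy scan path are purely illustrative) computes the global recurrence rate, the most basic RQA measure:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def recurrence_rate(fixations, radius):
    """Percentage of fixation pairs that land within `radius` of each other.

    A minimal global recurrence measure: fixations i < j are 'recurrent'
    when they fall within `radius` (e.g. pixels) of one another; the rate
    is the share of recurrent pairs among all pairs.
    """
    n = len(fixations)
    if n < 2:
        return 0.0
    recurrent = sum(
        1 for a, b in combinations(fixations, 2) if dist(a, b) <= radius
    )
    return 100.0 * recurrent / (n * (n - 1) / 2)

# Toy scan path: the viewer returns to the upper-left region twice.
path = [(100, 100), (400, 300), (105, 98), (600, 50), (102, 101)]
print(recurrence_rate(path, radius=20))  # 3 of 10 pairs recur -> 30.0
```

As the abstract notes, the radius parameter directly controls how strict "refixation" is, so sweeping it probes the precision of fixation targeting.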
David W. -L. Wu; Walter F. Bischof; Nicola C. Anderson; Tanya Jakobsen; Alan Kingstone The influence of personality on social attention Journal Article In: Personality and Individual Differences, vol. 60, pp. 25–29, 2014. @article{Wu2014b, The intersection between personality psychology and the study of social attention has been relatively untouched. We present an initial study that investigates the influence of the Big Five personality traits on eye movement behaviour towards social stimuli. By combining a free-viewing eye-tracking paradigm with canonical correlation and regression analyses, we discover that personality relates to fixations towards eye regions. Specifically, Extraversion and Agreeableness were related to greater gaze selection, while Openness to Experience was related to diminished gaze selection. The results demonstrate that who a person is affects how they move their eyes to social stimuli. The results also indicate that personality is a stronger factor in predicting social attention than past studies have suggested. Critical to the influence of personality on attention is the social situations viewers are placed in. |
Fuyun Wu; Yingyi Luo; Xiaolin Zhou Building Chinese relative clause structures with lexical and syntactic cues: Evidence from visual world eye-tracking and reading times Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 10, pp. 1205–1226, 2014. @article{Wu2014c, Relative clauses (RCs) in Chinese are prenominal. In object-modifying, object-extracted RCs (e.g. Click on [RC the ball broke] window, meaning 'Click on the window [RC that the ball broke]'), the ambiguous status of the local noun ball and the long-distance attachment of the head noun window into the main verb appear to make online parsing of Chinese RCs particularly difficult. By interposing mismatching classifiers and the passive marker BEI into the RC sentences, we investigated whether the presence of incomplete heads would add storage costs, as predicted by the Dependency Locality Theory (DLT), or would serve as retrieval cues to help pre-build the RC structure, as predicted by the cue-based retrieval theory. Results from a visual world eye-tracking experiment and a self-paced reading experiment showed that Chinese comprehenders are able to use BEI cues and the mismatching classifier (albeit to a lesser extent) to pre-build RC structure, providing support for the cue-based retrieval theory. |
Jianbo Xiao; Yu-Qiong Niu; Steven Wiesner; Xin Huang Normalization of neuronal responses in cortical area MT across signal strengths and motion directions Journal Article In: Journal of Neurophysiology, vol. 112, no. 6, pp. 1291–1306, 2014. @article{Xiao2014, Multiple visual stimuli are common in natural scenes, yet it remains unclear how multiple stimuli interact to influence neuronal responses. We investigated this question by manipulating relative signal strengths of two stimuli moving simultaneously within the receptive fields (RFs) of neurons in the extrastriate middle temporal (MT) cortex. Visual stimuli were overlapping random-dot patterns moving in two directions separated by 90°. We first varied the motion coherence of each random-dot pattern and characterized, across the direction tuning curve, the relationship between neuronal responses elicited by bidirectional stimuli and by the constituent motion components. The tuning curve for bidirectional stimuli showed response normalization and can be accounted for by a weighted sum of the responses to the motion components. Allowing nonlinear, multiplicative interaction between the two component responses significantly improved the data fit for some neurons, and the interaction mainly had a suppressive effect on the neuronal response. The weighting of the component responses was not fixed but dependent on relative signal strengths. When two stimulus components moved at different coherence levels, the response weight for the higher-coherence component was significantly greater than that for the lower-coherence component. We also varied relative luminance levels of two coherently moving stimuli and found that MT response weight for the higher-luminance component was also greater. 
These results suggest that competition between multiple stimuli within a neuron's RF depends on relative signal strengths of the stimuli and that multiplicative nonlinearity may play an important role in shaping the response tuning for multiple stimuli. |
Juan Xu; Ming Jiang; Shuo Wang; Mohan S. Kankanhalli; Qi Zhao Predicting human gaze beyond pixels Journal Article In: Journal of Vision, vol. 14, no. 1, pp. 1–20, 2014. @article{Xu2014, A large body of previous models to predict where people look in natural scenes focused on pixel-level image attributes. To bridge the semantic gap between the predictive power of computational saliency models and human behavior, we propose a new saliency architecture that incorporates information at three layers: pixel-level image attributes, object-level attributes, and semantic-level attributes. Object- and semantic-level information is frequently ignored, or only a few sample object categories are discussed where scaling to a large number of object categories is neither feasible nor neurally plausible. To address this problem, this work constructs a principled vocabulary of basic attributes to describe object- and semantic-level information, thus not restricting itself to a limited number of object categories. We build a new dataset of 700 images with eye-tracking data of 15 viewers and annotation data of 5,551 segmented objects with fine contours and 12 semantic attributes (publicly available with the paper). Experimental results demonstrate the importance of the object- and semantic-level information in the prediction of visual attention. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr The visual component to saccadic compression Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 13–13, 2014. @article{Zimmermann2014, Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after. |
Keir X. X. Yong; Timothy J. Shakespeare; Dave Cash; Susie M. D. Henley; Jennifer M. Nicholas; Gerard R. Ridgway; Hannah L. Golden; Elizabeth K. Warrington; Amelia M. Carton; Diego Kaski; Jonathan M. Schott; Jason D. Warren; Sebastian J. Crutch Prominent effects and neural correlates of visual crowding in a neurodegenerative disease population Journal Article In: Brain, vol. 137, no. 12, pp. 3284–3299, 2014. @article{Yong2014, Crowding is a breakdown in the ability to identify objects in clutter, and is a major constraint on object recognition. Crowding particularly impairs object perception in peripheral, amblyopic and possibly developing vision. Here we argue that crowding is also a critical factor limiting object perception in central vision of individuals with neurodegeneration of the occipital cortices. In the current study, individuals with posterior cortical atrophy (n=26), typical Alzheimer's disease (n=17) and healthy control subjects (n=14) completed centrally-presented tests of letter identification under six different flanking conditions (unflanked, and with letter, shape, number, same polarity and reverse polarity flankers) with two different target-flanker spacings (condensed, spaced). Patients with posterior cortical atrophy were significantly less accurate and slower to identify targets in the condensed than spaced condition even when the target letters were surrounded by flankers of a different category. Importantly, this spacing effect was observed for same, but not reverse, polarity flankers. The difference in accuracy between spaced and condensed stimuli was significantly associated with lower grey matter volume in the right collateral sulcus, in a region lying between the fusiform and lingual gyri. 
Detailed error analysis also revealed that similarity between the error response and the averaged target and flanker stimuli (but not individual target or flanker stimuli) was a significant predictor of error rate, more consistent with averaging than substitution accounts of crowding. Our findings suggest that crowding in posterior cortical atrophy can be regarded as a pre-attentive process that uses averaging to regularize the pathologically noisy representation of letter feature position in central vision. These results also help to clarify the cortical localization of feature integration components of crowding. More broadly, we suggest that posterior cortical atrophy provides a neurodegenerative disease model for exploring the basis of crowding. These data have significant implications for patients with, or who will go on to develop, dementia-related visual impairment, in whom acquired excessive crowding likely contributes to deficits in word, object, face and scene perception. |
Si On Yoon; Sarah Brown-Schmidt Adjusting conceptual pacts in three-party conversation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 4, pp. 919–937, 2014. @article{Yoon2014, During conversation, partners develop representations of jointly known information (the common ground) and use this knowledge to guide subsequent linguistic exchanges. Extensive research on 2-party conversation has offered key insights into this process, in particular, its partner-specificity: Common ground that is shared with 1 partner is not always assumed to be shared with other partners. Conversation often involves multiple pairs of individuals who differ in common ground. Yet, little is known about common ground processes in multi-party conversation. Here, we take a 1st step toward understanding this problem by examining situations in which simple dyadic representations of common ground might cause difficulty: situations in which dialogue partners develop shared labels (entrained terms), and then a 3rd (naïve) party joins the conversation. Experiment 1 examined unscripted, task-based conversation in which 2 partners entrained on terms. At test, speakers referenced game-pieces in a dialogue with their partner, or in a 3-party conversation including a new, naïve listener. Speakers were sensitive to the 3rd party, using longer, disfluent expressions when additionally addressing the new partner. By contrast, analysis of listener eye-fixations did not suggest sensitivity. Experiment 2 provided a stronger test of sensitivity and revealed that listeners do cancel expectations for terms that had been entrained before when a 3rd, naïve party joins the conversation. These findings shed light on the mechanisms underlying common ground, showing that rather than a unitary construct, common ground is flexibly adapted to the needs of a naïve 3rd party. |
Angela J. Yu; He Huang; Pradeep Shenoy Maximizing masquerading as matching in human visuosaccadic choice Journal Article In: Cognitive Science, vol. 1, no. 4, pp. 1–23, 2014. @article{Yu2014, There has been a long-running debate over whether humans match or maximize when faced with differentially rewarding options under conditions of uncertainty. While maximizing, that is, consistently choosing the most rewarding option, is theoretically optimal, humans have often been observed to match, that is, allocating choices stochastically in proportion to the underlying reward rates. Previous models assumed matching behavior to arise from biological limitations or heuristic decision strategies; this, however, would stand in curious contrast to the accumulating evidence that humans have sophisticated machinery for tracking environmental statistics. It begs the question of why the brain would build sophisticated representations of environmental statistics, only then to adopt a heuristic decision policy that fails to take full advantage of that information. Here, we revisit this debate by presenting data from a novel visual search task, which are shown to favor a particular Bayesian inference and decision-making account over other heuristic and normative models. Specifically, while subjects' first-fixation strategy appears to indicate matching in aggregate data, they actually maximize on a finer, trial-by-trial timescale, based on continuously updated internal beliefs about the spatial distribution of potential target locations. In other words, matching-like stochasticity in human visual search is neither random nor heuristics-based, but attributable specifically to fluctuating beliefs about stimulus statistics. These results not only shed light on the matching versus maximizing debate, but also more broadly on human decision-making strategies under conditions of uncertainty. |
Shlomit Yuval-Greenberg; Elisha P. Merriam; David J. Heeger Spontaneous microsaccades reflect shifts in covert attention Journal Article In: Journal of Neuroscience, vol. 34, no. 41, pp. 13693–13700, 2014. @article{YuvalGreenberg2014, Microsaccade rate during fixation is modulated by the presentation of a visual stimulus. When the stimulus is an endogenous attention cue, the ensuing microsaccades tend to be directed toward the cue. This finding has been taken as evidence that microsaccades index the locus of spatial attention. But the vast majority of microsaccades that subjects make are not triggered by visual stimuli. Under natural viewing conditions, spontaneous microsaccades occur frequently (2-3 Hz), even in the absence of a stimulus or a task. While spontaneous microsaccades may depend on low-level visual demands, such as retinal fatigue, image fading, or fixation shifts, it is unknown whether their occurrence corresponds to changes in the attentional state. We developed a protocol to measure whether spontaneous microsaccades reflect shifts in spatial attention. Human subjects fixated a cross while microsaccades were detected from streaming eye-position data. Detection of a microsaccade triggered the appearance of a peripheral ring of grating patches, which were followed by an arrow (a postcue) indicating one of them as the target. The target was either congruent or incongruent (opposite) with respect to the direction of the microsaccade (which preceded the stimulus). Subjects reported the tilt of the target (clockwise or counterclockwise relative to vertical). We found that accuracy was higher for congruent than for incongruent trials. We conclude that the direction of spontaneous microsaccades is inherently linked to shifts in spatial attention. |
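Detecting microsaccades online, as in the Yuval-Greenberg et al. protocol above, typically relies on a velocity threshold scaled to the noise level of the eye-position signal. The sketch below is a stripped-down detector in the spirit of the widely used Engbert and Kliegl (2003) algorithm, not the authors' implementation; the sampling rate, threshold multiplier, and synthetic trace are illustrative:

```python
from math import sin, cos
from statistics import median

def detect_saccade_samples(x, y, fs=1000.0, lam=6.0):
    """Flag samples whose eye velocity leaves a median-scaled ellipse.

    A minimal velocity-threshold detector: velocities come from central
    differences, the per-axis noise level from a robust median-based
    estimate, and a sample counts as (micro)saccadic when it falls
    outside the lam-sigma velocity ellipse.
    """
    # Central-difference velocity, one value per interior sample.
    vx = [(x[i + 1] - x[i - 1]) * fs / 2 for i in range(1, len(x) - 1)]
    vy = [(y[i + 1] - y[i - 1]) * fs / 2 for i in range(1, len(y) - 1)]

    def sigma(v):
        # Median-based spread estimate, robust to the saccadic outliers.
        mu = median(v)
        return median([(u - mu) ** 2 for u in v]) ** 0.5 or 1e-9

    sx, sy = sigma(vx), sigma(vy)
    return [(a / (lam * sx)) ** 2 + (b / (lam * sy)) ** 2 > 1
            for a, b in zip(vx, vy)]

# Synthetic 200-sample trace: slow drift plus one 0.5 deg horizontal jump.
xs = [0.005 * sin(i) + (0.5 if i >= 100 else 0.0) for i in range(200)]
ys = [0.005 * cos(i) for i in range(200)]
flags = detect_saccade_samples(xs, ys)
```

Only the two velocity samples spanning the jump cross the threshold here; a full detector would additionally smooth velocities over a short window and merge flagged samples into events with a minimum duration before triggering the stimulus.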
Juha M. Lahnakoski; Enrico Glerean; Iiro P. Jääskeläinen; Jukka Hyönä; Riitta Hari; Mikko Sams; Lauri Nummenmaa Synchronous brain activity across individuals underlies shared psychological perspectives Journal Article In: NeuroImage, vol. 100, pp. 316–324, 2014. @article{Lahnakoski2014, For successful communication, we need to understand the external world consistently with others. This task requires sufficiently similar cognitive schemas or psychological perspectives that act as filters to guide the selection, interpretation and storage of sensory information, perceptual objects and events. Here we show that when individuals adopt a similar psychological perspective during natural viewing, their brain activity becomes synchronized in specific brain regions. We measured brain activity with functional magnetic resonance imaging (fMRI) from 33 healthy participants who viewed a 10-min movie twice, assuming once a 'social' (detective) and once a 'non-social' (interior decorator) perspective to the movie events. Pearson's correlation coefficient was used to derive multisubject voxelwise similarity measures (inter-subject correlations; ISCs) of functional MRI data. We used k-nearest-neighbor and support vector machine classifiers as well as a Mantel test on the ISC matrices to reveal brain areas wherein ISC predicted the participants' current perspective. ISC was stronger in several brain regions (most robustly in the parahippocampal gyrus, posterior parietal cortex and lateral occipital cortex) when the participants viewed the movie with similar rather than different perspectives. Synchronization was not explained by differences in visual sampling of the movies, as estimated by eye gaze. We propose that synchronous brain activity across individuals adopting similar psychological perspectives could be an important neural mechanism supporting shared understanding of the environment. |
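The inter-subject correlation (ISC) measure central to the Lahnakoski et al. study above is, per voxel, the Pearson correlation of each subject pair's time courses, averaged over all pairs. A minimal sketch of that computation (illustrative only, not the authors' pipeline, which operates on whole fMRI volumes; the toy data are invented):

```python
from itertools import combinations

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)

def voxel_isc(timecourses):
    """Per-voxel ISC: mean pairwise Pearson correlation of one voxel's
    time course across all subject pairs."""
    pairs = list(combinations(timecourses, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Three toy 'subjects' whose responses are perfectly linearly related.
subjects = [[1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], [1.5, 2.5, 3.5, 4.5]]
print(round(voxel_isc(subjects), 6))  # prints 1.0
```

Repeating this over every voxel yields the ISC map that the study's classifiers and Mantel test then operate on.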
Rogier Landman; Jitendra Sharma; Mriganka Sur; Robert Desimone Effect of distracting faces on visual selective attention in the monkey Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 50, pp. 18037–18042, 2014. @article{Landman2014, In primates, visual stimuli with social and emotional content tend to attract attention. Attention might be captured through rapid, automatic, subcortical processing or guided by slower, more voluntary cortical processing. Here we examined whether irrelevant faces with varied emotional expressions interfere with a covert attention task in macaque monkeys. In the task, the monkeys monitored a target grating in the periphery for a subtle color change while ignoring distracters that included faces appearing elsewhere on the screen. The onset time of distracter faces before the target change, as well as their spatial proximity to the target, was varied from trial to trial. The presence of faces, especially faces with emotional expressions, interfered with the task, indicating a competition for attentional resources between the task and the face stimuli. However, this interference was significant only when faces were presented for greater than 200 ms. Emotional faces also affected saccade velocity and reduced pupillary reflex. Our results indicate that the attraction of attention by emotional faces in the monkey takes a considerable amount of processing time, possibly involving cortical–subcortical interactions. Intranasal application of the hormone oxytocin ameliorated the interfering effects of faces. Together these results provide evidence for slow modulation of attention by emotional distracters, which likely involves oxytocinergic brain circuits. |
Alexandre Lang; Chrystal Gaertner; Elham Ghassemi; Qing Yang; Christophe Orssaud; Zoï Kapoula Saccade-vergence properties remain more stable over short-time repetition under overlap than under gap task: A preliminary study Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 372, 2014. @article{Lang2014, Under natural circumstances, saccade-vergence eye movements are among the most frequently occurring. This study examines the properties of such movements focusing on short-term repetition effects. Are such movements robust over time or are they subject to fatigue? Twelve healthy adults performed convergent and divergent combined eye movements either in a gap task (i.e., 200 ms between the end of the fixation stimulus and the beginning of the target stimulus) or in an overlap task (i.e., the peripheral target begins 200 ms before the end of the fixation stimulus). Latencies were shorter in the gap task than in the overlap task for both saccade and vergence components. Repetition had no effect on latency, which is a novel result. In both tasks, saccades were initiated later and executed faster (mean and peak velocities) than the vergence component. The mean and peak velocities of both components decreased over trials in the gap task but remained constant in the overlap task. This result is also novel and has some clinical implications. Another novel result concerns the accuracy of the saccade component, which was better in the gap than in the overlap task. The accuracy also decreased over trials in the gap task but remained constant in the overlap task. The major result of this study is that under a controlled mode of initiation (overlap task) properties of combined eye movements are more stable than under automatic triggering (gap task). These results are discussed in terms of saccade-vergence interactions, convergence-divergence specificities and repetition versus adaptation protocols. |
Nicholas D. Lange; Daniel R. Buttaccio; Eddy J. Davelaar; Rick P. Thomas Using the memory activation capture (MAC) procedure to investigate the temporal dynamics of hypothesis generation Journal Article In: Memory & Cognition, vol. 42, no. 2, pp. 264–274, 2014. @article{Lange2014, Research investigating top-down capture has demonstrated a coupling of working memory content with attention and eye movements. By capitalizing on this relationship, we have developed a novel methodology, called the memory activation capture (MAC) procedure, for measuring the dynamics of working memory content supporting complex cognitive tasks (e.g., decision making, problem solving). The MAC procedure employs briefly presented visual arrays containing task-relevant information at critical points in a task. By observing which items are preferentially fixated, we gain a measure of working memory content as the task evolves through time. The efficacy of the MAC procedure was demonstrated in a dynamic hypothesis generation task in which some of its advantages over existing methods for measuring changes in the contents of working memory over time are highlighted. In two experiments, the MAC procedure was able to detect the hypothesis that was retrieved and placed into working memory. Moreover, the results from Experiment 2 suggest a two-stage process following hypothesis retrieval, whereby the hypothesis undergoes a brief period of heightened activation before entering a lower activation state in which it is maintained for output. The results of both experiments are of additional general interest, as they represent the first demonstrations of top-down capture driven by participant-established WM content retrieved from long-term memory. |
K. Lankinen; Jukka Saari; Riitta Hari; Miika Koskinen Intersubject consistency of cortical MEG signals during movie viewing Journal Article In: NeuroImage, vol. 92, pp. 217–224, 2014. @article{Lankinen2014, According to recent functional magnetic resonance imaging (fMRI) studies, spectators of a movie may share similar spatiotemporal patterns of brain activity. We aimed to extend these findings of intersubject correlation to temporally accurate single-trial magnetoencephalography (MEG). A silent 15-min black-and-white movie was shown to eight subjects twice. We adopted a spatial filtering model and estimated its parameter values by using multi-set canonical correlation analysis (M-CCA) so that the intersubject correlation was maximized. The procedure resulted in multiple (mutually uncorrelated) time-courses with statistically significant intersubject correlations at frequencies below 10 Hz; the maximum correlation was 0.28 ± 0.075 in the ≤1 Hz band. Moreover, the 24-Hz frame rate elicited steady-state responses with statistically significant intersubject correlations up to 0.29 ± 0.12. To assess the brain origin of the across-subjects correlated signals, the time-courses were correlated with minimum-norm source current estimates (MNEs) projected to the cortex. The time series implied across-subjects synchronous activity in the early visual, posterior and inferior parietal, lateral temporooccipital, and motor cortices, and in the superior temporal sulcus (STS) bilaterally. These findings demonstrate the capability of the proposed methodology to uncover cortical MEG signatures from single-trial signals that are consistent across spectators of a movie. |
Axel Larsen Deconstructing mental rotation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1072–1091, 2014. @article{Larsen2014, A random walk model of the classical mental rotation task is explored in two experiments. By assuming that a mental rotation is repeated until sufficient evidence for a match/mismatch is obtained, the model accounts for the approximately linearly increasing reaction times (RTs) on positive trials, flat RTs on negative trials, false alarms and miss rates, effects of complexity, and for the number of eye movement switches between stimuli as functions of angular difference in orientation. Analysis of eye movements supports key aspects of the model and shows that initial processing time is roughly constant until the first saccade switch between stimulus objects, while the duration of the remaining trial increases approximately linearly as a function of angular discrepancy. The increment results from additive effects of (a) a linear increase in the number of saccade switches between stimulus objects, (b) a linear increase in the number of saccades on a stimulus, and (c) a linear increase in the number and in the duration of fixations on a stimulus object. The fixation duration increment was the same on simple and complex trials (about 15 ms per 60°), which suggests that the critical orientation alignment takes place during fixations at very high speed. |
Adam M. Larson; Tyler E. Freeman; Ryan V. Ringer; Lester C. Loschky The spatiotemporal dynamics of scene gist recognition Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 2, pp. 471–487, 2014. @article{Larson2014, Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image's basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space-within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward. |
Nida Latif; Arlene Gehmacher; Monica S. Castelhano; Kevin G. Munhall The art of gaze guidance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 33–39, 2014. @article{Latif2014, An ongoing challenge in scene perception is identifying the factors that influence how we explore our visual world. By using multiple versions of paintings as a tool to control for high-level influences, we show that variation in the visual details of a painting causes differences in observers' gaze despite constant task and content. Further, we show that by switching locations of highly salient regions through textural manipulation, a corresponding switch in eye movement patterns is observed. Our results present the finding that salient regions and gaze behavior are not simply correlated; variation in saliency through textural differences causes an observer to direct their viewing accordingly. This work demonstrates the direct contribution of low-level factors in visual exploration by showing that examination of a scene, even for aesthetic purposes, can be easily manipulated by altering the low-level properties and hence, the saliency of the scene. |
Claudio Lavín; René San Martín; Eduardo Rosales Jubal Pupil dilation signals uncertainty and surprise in a learning gambling task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 218, 2014. @article{Lavin2014, Pupil dilation under constant illumination is a physiological marker whose modulation is related to several cognitive functions involved in daily decision making. There is evidence for a role of pupil dilation change during decision-making tasks associated with uncertainty, reward-prediction errors and surprise. However, while some work suggests that pupil dilation is mainly modulated by reward predictions, others point out that this marker is related to uncertainty signaling and surprise. Supporting the latter hypothesis, the neural substrate of this marker is related to noradrenaline (NA) activity, which has also been related to uncertainty signaling. In this work we aimed to test whether pupil dilation is a marker for uncertainty and surprise in a learning task. We recorded pupil dilation responses in 10 participants performing the Iowa Gambling Task (IGT), a decision-making task that requires learning and constant monitoring of outcomes' feedback, which are important variables within the traditional study of human decision making. Results showed that pupil dilation changes were modulated by learned uncertainty and surprise regardless of feedback magnitudes. Interestingly, greater pupil dilation changes were found during positive feedback (PF) presentation when there was lower uncertainty about a future negative feedback (NF), and by surprise during NF presentation. These results support the hypothesis that pupil dilation is a marker of learned uncertainty, and may be used as a marker of NA activity facing unfamiliar situations in humans. |
Rebecca P. Lawson; Ben Seymour; Eleanor Loh; Antoine Lutti; Raymond J. Dolan; Peter Dayan; Nikolaus Weiskopf; Jonathan P. Roiser The habenula encodes negative motivational value associated with primary punishment in humans Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 32, pp. 11858–11863, 2014. @article{Lawson2014, Learning what to approach, and what to avoid, involves assigning value to environmental cues that predict positive and negative events. Studies in animals indicate that the lateral habenula encodes the previously learned negative motivational value of stimuli. However, involvement of the habenula in dynamic trial-by-trial aversive learning has not been assessed, and the functional role of this structure in humans remains poorly characterized, in part, due to its small size. Using high-resolution functional neuroimaging and computational modeling of reinforcement learning, we demonstrate positive habenula responses to the dynamically changing values of cues signaling painful electric shocks, which predict behavioral suppression of responses to those cues across individuals. By contrast, negative habenula responses to monetary reward cue values predict behavioral invigoration. Our findings show that the habenula plays a key role in an online aversive learning system and in generating associated motivated behavior in humans. |
Stephen Layfield; Wesley Burge; William G. Mitchell; Lesley A. Ross; Christine Denning; Frank Amthor; Kristina M. Visscher The effect of speed of processing training on microsaccade amplitude Journal Article In: PLoS ONE, vol. 9, no. 9, pp. e107808, 2014. @article{Layfield2014, Older adults experience cognitive deficits that can lead to driving errors and a loss of mobility. Fortunately, some of these deficits can be ameliorated with targeted interventions which improve the speed and accuracy of simultaneous attention to a central and a peripheral stimulus called Speed of Processing training. To date, the mechanisms behind this effective training are unknown. We hypothesized that one potential mechanism underlying this training is a change in distribution of eye movements of different amplitudes. Microsaccades are small amplitude eye movements made when fixating on a stimulus, and are thought to counteract the "visual fading" that occurs when static stimuli are presented. Due to retinal anatomy, larger microsaccadic eye movements are needed to move a peripheral stimulus between receptive fields and counteract visual fading. Alternatively, larger microsaccades may decrease performance due to neural suppression. Because larger microsaccades could aid or hinder peripheral vision, we examine the distribution of microsaccades during stimulus presentation. Our results indicate that there is no statistically significant change in the proportion of large amplitude microsaccades during a Useful Field of View-like task after training in a small sample of older adults. Speed of Processing training does not appear to result in changes in microsaccade amplitude, suggesting that the mechanism underlying Speed of Processing training is unlikely to rely on microsaccades. |
Ada Le; Matthias Niemeier Visual field preferences of object analysis for grasping with one hand Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 782, 2014. @article{Le2014, When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier, 2013a,b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2014). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling with the left hand showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields. |
Chia-lin Lee; Daniel Mirman; Laurel J. Buxbaum Abnormal dynamics of activation of object use information in apraxia: Evidence from eyetracking Journal Article In: Neuropsychologia, vol. 59, no. 1, pp. 13–26, 2014. @article{Lee2014, Action representations associated with object use may be incidentally activated during visual object processing, and the time course of such activations may be influenced by lexical-semantic context (e.g., Lee, Middleton, Mirman, Kalénine, & Buxbaum (2012). Journal of Experimental Psychology: Human Perception and Performance, 39(1), 257-270). In this study we used the "visual world" eye-tracking paradigm to examine whether a deficit in producing skilled object-use actions (apraxia) is associated with abnormalities in incidental activation of action information, and assessed the neuroanatomical substrates of any such deficits. Twenty left hemisphere stroke patients, ten of whom were apraxic, performed a task requiring identification of a named object in a visual display containing manipulation-related and unrelated distractor objects. Manipulation relationships among objects were not relevant to the identification task. Objects were cued with neutral ("S/he saw the. . .."), or action-relevant ("S/he used the. . ..") sentences. Non-apraxic participants looked at use-related non-target objects significantly more than at unrelated non-target objects when cued both by neutral and action-relevant sentences, indicating that action information is incidentally activated. In contrast, apraxic participants showed delayed activation of manipulation-based action information during object identification when cued by neutral sentences. The magnitude of delayed activation in the neutral sentence condition was reliably predicted by lower scores on a test of gesture production to viewed objects, as well as by lesion loci in the inferior parietal and posterior temporal lobes. However, when cued by a sentence containing an action verb, apraxic participants showed fixation patterns that were statistically indistinguishable from non-apraxic controls. In support of grounded theories of cognition, these results suggest that apraxia and temporal-parietal lesions may be associated with abnormalities in incidental activation of action information from objects. Further, they suggest that the previously-observed facilitative role of action verbs in the retrieval of object-related action information extends to participants with apraxia. |
Dongpyo Lee; Howard Poizner; Daniel M. Corcos; Denise Y. P. Henriques Unconstrained reaching modulates eye-hand coupling Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 211–223, 2014. @article{Lee2014b, Eye–hand coordination is a crucial element of goal-directed movements. However, few studies have looked at the extent to which unconstrained movements of the eyes and hand made to targets influence each other. We studied human participants who moved either their eyes or both their eyes and hand to one of three static or flashed targets presented in 3D space. The eyes were directed, and hand was located at a common start position on either the right or left side of the body. We found that the velocity and scatter of memory-guided saccades (flashed targets) differed significantly when produced in combination with a reaching movement than when produced alone. Specifically, when accompanied by a reach, peak saccadic velocities were lower than when the eye moved alone. Peak saccade velocities, as well as latencies, were also highly correlated with those for reaching movements, especially for the briefly flashed targets compared to the continuous visible target. The scatter of saccade endpoints was greater when the saccades were produced with the reaching movement than when produced without, and the size of the scatter for both saccades and reaches was weakly correlated. These findings suggest that the saccades and reaches made to 3D targets are weakly to moderately coupled both temporally and spatially and that this is partly the result of the arm movement influencing the eye movement. Taken together, this study provides further evidence that the oculomotor and arm motor systems interact above and beyond any common target representations shared by the two motor systems. |
Kang Woo Lee; Yubu Lee Scanpath generated by cue-driven activation and spatial strategy: A comparative study Journal Article In: Cognitive Computation, vol. 6, no. 3, pp. 585–594, 2014. @article{Lee2014a, A comparative study of a cued face search task is presented in this paper. Human participants and a computer model carried out a task in which they were required to locate a color-cued target face. Human-generated eye fixations and scanpaths were compared with those generated by the computational model. Throughout the comparison, we considered the similarities and dissimilarities between the two systems' performances. The results show that human eye fixations in a valid cue search are highly correlated with the computer-generated fixation points in a valid cue search, but not with those in random and invalid cue searches. Moreover, the comparison between human- and computer-generated scanpaths showed that the scanpath that links the fixation points is not randomly generated. Our results imply that eye movement is accomplished not only by cue-driven activation, but also by a spatial strategy. |
Guojie Ma; Xingshan Li; Keith Rayner Word segmentation of overlapping ambiguous strings during Chinese reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1046–1059, 2014. @article{Ma2014, In 3 experiments, we tested 3 possible mechanisms for segmenting overlapping ambiguous strings in Chinese reading. The first 2 characters and the last 2 characters in a 3-character ambiguous string could both constitute a word in the reported studies. The left-priority hypothesis assumes that the word on the left has an advantage in the competition and the other word cannot be processed until the word on the left is recognized. The independent processing hypothesis assumes that words in different positions are processed simultaneously and independently, and the word segmentation ambiguity cannot be settled without the help of sentence context. The competition hypothesis assumes that all of the words compete for a single winner. The results support a competition account that the characters in the perceptual span activate all of the words they can constitute, and any word can win the competition if its activation is high enough. |
Christine Macare; Thomas Meindl; Igor Nenadic; Dan Rujescu; Ulrich Ettinger Preliminary findings on the heritability of the neural correlates of response inhibition Journal Article In: Biological Psychology, vol. 103, no. 1, pp. 19–23, 2014. @article{Macare2014, Imaging genetics examines genetic influences on brain structure and function. This preliminary study tested a fundamental assumption of that approach by estimating the heritability of the blood oxygen level dependent (BOLD) signal during antisaccades, a measure of response inhibition impaired in different psychiatric conditions. One hundred thirty-two healthy same-sex reared-together twins (90 monozygotic (MZ; 32 male) and 42 dizygotic (DZ; 24 male)) performed antisaccades in the laboratory. Of these, 96 twins (60 MZ, 28 male; 36 DZ, 22 male) subsequently underwent functional magnetic resonance imaging (fMRI) during antisaccades. Variation in antisaccade direction errors in the laboratory showed significant heritability (47%; 95% confidence interval (CI) 22-65). In fMRI, the contrast of antisaccades with prosaccades yielded BOLD signal in fronto-parietal-subcortical networks. Twin modelling provided tentative evidence of significant heritability (50%, 95% CI: 18-72) of BOLD in the left thalamus only. However, due to the limited power to detect heritability in this study, replications in larger samples are needed. |
Bart Machilsen; Johan Wagemans Both predictability and familiarity facilitate contour integration Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–15, 2014. @article{Machilsen2014, Research has shown that contour detection is impaired in the visual periphery for snake-shaped Gabor contours but not for circular and elliptical contours. This discrepancy in findings could be due to differences in intrinsic shape properties, including shape closure and curvature variation, as well as to differences in stimulus predictability and familiarity. In a detection task using only circular contours, the target shape is both more familiar and more predictable to the observer compared with a detection task in which a different snake-shaped contour is presented on each trial. In this study, we investigated the effects of stimulus familiarity and predictability on contour integration by manipulating and disentangling the familiarity and predictability of snake-like stimuli. We manipulated stimulus familiarity by extensively training observers with one particular snake shape. Predictability was varied by alternating trial blocks with only a single target shape and trial blocks with multiple target shapes. Our results show that both predictability and familiarity facilitated contour integration, which constitutes novel behavioral evidence for the adaptivity of the contour integration mechanism in humans. If familiarity or predictability facilitated contour integration in the periphery specifically, this could explain the discrepant findings obtained with snake contours as compared with circles or ellipses. However, we found that their facilitatory effects did not differ between central and peripheral vision and thus cannot explain that particular discrepancy in the literature. |
W. Joseph MacInnes; Amelia R. Hunt Attentional load interferes with target localization across saccades Journal Article In: Experimental Brain Research, vol. 232, no. 12, pp. 3737–3748, 2014. @article{MacInnes2014, The retinal positions of objects in the world change with each eye movement, but we seem to have little trouble keeping track of spatial information from one fixation to the next. We examined the role of attention in trans-saccadic localization by asking participants to localize targets while performing an attentionally demanding secondary task. In the first experiment, attentional load decreased localization precision for a remembered target, but only when a saccade intervened between target presentation and report. We then repeated the experiment and included a salient landmark that shifted on half the trials. The shifting landmark had a larger effect on localization under high load, indicating that observers rely more on landmarks to make localization judgments under high than under low attentional load. The results suggest that attention facilitates trans-saccadic localization judgments based on spatial updating of gaze-centered coordinates when visual landmarks are not available. The availability of reliable landmarks (present in most natural circumstances) can compensate for the effects of scarce attentional resources on trans-saccadic localization. |
W. Joseph MacInnes; Amelia R. Hunt; Matthew D. Hilchey; Raymond M. Klein Driving forces in free visual search: An ethology Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 2, pp. 280–295, 2014. @article{MacInnes2014a, Visual search typically involves sequences of eye movements under the constraints of a specific scene and specific goals. Visual search has been used as an experimental paradigm to study the interplay of scene salience and top-down goals, as well as various aspects of vision, attention, and memory, usually by introducing a secondary task or by controlling and manipulating the search environment. An ethology is a study of an animal in its natural environment, and here we examine the fixation patterns of the human animal searching a series of challenging illustrated scenes that are well-known in popular culture. The search was free of secondary tasks, probes, and other distractions. Our goal was to describe saccadic behavior, including patterns of fixation duration, saccade amplitude, and angular direction. In particular, we employed both new and established techniques for identifying top-down strategies, any influences of bottom-up image salience, and the midlevel attentional effects of saccadic momentum and inhibition of return. The visual search dynamics that we observed and quantified demonstrate that saccades are not independently generated and incorporate distinct influences from strategy, salience, and attention. Sequential dependencies consistent with inhibition of return also emerged from our analyses. |
Indra T. Mahayana; Chia-Lun Liu; Chi Fu Chang; Daisy L. Hung; Ovid J. L. Tzeng; Chi-Hung Juan; Neil G. Muggleton Far-space neglect in conjunction but not feature search following transcranial magnetic stimulation over right posterior parietal cortex Journal Article In: Journal of Neurophysiology, vol. 111, no. 4, pp. 705–714, 2014. @article{Mahayana2014, Near- and far-space coding in the human brain is a dynamic process. Areas in dorsal, as well as ventral visual association cortex, including right posterior parietal cortex (rPPC), right frontal eye field (rFEF), and right ventral occipital cortex (rVO), have been shown to be important in visuospatial processing, but the involvement of these areas when the information is in near or far space remains unclear. There is a need for investigations of these representations to help explain the pathophysiology of hemispatial neglect, and the role of near and far space is crucial to this. We used a conjunction visual search task using an elliptical array to investigate the effects of transcranial magnetic stimulation delivered over rFEF, rPPC, and rVO on the processing of targets in near and far space and at a range of horizontal eccentricities. As in previous studies, we found that rVO was involved in far-space search, and rFEF was involved regardless of the distance to the array. It was found that rPPC was involved in search only in far space, with a neglect-like effect when the target was located in the most eccentric locations. No effects were seen for any site for a feature search task. As the search arrays had higher predictability with respect to target location than is often the case, these data may form a basis for clarifying both the role of PPC in visual search and its contribution to neglect, as well as the importance of near and far space in these. |
Michaela Mahlberg; Kathy Conklin; Marie-Josée Bisson Reading Dickens's characters: Employing psycholinguistic methods to investigate the cognitive reality of patterns in texts Journal Article In: Language and Literature, vol. 23, no. 4, pp. 369–388, 2014. @article{Mahlberg2014, This article reports the findings of an empirical study that uses eye-tracking and follow-up interviews as methods to investigate how participants read body language clusters in novels by Charles Dickens. The study builds on previous corpus stylistic work that has identified patterns of body language presentation as techniques of characterisation in Dickens (Mahlberg, 2013). The article focuses on the reading of 'clusters', that is, repeated sequences of words. It is set in a research context that brings together observations from both corpus linguistics and psycholinguistics on the processing of repeated patterns. The results show that the body language clusters are read significantly faster than the overall sample extracts which suggests that the clusters are stored as units in the brain. This finding is complemented by the results of the follow-up questions which indicate that readers do not seem to refer to the clusters when talking about character information, although they are able to refer to clusters when biased prompts are used to elicit information. Beyond the specific results of the study, this article makes a contribution to the development of complementary methods in literary stylistics and it points to directions for further subclassifications of clusters that could not be achieved on the basis of corpus data alone. |
Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex Simulated disparity and peripheral blur interact during binocular fusion Journal Article In: Journal of Vision, vol. 14, no. 8, pp. 1–14, 2014. @article{Maiello2014, We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. |
Timothy Leffel; Miriam Lauter; Masha Westerlund; Liina Pylkkänen Restrictive vs. non-restrictive composition: A magnetoencephalography study Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 10, pp. 1191–1204, 2014. @article{Leffel2014, Recent research on the brain mechanisms underlying language processing has implicated the left anterior temporal lobe (LATL) as a central region for the composition of simple phrases. Because these studies typically present their critical stimuli without contextual information, the sensitivity of LATL responses to contextual factors is unknown. In this magnetoencephalography (MEG) study, we employed a simple question-answer paradigm to manipulate whether a prenominal adjective or determiner is interpreted restrictively, i.e., as limiting the set of entities under discussion. Our results show that the LATL is sensitive to restriction, with restrictive composition eliciting higher responses than non-restrictive composition. However, this effect was only observed when the restricting element was a determiner, adjectival stimuli showing the opposite pattern, which we hypothesise to be driven by the special pragmatic properties of non-restrictive adjectives. Overall, our results demonstrate a robust sensitivity of the LATL to high level contextual and potentially also pragmatic factors. |
Carly J. Leonard; Benjamin M. Robinson; Britta Hahn; James M. Gold; Steven J. Luck Enhanced distraction by magnocellular salience signals in schizophrenia Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 359–366, 2014. @article{Leonard2014, Research on schizophrenia has provided evidence of both impaired attentional control and dysfunctional magnocellular sensory processing. The present study tested the hypothesis that these impairments may be related, such that people with schizophrenia would be differentially distracted by stimuli that strongly activate the magnocellular pathway. To accomplish this, we used a visual attention paradigm from the basic cognitive neuroscience literature designed to assess the capture of attention by salient but irrelevant stimuli. Participants searched for a target shape in an array of non-target shapes. On some trials, a salient distractor was presented that either selectively activated the parvocellular system (parvo-biased distractors) or activated both the magnocellular and parvocellular systems (magno+parvo distractors). For both manual reaction times and eye movement measures, the magno+parvo distractors captured attention more strongly than the parvo-biased distractors in people with schizophrenia, but the opposite pattern was observed in matched healthy control participants. These results indicate that attentional control deficits in schizophrenia may arise, at least in part, by means of an interaction with magnocellular sensory dysfunction. |
Benjamin D. Lester; Paul Dassonville The role of the right superior parietal lobule in processing visual context for the establishment of the egocentric reference frame Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 10, pp. 2201–2209, 2014. @article{Lester2014, Visual cues contribute to the creation of an observer's egocentric reference frame, within which the locations and orientations of objects can be judged. However, these cues can also be misleading. In the rod-and-frame illusion, for example, a large tilted frame distorts the observer's sense of vertical, causing an enclosed rod to appear tilted in the opposite direction. To determine the brain region responsible for processing these spatial cues, we used TMS to suppress neural activity in the superior parietal lobule of healthy observers. Stimulation of the right hemisphere, but not the left, caused a significant reduction in rod-and-frame susceptibility. In contrast, a tilt illusion caused by a mechanism that does not involve a distortion of the observer's egocentric reference frame was unaffected. These results demonstrate that the right superior parietal lobule is actively involved in processing the contextual cues that contribute to our perception of egocentric space. |
Chi Yui Leung; Masatoshi Sugiura; Daisuke Abe; Lisa Yoshikawa The perceptual span in second language reading: An eye-tracking study using a gaze-contingent moving window paradigm Journal Article In: Open Journal of Modern Linguistics, vol. 4, pp. 585–594, 2014. @article{Leung2014, The perceptual span, which is the visual area providing useful information to a reader during eye fixation, has been well investigated among native or first language (L1) readers, but not among second language (L2) readers. Our goal was to investigate the size of the perceptual span among Japanese university students who learn English as a foreign language (EFL) to investigate parafoveal processing during L2 reading. In an experiment using the gaze-contingent moving window paradigm, we compared perceptual span between Japanese EFL readers (N = 42) and native English L1 readers (N = 14). Our results showed that (1) the EFL readers had a smaller perceptual span than the L1 readers did, and (2) the facilitating effect of parafoveal information was greater for faster EFL readers than it was for slower EFL readers. These findings provide evidence that, compared with L1 readers, EFL readers can utilize only limited parafoveal information during fixation. |
Delphine Lévy-Bencheton; Denis Pélisson; Muriel T. N. Panouillères; Christian Urquizar; Caroline Tilikete; Laure Pisella Adaptation of scanning saccades co-occurs in different coordinate systems Journal Article In: Journal of Neurophysiology, vol. 111, no. 12, pp. 2505–2515, 2014. @article{LevyBencheton2014, Plastic changes of saccades (i.e., following saccadic adaptation) do not transfer between oppositely directed saccades, except when multiple directions are trained simultaneously, suggesting a saccadic planning in retinotopic coordinates. Interestingly, a recent study in human healthy subjects revealed that after an adaptive increase of rightward-scanning saccades, both leftward and rightward double-step, memory-guided saccades, triggered toward the adapted endpoint, were modified, revealing that target location was coded in spatial coordinates (Zimmermann et al. 2011). However, as the computer screen provided a visual frame, one alternative hypothesis could be a coding in allocentric coordinates. Here, we questioned whether adaptive modifications of saccadic planning occur in multiple coordinate systems. We reproduced the paradigm of Zimmermann et al. (2011) using target light-emitting diodes in the dark, with and without a visual frame, and tested different saccades before and after adaptation. With double-step, memory-guided saccades, we reproduced the transfer of adaptation to leftward saccades with the visual frame but not without, suggesting that the coordinate system used for saccade planning, when the frame is visible, is allocentric rather than spatiotopic. With single-step, memory-guided saccades, adaptation transferred to leftward saccades, both with and without the visual frame, revealing a target localization in a coordinate system that is neither retinotopic nor allocentric. Finally, with single-step, visually guided saccades, the classical, unidirectional pattern of amplitude change was reproduced, revealing retinotopic coordinate coding. 
These experiments indicate that the same procedure of adaptation modifies saccadic planning in multiple coordinate systems in parallel, each of them revealed by the use of different saccade tasks in postadaptation. |
George L. Malcolm; Antje Nuthmann; Philippe G. Schyns Beyond gist: Strategic and incremental information accumulation for scene categorization Journal Article In: Psychological Science, vol. 25, no. 5, pp. 1087–1097, 2014. @article{Malcolm2014, Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or more specifically a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing. |
Jonathan T. Mall; Candice C. Morey; Michael J. Wolff; Franziska Lehnert In: Attention, Perception, & Psychophysics, vol. 76, no. 7, pp. 1998–2014, 2014. @article{Mall2014, Selective attention and working memory capacity (WMC) are related constructs, but debate about the manner in which they are related remains active. One elegant explanation of variance in WMC is that the efficiency of filtering irrelevant information is the crucial determining factor, rather than differences in capacity per se. We examined this hypothesis by relating WMC (as measured by complex span tasks) to accuracy and eye movements during visual change detection tasks with different degrees of attentional filtering and allocation requirements. Our results did not indicate strong filtering differences between high- and low-WMC groups, and where differences were observed, they were counter to those predicted by the strongest attentional filtering hypothesis. Bayes factors indicated evidence favoring positive or null relationships between WMC and correct responses to unemphasized information, as well as between WMC and the time spent looking at unemphasized information. These findings are consistent with the hypothesis that individual differences in storage capacity, not only filtering efficiency, underlie individual differences in working memory. |
Ryan T. Maloney; Tamara L. Watson; Colin W. G. Clifford Determinants of motion response anisotropies in human early visual cortex: The role of configuration and eccentricity Journal Article In: NeuroImage, vol. 100, pp. 564–579, 2014. @article{Maloney2014, Anisotropies in the cortical representation of various stimulus parameters can reveal the fundamental mechanisms by which sensory properties are analysed and coded by the brain. One example is the preference for motion radial to the point of fixation (i.e. centripetal or centrifugal) exhibited in mammalian visual cortex. In two experiments, this study used functional magnetic resonance imaging (fMRI) to explore the determinants of these radial biases for motion in functionally-defined areas of human early visual cortex, and in particular their dependence upon eccentricity which has been indicated in recent reports. In one experiment, the cortical response to wide-field random dot kinematograms forming 16 different complex motion patterns (including centrifugal, centripetal, rotational and spiral motion) was measured. The response was analysed according to preferred eccentricity within four different eccentricity ranges. Response anisotropies were characterised by enhanced activity for centripetal or centrifugal patterns that changed systematically with eccentricity in visual areas V1-V3 and hV4 (but not V3A/B or V5/MT+). Responses evolved from a preference for centrifugal over centripetal patterns close to the fovea, to a preference for centripetal over centrifugal at the most peripheral region stimulated, in agreement with previous work. These effects were strongest in V2 and V3. In a second experiment, the stimuli were restricted to within narrow annuli either close to the fovea (0.75-1.88°) or further in the periphery (4.82-6.28°), in a way that preserved the local motion information available in the first experiment.
In this configuration a preference for radial motion (centripetal or centrifugal) persisted but the dependence upon eccentricity disappeared. Again this was clearest in V2 and V3. A novel interpretation of the dependence upon eccentricity of motion anisotropies in early visual cortex is offered that takes into account the spatiotemporal "predictability" of the moving pattern. Such stimulus predictability, and its relationship to models of predictive coding, has found considerable support in recent years in accounting for a number of other perceptual and neural phenomena. |
Simona Mancini; Nicola Molinaro; Douglas J. Davidson; Alberto Avilés; Manuel Carreiras Person and the syntax-discourse interface: An eye-tracking study of agreement Journal Article In: Journal of Memory and Language, vol. 76, pp. 141–157, 2014. @article{Mancini2014, The time-course of agreement processing was investigated through three eye-tracking experiments and one grammaticality judgment task by making use of the Spanish Unagreement pattern, which allows the presence of a 3rd person plural subject followed by a 1st person plural verb, as in Los manifestantes anunciamos una huelga (The protesters[3.pl] announced[1.pl] a strike). Grammaticality is ensured by re-interpreting the subject as 1st person plural, thereby changing the underlying discourse composition of the sentence (We protesters announced a strike). The comparison of Unagreement with structurally similar sentences (Experiment 1), truly person-anomalous sentences (Experiments 2 and 3) and discourse-incongruent sentences (Experiment 4) revealed a clear dissociation between morphosyntactic-related and discourse-related analysis in agreement comprehension. The constant first-pass effect elicited by Unagreement with respect to structurally similar (grammatical and ungrammatical) sentences across the four experiments evidences the sensitivity of early stages to morphosyntactic evaluation, while the differential effect for discourse-congruous and discourse-incongruous sentences in later measures suggests that discourse-related analyses are dealt with by the parser in subsequent stages of processing. |
Anne Mandel; Siiri Helokunnas; Elina Pihko; Riitta Hari Neuromagnetic brain responses to other person's eye blinks seen on video Journal Article In: European Journal of Neuroscience, vol. 40, pp. 2576–2580, 2014. @article{Mandel2014, Eye blinks, typically occurring 15–20 times per minute, rarely capture attention during face-to-face interaction. To determine the extent to which eye blinks affect the viewer's brain activity, we recorded magnetoencephalographic brain responses to natural blinks, and to the same blinks slowed down to 38% of the original speed. The stimuli were presented on video once every 2.3–6.2 s. As a control, we presented two horizontal black bars moving with the same time courses and the same extent as the eyelids in the blink video. Both types of blinks and bars elicited clear responses peaking at about 200 ms in the occipital areas, with no systematic differences between hemispheres. For the bars, these main responses were (as expected) weaker (by 24%) and later (by 33 ms) to slow-motion than normal-speed stimuli. For blinks, however, the responses to both normal-speed and slow-motion stimuli were of the same amplitude and latency. Our results demonstrate that the brain not only responds to other persons' eye blinks, but that the responses are as fast and of equal size even when the blinks are considerably slowed down. We interpret this finding to reflect the increased social salience of the slowed-down blinks that counteracted the general tendency of the brain to react more weakly and more slowly to slowly- vs. quickly-changing stimuli. This finding may relate to the social importance of facial gestures, including eye blinks. |
Klara Marečková; Jennifer S. Perrin; Irum Nawaz Khan; Claire Lawrence; Erin Dickie; Douglas A. McQuiggan; Tomáš Paus Hormonal contraceptives, menstrual cycle and brain response to faces Journal Article In: Social Cognitive and Affective Neuroscience, vol. 9, no. 2, pp. 191–200, 2014. @article{Mareckova2014, Both behavioral and neuroimaging evidence support a female advantage in the perception of human faces. Here we explored the possibility that this relationship may be partially mediated by female sex hormones by investigating the relationship between the brain's response to faces and the use of oral contraceptives, as well as the phase of the menstrual cycle. First, functional magnetic resonance images were acquired in 20 young women [10 freely cycling and 10 taking oral contraception (OC)] during two phases of their cycle: mid-cycle and menstruation. We found stronger neural responses to faces in the right fusiform face area (FFA) in women taking oral contraceptives (vs freely cycling women) and during mid-cycle (vs menstruation) in both groups. Mean blood oxygenation level-dependent response in both left and right FFA increased as function of the duration of OC use. Next, this relationship between the use of OC and FFA response was replicated in an independent sample of 110 adolescent girls. Finally in a parallel behavioral study carried out in another sample of women, we found no evidence of differences in the pattern of eye movements while viewing faces between freely cycling women vs those taking oral contraceptives. The imaging findings might indicate enhanced processing of social cues in women taking OC and women during mid-cycle. |
Jun Maruta; Kristin J. Heaton; Alexis L. Maule; Jamshid Ghajar Predictive visual tracking: Specificity in mild traumatic brain injury and sleep deprivation Journal Article In: Military Medicine, vol. 179, no. 6, pp. 619–625, 2014. @article{Maruta2014, We tested whether reduced cognitive function associated with mild traumatic brain injury (mTBI) and sleep deprivation can be detected and distinguished using indices of predictive visual tracking. A circular visual tracking test was given to 13 patients with acute mTBI (recruited within 2 weeks of injury), 127 normal control subjects, and 43 healthy subjects who were fatigued by 26-hour sleep deprivation. Eye movement was monitored with video-oculography. In the mTBI-related portion of the study, visual tracking performance of acute mTBI patients was significantly worse than normal subjects (p < 0.001). In the sleep-deprivation-related portion of the study, no change was detected between the two baseline measures separated by 2 to 3 weeks, but the 26-hour sleep deprivation significantly degraded the visual tracking performance (p < 0.001). The mTBI subjects had substantially worse visual tracking than sleep-deprived subjects that could also be identified with different visual tracking indices, indicating possible different neurophysiological mechanisms. Results suggest that cognitive impairment associated with mTBI and fatigue may be triaged with the aid of visual tracking measures. |
Alexandra List; Lucica Iordanescu; Marcia Grabowecky; Satoru Suzuki Haptic guidance of overt visual attention Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 8, pp. 2221–2228, 2014. @article{List2014, Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target-a measure of overt visual attention-was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention. |
Pingping Liu; Weijun Li; Buxin Han; Xingshan Li Effects of anomalous characters and small stroke omissions on eye movements during the reading of Chinese sentences Journal Article In: Ergonomics, vol. 57, no. 11, pp. 1659–1669, 2014. @article{Liu2014, We investigated the influence of typographical errors (typos) on eye movements and word recognition in Chinese reading. Participants' eye movements were tracked as they read sentences in which the target words were presented (1) normally, (2) with the initial stroke of the first characters removed (the omitted stroke condition) or (3) the first characters replaced by anomalous characters (the anomalous character condition). The results indicated that anomalous characters caused longer fixation durations and shorter outgoing forward saccade lengths than the correct words. This finding is consistent with the prediction of the theory of the processing-based strategy. Additionally, anomalous characters strongly disrupted lexical processing and whole sentence comprehension, but small stroke omissions did not. Implications of the effect of processing difficulty on forward saccade targeting for models of eye movement control during Chinese reading are discussed. |
Pingping Liu; Xingshan Li Inserting spaces before and after words affects word processing differently in Chinese: Evidence from eye movements Journal Article In: British Journal of Psychology, vol. 105, no. 1, pp. 57–68, 2014. @article{Liu2014a, Unlike in English, there are no spaces between printed words in Chinese. In this study, we explored how inserting a space before or after a word affects the processing of that word in Chinese reading. Native Chinese readers' eye movements were monitored as they read sentences with different presentation conditions. The results show that inserting a space after a word facilitates its processing, but inserting a space before a word does not show this effect and in some cases inhibits the processing of that word. Our results are consistent with the prediction of a word segmentation and recognition model in Chinese (Li et al., 2009, Cognitive Psychology, 58, 525). Additionally, we found that a space guides the initial landing position on the word: the initial landing position was further away from the space that was inserted into the text, whether it was before or after a word. |
Tzu Chien Liu; Melissa Hui Mei Fan; Fred Paas In: Computers & Education, vol. 70, pp. 9–20, 2014. @article{Liu2014b, Recent research has shown that students involved in computer-based second language learning prefer to use a digital dictionary in which a word can be looked up by clicking on it with a mouse (i.e., click-on dictionary) to a digital dictionary in which a word can be looked up by typing it on a keyboard (i.e., key-in dictionary). This study investigated whether digital dictionary format also differentially affects students' incidental acquisition of spelling knowledge and cognitive load during second language learning. A comparison between a click-on dictionary condition, a key-in dictionary condition, and a non-dictionary control condition for 45 Taiwanese students learning English as a foreign language revealed that learners who used a key-in dictionary invested more time in dictionary consultation than learners who used a click-on dictionary. However, on a subsequent unexpected spelling test the key-in group invested less time and performed better than the click-on group. The theoretical and practical implications of the results are discussed. |
Simon P. Liversedge; Chuanli Zang; Manman Zhang; Xuejun Bai; Guoli Yan; Denis Drieghe The effect of visual complexity and word frequency on eye movements during Chinese reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 441–457, 2014. @article{Liversedge2014, Eye movements of native Chinese readers were monitored when they read sentences containing single-character target words orthogonally manipulated for frequency and visual complexity (number of strokes). Both factors yielded strong main effects on skipping probability but no interaction, with readers skipping visually simple and high frequency words more often. However, an interaction between frequency and complexity was observed on the fixation times on the target words, with longer fixations for the low frequency, visually complex words. The results demonstrate that visual complexity and frequency have independent influences on saccadic targeting behaviour during Chinese reading but jointly influence fixation durations, indicating that these two factors impact saccade targeting and fixation durations differently. |
Shih-Yu Lo; Alex O. Holcombe How do we select multiple features? Transient costs for selecting two colors rather than one, persistent costs for color-location conjunctions Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 2, pp. 304–321, 2014. @article{Lo2014, In a previous study (Lo, Howard, & Holcombe, Vision Research 63:20-33, 2012), selecting two colors did not induce a performance cost, relative to selecting one color. For example, requiring possible report of both a green and a red target did not yield a worse performance than when both targets were green. Yet a cost of selecting multiple colors was observed when selection needed to be contingent on both color and location. When selecting a red target to the left and a green target to the right, superimposing a green distractor to the left and a red distractor to the right impeded performance. Possibly, participants cannot confine attention to a color at a particular location. As a result, distractors that share the target colors disrupt attentional selection of the targets. The attempt to select the targets must then be repeated, which increases the likelihood that the trial terminates when selection is not effective, even for long trials. Consistent with this, here we find a persistent cost of selecting two colors when the conjunction of color and location is needed, but the cost is confined to short exposure durations when the observer just has to monitor red and green stimuli without the need to use the location information. These results suggest that selecting two colors is time-consuming but effective, whereas selection of simultaneous conjunctions is never entirely successful. |
Cai S. Longman; Aureliu Lavric; Cristian Munteanu; Stephen Monsell Attentional inertia and delayed orienting of spatial attention in task-switching Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1580–1602, 2014. @article{Longman2014, Among the potential, but neglected, sources of task-switch costs is the need to reallocate attention to different attributes or objects. Even theorists who recognize the importance of attentional resetting in task-switching sometimes think it too efficient to result in significant behavioral costs. We examined the dynamics of spatial attention in a task-cuing paradigm using eye-tracking. Digits appeared simultaneously at 3 locations. A cue preceded this display by a variable interval, instructing the performance of 1 of 3 classification tasks (odd-even, low-high, inner-outer) each consistently associated with a location, so that task preparation could be tracked via fixation of the task-relevant location. Task-switching led to a delay in selecting the relevant location and a tendency to misallocate attention; the previously relevant location attracted attention much more than the other irrelevant location on switch trials, indicating "inertia" in attentional parameters rather than mere distractibility. These effects predicted reaction time switch costs within and over participants. The switch-induced delay was not confined to trials with slow/late orienting, but characteristic of most switch trials. The attentional pull of the previously relevant location was substantially reduced, but not eliminated, by extending the preparation interval to more than 1 sec, suggesting that attentional inertia contributes to the "residual" switch cost. 
A control condition, using identical displays but only 1 task, showed that these effects could not be attributed to the (small and transient) delays or inertia observed when the required orientation changed between trials in the absence of a task change. |
Lester C. Loschky; Ryan V. Ringer; Aaron P. Johnson; Adam M. Larson; Mark B. Neider; Arthur F. Kramer Blur detection is unaffected by cognitive load Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 522–547, 2014. @article{Loschky2014, Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection.
The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, blur detection in real-world scene images is apparently unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task. |
Matthew W. Lowder; Peter C. Gordon Effects of animacy and noun-phrase relatedness on the processing of complex sentences Journal Article In: Memory & Cognition, vol. 42, no. 5, pp. 794–805, 2014. @article{Lowder2014, Previous work has suggested that syntactically complex object-extracted relative clauses are easier to process when the head noun phrase (NP1) is inanimate and the embedded noun phrase (NP2) is animate, as compared with the reverse animacy configuration, with differences in processing difficulty beginning as early as NP2 (e.g., The article that the senator . . . vs. The senator that the article . . .). Two eye-tracking-while-reading experiments were conducted to better understand the source of this effect. Experiment 1 showed that having an inanimate NP1 facilitated processing even when NP2 was held constant. Experiment 2 manipulated both animacy of NP1 and the degree of semantic relatedness between the critical NPs. When NP1 and NP2 were paired arbitrarily, the early animacy effect emerged at NP2. When NP1 and NP2 were semantically related, this effect disappeared, with effects of NP1 animacy emerging in later processing stages for both the related and arbitrary conditions. The results indicate that differences in the animacy of NP1 influence early processing of complex sentences only when the critical NPs share no meaningful relationship. |
Steven J. Luck; Clara McClenon; Valerie M. Beck; Andrew Hollingworth; Carly J. Leonard; Britta Hahn; Benjamin M. Robinson; James M. Gold Hyperfocusing in schizophrenia: Evidence from interactions between working memory and eye movements Journal Article In: Journal of Abnormal Psychology, vol. 123, no. 4, pp. 783–795, 2014. @article{Luck2014, Recent research suggests that processing resources are focused more narrowly but more intensely in people with schizophrenia (PSZ) than in healthy control subjects (HCS), possibly reflecting local cortical circuit abnormalities. This hyperfocusing hypothesis leads to the counterintuitive prediction that, although PSZ cannot store as much information in working memory as HCS, the working memory representations that are present in PSZ may be more intense than those in HCS. To test this hypothesis, we used a task in which participants make a saccadic eye movement to a peripheral target and avoid a parafoveal nontarget while they are holding a color in working memory. Previous research with this task has shown that the parafoveal nontarget is more distracting when it matches the color being held in working memory. This effect should be enhanced in PSZ if their working memory representations are more intense. Consistent with this prediction, we found that the effect of a match between the distractor color and the memory color was larger in PSZ than in HCS. We also observed evidence that PSZ hyperfocused spatially on the region surrounding the fixation point. These results provide further evidence that some aspects of cognitive dysfunction in schizophrenia may be a result of a narrower and more intense focusing of processing resources. |
Casimir J. H. Ludwig; J. Rhys Davies; Miguel P. Eckstein Foveal analysis and peripheral selection during active visual sampling Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 2, pp. E291–E299, 2014. @article{Ludwig2014, Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. |
Arthur J. Lugtigheid; Laurie M. Wilcox; Robert S. Allison; Ian P. Howard Vergence eye movements are not essential for stereoscopic depth Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 281, pp. 1–7, 2014. @article{Lugtigheid2014, The brain receives disparate retinal input owing to the separation of the eyes, yet we usually perceive a single fused world. This is because of complex interactions between sensory and oculomotor processes that quickly act to reduce excessive retinal disparity. This implies a strong link between depth perception and fusion, but it is well established that stereoscopic depth percepts are also obtained from stimuli that produce double images. Surprisingly, the nature of depth percepts from such diplopic stimuli remains poorly understood. Specifically, despite long-standing debate it is unclear whether depth under diplopia is owing to the retinal disparity (directly), or whether the brain interprets signals from fusional vergence responses to large disparities (indirectly). Here, we addressed this question using stereoscopic afterimages, for which fusional vergence cannot provide retinal feedback about depth. We showed that observers could reliably recover depth sign and magnitude from diplopic afterimages. In addition, measuring vergence responses to large disparity stimuli revealed that the sign and magnitude of vergence responses are not systematically related to the target disparity, thus ruling out an indirect explanation of our results. Taken together, our research provides the first conclusive evidence that stereopsis is a direct process, even for diplopic targets. |
Katerina Lukasova; Jens Sommer; Mariana P. Nucci-Da-Silva; Gilson Vieira; Marius Blanke; Frank Bremmer; João R. Sato; Tilo Kircher; Edson Amaro Test-retest reliability of fMRI activation generated by different saccade tasks Journal Article In: Journal of Magnetic Resonance Imaging, vol. 40, no. 1, pp. 37–46, 2014. @article{Lukasova2014, PURPOSE: To assess the reproducibility of brain-activation and eye-movement patterns in a saccade paradigm when comparing subjects, tasks, and magnetic resonance (MR) systems. MATERIALS AND METHODS: Forty-five healthy adults at two different sites (n = 45) performed saccade tasks with varying levels of target predictability: predictable (PRED), position predictable (pPRED), time predictable (tPRED), and prosaccade (SAC). Eye-movement patterns were tested with a repeated-measures analysis of variance. Activation map reproducibility was estimated with the cluster overlap Jaccard index and signal variance coefficient of determination for within-subjects test-retest data, and for between-subjects data from the same and different sites. RESULTS: In all groups, latencies increased with decreasing target predictability: PRED < pPRED < tPRED < SAC (P < 0.001). Activation overlap was good to fair (>0.40) in all tasks in the within-subjects test-retest comparisons and poor (<0.40) in tPRED for different subjects. The overlap of the different tasks for within-groups data was higher (0.40-0.68) than for the between-groups data (0.30-0.50). Activation consistency was 60-85% in the same subjects, 50-79% in different subjects, and 50-80% in different sites. In SAC, the activation found in the same and in different subjects was more consistent than in other tasks (50-80%). CONCLUSION: The predictive saccade tasks produced evidence for brain-activation and eye-movement reproducibility. |
Steven G. Luke; Tim J. Smith; Joseph Schmidt; John M. Henderson Dissociating temporal inhibition of return and saccadic momentum across multiple eye-movement tasks Journal Article In: Journal of Vision, vol. 14, no. 14, pp. 1–12, 2014. @article{Luke2014, Saccade latencies are longer prior to an eye movement to a recently fixated location than to control locations, a phenomenon known as oculomotor inhibition of return (O-IOR). There are theoretical reasons to expect that O-IOR would vary in magnitude across different eye movement tasks, but previous studies have produced contradictory evidence. However, this may have been because previous studies have not dissociated O-IOR and a related phenomenon, saccadic momentum, which is a bias to repeat saccade programs that also influences saccade latencies. The present study dissociated the influence of O-IOR and saccadic momentum across three complex visual tasks: scene search, scene memorization, and scene aesthetic preference. O-IOR was of similar magnitude across all three tasks, while saccadic momentum was weaker in scene search. |
Gang Luo; Tyler W. Garaas; Marc Pomplun Salient stimulus attracts focus of peri-saccadic mislocalization Journal Article In: Vision Research, vol. 100, pp. 93–98, 2014. @article{Luo2014, Visual localization during saccadic eye movements is prone to error. Flashes shortly before and after the onset of saccades are usually perceived to shift towards the saccade target, creating a "compression" pattern. Typically, the saccade landing point coincides with a salient saccade target. We investigated whether the mislocalization focus follows the actual saccade landing point or a salient stimulus. Subjects made saccades to either a target or a memorized location without target. In some conditions, another salient marker was presented between the initial fixation and the saccade landing point. The experiments were conducted on both black and picture backgrounds. The results show that: (a) when a saccade target or a marker (spatially separated from the saccade landing point) was present, the compression pattern of mislocalization was significantly stronger than in conditions without them, for both black and picture background conditions, and (b) the mislocalization focus tended towards the salient stimulus regardless of whether it was the saccade target or the marker. Our results suggest that a salient stimulus presented in the scene may have an attracting effect and therefore contribute to the non-uniformity of saccadic mislocalization of a probing flash. |
Xingshan Li; Klinton Bicknell; Pingping Liu; Wei Wei; Keith Rayner Reading is fundamentally similar across disparate writing systems: A systematic characterization of how words and characters influence eye movements in Chinese reading Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 2, pp. 895–913, 2014. @article{Li2014, While much previous work on reading in languages with alphabetic scripts has suggested that reading is word-based, reading in Chinese has been argued to be less reliant on words. This is primarily because in the Chinese writing system words are not spatially segmented, and characters are themselves complex visual objects. Here, we present a systematic characterization of the effects of a wide range of word and character properties on eye movements in Chinese reading, using a set of mixed-effects regression models. The results reveal a rich pattern of effects of the properties of the current, previous, and next words on a range of reading measures, which is strikingly similar to the pattern of effects of word properties reported in spaced alphabetic languages. This finding provides evidence that reading shares a word-based core and may be fundamentally similar across languages with highly dissimilar scripts. We show that these findings are robust to the inclusion of character properties in the regression models and are equally reliable when dependent measures are defined in terms of characters rather than words, providing strong evidence that word properties have effects in Chinese reading above and beyond characters. This systematic characterization of the effects of word and character properties in Chinese advances our knowledge of the processes underlying reading and informs the future development of models of reading. More generally, however, this work suggests that differences in script may not alter the fundamental nature of reading. |
Chiuhsiang Joe Lin; Chi-Chan Chang; Yung-Hui Lee Evaluating camouflage design using eye movement data Journal Article In: Applied Ergonomics, vol. 45, no. 3, pp. 714–723, 2014. @article{Lin2014d, This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on display, first saccade amplitude to target, number of fixations on target, fixation duration on target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns could significantly affect eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of camouflage assessment. We hypothesized that the assessment could be made with regard to the differences in detectability and discriminability of the camouflage patterns. These could explain less efficient search behavior in eye movements. Overall, data obtained from eye movements can be used to significantly enhance the interpretation of the effects of different camouflage designs. |
Chiuhsiang Joe Lin; Chi-Chan Chang; Bor-Shong Liu Developing and evaluating a target-background similarity metric for camouflage detection Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e87310, 2014. @article{Lin2014e, BACKGROUND: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures, and it could potentially be a camouflage assessment tool. METHODOLOGY: In this study, we want to quantify the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. SIGNIFICANCE: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient result of the UIQI was higher than those of other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. |
Hai Lin; Joshua D. Rizak; Yuan-ye Ma; Shang-chuan Yang; Lin Chen; Xin-tian Hu Face recognition increases during saccade preparation Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e93112, 2014. @article{Lin2014, Face perception is integral to the human perception system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition. |
Hsin-Hui Lin; Shu-Fei Yang An eye movement study of attribute framing in online shopping Journal Article In: Journal of Marketing Analytics, vol. 2, no. 2, pp. 72–80, 2014. @article{Lin2014c, This study uses an eye-tracking method to explore the framing effect on observed eye movements and purchase intention in online shopping. The results show that negative framing induces more active eye movements. Function attributes and non-functionality attributes attract more eye movements and with higher intensity. And the scanpath on the areas of interest reveals a certain pattern. These findings have practical implications for e-sellers to improve communication with customers. |
John J. H. Lin; Sunny S. J. Lin Cognitive load for configuration comprehension in computer-supported geometry problem solving: An eye movement perspective Journal Article In: International Journal of Science and Mathematics Education, vol. 12, no. 3, pp. 605–627, 2014. @article{Lin2014a, The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations involving the number of informational elements, the number of element interactivities and the level of mental operations were assumed to account for the increasing difficulty. A sample of 311 9th grade students solved five geometry problems that required knowledge of similar triangles in a computer-supported environment. In the second experiment, 63 participants solved the same problems and eye movements were recorded. The results indicated that (1) the five problems differed in pass rate and in self-reported cognitive load; (2) because the successful solvers were very swift in pattern recognition and visual integration, their fixation did not clearly show valuable information; (3) more attention and more time (shown by the heat maps, dwell time and fixation counts) were given to read the more difficult configurations than to the intermediate or easier configurations; and (4) in addition to number of elements and element interactivities, the level of mental operations accounts for the major cognitive load sources of configuration comprehension. The results derived some implications for design principles of geometry diagrams in secondary school mathematics textbooks. |
John J. H. Lin; Sunny S. J. Lin Tracking eye movements when solving geometry problems with handwriting devices Journal Article In: Journal of Eye Movement Research, vol. 7, no. 1, pp. 1–15, 2014. @article{Lin2014b, The present study investigated the following issues: (1) whether differences are evident in the eye movement measures of successful and unsuccessful problem-solvers; (2) what is the relationship between perceived difficulty and eye movement measures; and (3) whether eye movements in various AOIs differ when solving problems. Sixty-three 11th grade students solved five geometry problems about the properties of similar triangles. A digital drawing tablet and sensitive pressure pen were used to record the responses. The results indicated that unsuccessful solvers tended to have more fixation counts, run counts, and longer dwell time on the problem area, whereas successful solvers focused more on the calculation area. In addition, fixation counts, dwell time, and run counts in the diagram area were positively correlated with the perceived difficulty, suggesting that understanding similar triangles may require translation or mental rotation. We argue that three eye movement measures (i.e., fixation counts, dwell time, and run counts) are appropriate for use in examining problem solving given that they differentiate successful from unsuccessful solvers and correlate with perceived difficulty. Furthermore, the eye-tracking technique provides objective measures of students' cognitive load for instructional designers. |
Angelika Lingnau; Thorsten Albrecht; Jens Schwarzbach; Dirk Vorberg Visual search without central vision - no single pseudofovea location is best Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–14, 2014. @article{Lingnau2014, We typically fixate targets such that they are projected onto the fovea for best spatial resolution. Macular degeneration patients often develop fixation strategies such that targets are projected to an intact eccentric part of the retina, called pseudofovea. A longstanding debate concerns which pseudofovea-location is optimal for non-foveal vision. We examined how pseudofovea position and eccentricity affect performance in visual search, when vision is restricted to an off-foveal retinal region by a gaze-contingent display that dynamically blurs the stimulus except within a small viewing window (forced field location). Trained normally sighted participants were more accurate when forced field location was congruent with the required scan path direction; this contradicts the view that a single pseudofovea location is generally best. Rather, performance depends on the congruence between pseudofovea location and scan path direction. |
Christina Liossi; Daniel E. Schoth; Hayward J. Godwin; Simon P. Liversedge Using eye movements to investigate selective attention in chronic daily headache Journal Article In: Pain, vol. 155, no. 3, pp. 503–510, 2014. @article{Liossi2014, Previous research has demonstrated that chronic pain is associated with biased processing of pain-related information. Most studies have examined this bias by measuring response latencies. The present study extended previous work by recording eye movement behaviour in individuals with chronic headache and in healthy controls while participants viewed a set of images (ie, facial expressions) from 4 emotion categories (pain, angry, happy, neutral). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze on the picture that was initially fixated, and the mean number of visits, and mean fixation duration per image category. The eye movement behaviour of the participants in the chronic headache group was characterised by a bias in initial shift of orienting to pain. There was no evidence of individuals with chronic headache visiting more often, or spending significantly more time viewing, pain images compared to other images. Both participant groups showed a significantly greater bias to maintain gaze longer on happy images, relative to pain, angry, and neutral images. Results are consistent with a pain-related bias that operates in the orienting of attention on pain-related stimuli, and suggest that chronic pain participants' attentional biases for pain-related information are evident even when other emotional stimuli are present. Pain-related information-processing biases appear to be a robust feature of chronic pain and may have an important role in the maintenance of the disorder. |
Tamar H. Gollan; Elizabeth R. Schotter; Joanne Gomez; Mayra Murillo; Keith Rayner Multiple levels of bilingual language control: Evidence from language intrusions in reading aloud Journal Article In: Psychological Science, vol. 25, no. 2, pp. 585–595, 2014. @article{Gollan2014, Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language. |
Julie D. Golomb; Colin N. Kupitz; Carina T. Thiemann The influence of object location on identity: A “spatial congruency bias” Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 6, pp. 2262–2278, 2014. @article{Golomb2014a, Objects can be characterized by a number of properties (e.g., shape, color, size, and location). How do our visual systems combine this information, and what allows us to recognize when 2 objects are the same? Previous work has pointed to a special role for location in the binding process, suggesting that location may be automatically encoded even when irrelevant to the task. Here we show that location is not only automatically attended but fundamentally bound to identity representations, influencing object perception in a far more profound way than simply speeding reaction times. Subjects viewed 2 sequentially presented novel objects and performed a same/different identity comparison. Object location was irrelevant to the identity task, but when the 2 objects shared the same location, subjects were more likely to judge them as the same identity. This “congruency bias” reflected an increase in both hits and false alarms when the objects shared the same location, indicating that subjects were unable to suppress the influence of object location—even when maladaptive to the task. Importantly, this bias was driven exclusively by location: Object location robustly and reliably biased identity judgments across 6 experimental scenarios, but the reverse was not true: Object identity did not exert any bias on location judgments. Furthermore, while location biased both shape and color judgments, neither shape nor color biased each other when irrelevant. The results suggest that location provides a unique, automatic, and insuppressible cue for object sameness. |
Julie D. Golomb; Zara E. L'Heureux; Nancy Kanwisher Feature-binding errors after eye movements and shifts of attention Journal Article In: Psychological Science, vol. 25, no. 5, pp. 1067–1078, 2014. @article{Golomb2014, When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade, one in a precued spatiotopic target location, and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades. |
Esther G. González; Linda Lillakas; Naomi Greenwald; Brenda L. Gallie; Martin J. Steinbach Unaffected smooth pursuit but impaired motion perception in monocularly enucleated observers Journal Article In: Vision Research, vol. 101, pp. 151–157, 2014. @article{Gonzalez2014, The objective of this paper was to study the characteristics of closed-loop smooth pursuit eye movements of 15 unilaterally eye enucleated individuals and 18 age-matched controls and to compare them to their performance in two tests of motion perception: relative motion and motion coherence. The relative motion test used a brief (150 ms) small stimulus with a continuously present fixation target to preclude pursuit eye movements. The duration of the motion coherence trials was 1 s, which allowed a brief pursuit of the stimuli. Smooth pursuit data were obtained with a step-ramp procedure. Controls were tested both monocularly and binocularly. The data showed worse performance by the enucleated observers in the relative motion task but no statistically significant differences in motion coherence between the two groups. On the other hand, the smooth pursuit gain of the enucleated participants was as good as that of controls, for whom we found no binocular advantage. The data show that enucleated observers do not exhibit deficits in the afferent or sensory pathways or in the efferent or motor pathways of the steady-state smooth pursuit system even though their visual processing of motion is impaired. |
Robert D. Gordon Saccade latency reveals episodic representation of object color Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 6, pp. 1765–1777, 2014. @article{Gordon2014, While previous studies suggest that identity, but not color, plays a role in episodic object representation, such studies have typically used tasks in which only identity is relevant, raising the possibility that the results reflect task demands, rather than the general principles that underlie object representation. In the present study, participants viewed a preview display containing one (Experiments 1 and 2) or two (Experiment 3) letters, then viewed a target display containing a single letter, in either the same or a different location. Participants executed an immediate saccade to fixate the target; saccade latency served as the dependent variable. In all experiments, saccade latencies were longer to fixate a target appearing in its previewed location, consistent with a bias to attend to new objects rather than to objects for which episodic representations are being maintained in visual working memory. The results of Experiment 3 further demonstrate, however, that changing target color eliminates these latency differences. The results suggest that color and identity are part of episodic representation even when not task relevant and that examining biases in saccade execution may be a useful approach to studying episodic representation. |
Andrei Gorea; Delphine Rider; Qing Yang A unified comparison of stimulus-driven, endogenous mandatory and 'free choice' saccades Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e88990, 2014. @article{Gorea2014, It has been claimed that saccades arising from the three saccade triggering modes (stimulus-driven, endogenous mandatory, and 'free choice') are driven by distinct mechanisms. We tested this claim by instructing observers to saccade from a white or black fixation disc to a same polarity (white or black) disc flashed for 100 or 200 ms presented either alone (Exo), or together with an opposite (Endo) or same (EndoFC) polarity disc (blocked and mixed sessions). Target(s) and distractor were presented at three inter-stimulus intervals (ISIs) relative to the fixation offset (ISI: -200, 0, +200 ms) and were displayed at random locations within a 4°-to-6° eccentricity range. The statistical analysis showed a global saccade triggering mode effect on saccade reaction times (SRTs) with Endo and EndoFC SRTs longer by about 27 ms than Exo-triggered ones but no effect for the Endo-EndoFC comparison. SRTs depended on both ISI (the "gap-effect") and target duration. Bimodal best fits of the SRT-distributions were found in 65% of cases with their count not different across the three triggering modes. Percentages of saccades in the 'fast' and 'slow' ranges of bimodal fits did not depend on the triggering modes either. Bimodality tests failed to assert a significant difference between these modes. An analysis of the timing of a putative inhibition by the distractor (Endo) or by the duplicated target (EndoFC) yielded no significant difference between Endo and EndoFC saccades but showed a significant shortening with ISI similar to the SRT shortening, suggesting that the distractor-target mutual inhibition is itself inhibited by 'fixation' neurons. 
While other experimental paradigms may well sustain claims of distinct mechanisms subtending the three saccade triggering modes, as here defined reflexive and voluntary saccades appear to differ primarily in the effectiveness with which inhibitory processes slow down the initial fast rise of the saccade triggering signal. |
Harriet Goschy; A. Isabel Koch; Hermann J. Müller; Michael Zehetleitner Early top-down control over saccadic target selection: Evidence from a systematic salience difference manipulation Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 2, pp. 367–382, 2014. @article{Goschy2014, Previous research on the contribution of top-down control to saccadic target selection has suggested that eye movements, especially short-latency saccades, are primarily salience driven. The present study was designed to systematically examine top-down influences as a function of time and of the relative salience difference between target and distractor. Observers performed a saccadic selection task, requiring them to make an eye movement to an orientation-defined target, while ignoring a color-defined distractor. The salience of the distractor was varied (five levels), permitting the percentage of target and distractor fixations to be analyzed as a function of the salience difference between the target and distractor. This analysis revealed the same pattern of results for both the overall and the short-latency saccades: When the target and distractor were of comparable salience, the vast majority of saccades went directly to the target; even distractors somewhat more salient than the target led to significantly fewer distractor, as compared with target, fixations. To quantify the amount of top-down control applied, we estimated the point of equal selection probability for the target and distractor. Analyses of these estimates revealed that, to be selected with equal probability to the target, a distractor had to have a considerably greater bottom-up salience than the target. This difference suggests a strong contribution of top-down control to saccadic target selection, even for the earliest saccades. |
Alina Graf; Richard A. Andersen Inferring eye position from populations of lateral intraparietal neurons Journal Article In: eLife, vol. 2014, no. 3, pp. 1–13, 2014. @article{Graf2014, Understanding how the brain computes eye position is essential to unraveling high- level visual functions such as eye movement planning, coordinate transformations and stability of spatial awareness. The lateral intraparietal area (LIP) is essential for this process. However, despite decades of research, its contribution to the eye position signal remains controversial. LIP neurons have recently been reported to inaccurately represent eye position during a saccadic eye movement, and to be too slow to support a role in high-level visual functions. We addressed this issue by predicting eye position and saccade direction from the responses of populations of LIP neurons. We found that both signals were accurately predicted before, during and after a saccade. Also, the dynamics of these signals support their contribution to visual functions. These findings provide a principled understanding of the coding of information in populations of neurons within an important node of the cortical network for visual-motor behaviors. |
Michael J. Gray; Annabelle Blangero; James P. Herman; Josh Wallman; Mark R. Harwood Adaptation of naturally paced saccades Journal Article In: Journal of Neurophysiology, vol. 111, no. 11, pp. 2343–2354, 2014. @article{Gray2014, In the natural environment, humans make saccades almost continuously. In many eye movement experiments, however, observers are required to fixate for unnaturally long periods of time. The resulting long and monotonous experimental sessions can become especially problematic when collecting data in a clinical setting, where time can be scarce and subjects easily fatigued. With this in mind, we tested whether the well-studied motor learning process of saccade adaptation could be induced with a dramatically shortened intertrial interval. Observers made saccades to targets that stepped left or right either ∼250 ms or ∼1,600 ms after the saccade landed. In experiment I, we tested baseline saccade parameters to four different target amplitudes (5°, 10°, 15°, and 20°) in the two timing settings. In experiments II and III, we adapted 10° saccades via 2° intrasaccadic steps either backwards or forwards, respectively. Seven subjects performed eight separate adaptation sessions (2 intertrial timings × 2 adaptation directions × 2 session trial lengths). Adaptation proceeded remarkably similarly in both timing conditions across the multiple sessions. In the faster-paced sessions, robust adaptation was achieved in under 2 min, demonstrating the efficacy of our approach to streamlining saccade adaptation experiments. Although saccade amplitudes were similar between conditions, the faster-paced condition unexpectedly resulted in significantly higher peak velocities in all subjects. This surprising finding demonstrates that the stereotyped "main sequence" relationship between saccade amplitude and peak velocity is not as fixed as originally thought. |
Jennifer L. Greenberg; Lillian Reuman; Andrea S. Hartmann; Irina Kasarskis; Sabine Wilhelm Visual hot spots: An eye tracking study of attention bias in body dysmorphic disorder Journal Article In: Journal of Psychiatric Research, vol. 57, no. 1, pp. 125–132, 2014. @article{Greenberg2014, Attentional biases have been implicated in the development and maintenance of BDD. In particular, a visual attention bias toward one's unattractive features and others' attractive features (negative bias) might underlie BDD symptoms. Healthy individuals typically pay more attention to others' unattractive and their own attractive features (positive bias). This study used eye tracking to examine visual attention in individuals with BDD relative to healthy controls (HC). We also explored the role of avoidance in attention bias. Participants with BDD and primary face/head concerns (n = 19) and HC (n = 20) completed computerized tasks and questionnaires. Eye movement data (i.e., fixations, dwell time) were recorded while participants viewed images of their own and a control face (selected for average attractiveness and neutral expression). Participants rated distress and perceived most and least attractive features of their own and another face. BDD participants demonstrated a negative mean total bias score compared to HC (fixation: p = 0.24; dwell: p = 0.08). Age (fixation: p = 0.006; dwell: p = 0.03) and gender (fixation: p = 0.03; dwell: p = 0.03) moderated the relationship. Avoidance was associated with a positive bias in BDD. Results suggest individuals with BDD overfocus on negative attributes, a potential factor in the disorder's etiology and maintenance. Conversely, HC had a more balanced focus on their traits. Elucidating the role of attention bias could help to identify risk and maintenance factors in BDD. |
Harold H. Greene; James M. Brown; Barry Dauphin When do you look where you look? A visual field asymmetry Journal Article In: Vision Research, vol. 102, pp. 33–40, 2014. @article{Greene2014, Pre-saccadic fixation durations associated with saccades directed in different directions were compared in three endogenous-attention oriented saccadic scanning tasks (i.e. visual search and scene viewing). Pre-saccadic fixation durations were consistently briefer before the execution of upward saccades, than downward saccades. Saccades also had a higher probability of being directed upwards than downwards. Pre-saccadic fixation durations were symmetric and longer for horizontally-directed saccades. The vertical visual field asymmetry in pre-saccadic fixation durations reflects an influence of factors not directly related to currently fixated elements. The ability to predict pre-saccadic fixation durations is important for computational modelling of real-time saccadic scanning, and the findings make a case for including directional constraints in computational modelling of when the eyes move. |
Jonas Everaert; Wouter Duyck; Ernst H. W. Koster Attention, interpretation, and memory biases in subclinical depression: A proof-of-principle test of the combined cognitive biases hypothesis Journal Article In: Emotion, vol. 14, no. 2, pp. 331–340, 2014. @article{Everaert2014, Emotional biases in attention, interpretation, and memory are viewed as important cognitive processes underlying symptoms of depression. To date, there is a limited understanding of the interplay among these processing biases. This study tested the dependence of memory on depression-related biases in attention and interpretation. Subclinically depressed and non-depressed participants completed a computerized version of the scrambled sentences test (measuring interpretation bias) while their eye movements were recorded (measuring attention bias). This task was followed by an incidental free recall test of previously constructed interpretations (measuring memory bias). Path analysis revealed a good fit for the model in which selective orienting of attention was associated with interpretation bias, which in turn was associated with a congruent bias in memory. Also, a good fit was observed for a path model in which biases in the maintenance of attention and interpretation were associated with memory bias. Both path models attained a superior fit compared to path models without the theorized functional relations among processing biases. These findings enhance understanding of how mechanisms of attention and interpretation regulate what is remembered. As such, they offer support for the combined cognitive biases hypothesis or the notion that emotionally biased cognitive processes are not isolated mechanisms but instead influence each other. Implications for theoretical models and emotion regulation across the spectrum of depressive symptoms are discussed. |
Ashley Farris-Trimble; Bob McMurray; Nicole Cigrand; J. Bruce Tomblin The process of spoken word recognition in the face of signal degradation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 308–327, 2014. @article{FarrisTrimble2014, Though much is known about how words are recognized, little research has focused on how a degraded signal affects the fine-grained temporal aspects of real-time word recognition. The perception of degraded speech was examined in two populations with the goal of describing the time course of word recognition and lexical competition. Thirty-three postlingually deafened cochlear implant (CI) users and 57 normal hearing (NH) adults (16 in a CI-simulation condition) participated in a visual world paradigm eye-tracking task in which their fixations to a set of phonologically related items were monitored as they heard one item being named. Each degraded-speech group was compared with a set of age-matched NH participants listening to unfiltered speech. CI users and the simulation group showed a delay in activation relative to the NH listeners, and there is weak evidence that the CI users showed differences in the degree of peak and late competitor activation. In general, though, the degraded-speech groups behaved statistically similarly with respect to activation levels. |
Alexandra Fayel; Sylvie Chokron; Céline Cavézian; Dorine Vergilino-Perez; Christelle Lemoine; Karine Doré-Mazars Characteristics of contralesional and ipsilesional saccades in hemianopic patients Journal Article In: Experimental Brain Research, vol. 232, no. 3, pp. 903–917, 2014. @article{Fayel2014, In order to further our understanding of action-blindsight, four hemianopic patients suffering from visual field loss contralateral to a unilateral occipital lesion were compared to six healthy controls during a double task of verbally reported target detection and saccadic responses toward the target. Three oculomotor tasks were used: a fixation task (i.e., without saccade) and two saccade tasks (eliciting reflexive and voluntary saccades, using step and overlap 600 ms paradigms, respectively), in separate sessions. The visual target was briefly presented at two different eccentricities (5° and 8°), in the right or left visual hemifield. Blank trials were interleaved with target trials, and signal detection theory was applied. Despite their hemifield defect, hemianopic patients retained the ability to direct a saccade toward their contralesional hemifield, whereas verbal detection reports were at chance level. However, saccade parameters (latency and amplitude) were altered by the defect. Saccades to the contralesional hemifield exhibited longer latencies and shorter amplitudes compared to those of the healthy group, whereas only the latencies of reflexive saccades to the ipsilesional hemifield were altered. Furthermore, healthy participants showed the expected latency difference between reflexive and voluntary saccades, with the latter longer than the former. This difference was not found in three out of four patients in either hemifield. Our results show action-blindsight for saccades, but also show that unilateral occipital lesions have effects on saccade generation in both visual hemifields. |
Tomer Fekete; Felix D. C. C. Beacher; Jiook Cha; Denis Rubin; Lilianne R. Mujica-Parodi Small-world network properties in prefrontal cortex correlate with predictors of psychopathology risk in young children: A NIRS study Journal Article In: NeuroImage, vol. 85, pp. 345–353, 2014. @article{Fekete2014, Near infrared spectroscopy (NIRS) is an emerging imaging technique that is relatively inexpensive, portable, and particularly well suited for collecting data in ecological settings. Therefore, it holds promise as a potential neurodiagnostic for young children. We set out to explore whether NIRS could be utilized in assessing the risk of developmental psychopathology in young children. A growing body of work indicates that temperament at young age is associated with vulnerability to psychopathology later on in life. In particular, it has been shown that low effortful control (EC), which includes the focusing and shifting of attention, inhibitory control, perceptual sensitivity, and a low threshold for pleasure, is linked to conditions such as anxiety, depression and attention deficit hyperactivity disorder (ADHD). Physiologically, EC has been linked to a control network spanning among other sites the prefrontal cortex. Several psychopathologies, such as depression and ADHD, have been shown to result in compromised small-world network properties. Therefore we set out to explore the relationship between EC and the small-world properties of PFC using NIRS. NIRS data were collected from 44 toddlers, ages 3-5, while watching naturalistic stimuli (movie clips). Derived complex network measures were then correlated to EC as derived from the Children's Behavior Questionnaire (CBQ). We found that reduced levels of EC were associated with compromised small-world properties of the prefrontal network. Our results suggest that the longitudinal NIRS studies of complex network properties in young children hold promise in furthering our understanding of developmental psychopathology. |
Joost Felius; Cynthia L. Beauchamp; David R. Stager Visual acuity deficits in children with nystagmus and Down syndrome Journal Article In: American Journal of Ophthalmology, vol. 157, no. 2, pp. 458–463, 2014. @article{Felius2014, Purpose: To investigate the association between visual acuity deficits and fixation instability in children with Down syndrome and nystagmus. Design: Prospective cross-sectional study. Methods: Setting: Institutional. Study population: Sixteen children (aged 10 months-14 years) with Down syndrome and nystagmus, and a control group of 93 age-similar children with unassociated infantile nystagmus. Observation procedures: Binocular Teller acuity card testing and eye-movement recordings. Fixation stability was quantified using the nystagmus optimal fixation function (NOFF). An exponential model based on results from the control group with unassociated infantile nystagmus was used to relate fixation stability to age-corrected visual acuity deficits. Main outcome measures: Binocular grating visual acuity and NOFF. Results: Visual acuity was 0.2-0.9 logMAR (20/30-20/174 Snellen equivalent) and corresponded to a 0.4 logMAR (4 lines) mean age-corrected visual acuity deficit. Fixation stability ranged from poor to mildly affected. Although the visual acuity deficit was on average 0.17 logMAR larger (P = .005) than predicted by the model, most children had visual acuity deficits within the 95% predictive interval. Conclusions: There was a small mean difference between the measured visual acuity deficit and the prediction of the nystagmus model. Although other factors also contribute to visual acuity loss in Down syndrome, nystagmus alone could account for most of the visual acuity deficit in these children. |
Galit Fuhrmann Alpert; Ran Manor; Assaf B. Spanier; Leon Y. Deouell; Amir B. Geva Spatiotemporal representations of rapid visual target detection: A single-trial EEG classification algorithm Journal Article In: IEEE Transactions on Biomedical Engineering, vol. 61, no. 8, pp. 2290–2303, 2014. @article{FuhrmannAlpert2014a, Brain computer interface applications, developed for both healthy and clinical populations, critically depend on decoding brain activity in single trials. The goal of the present study was to detect distinctive spatiotemporal brain patterns within a set of event-related responses. We introduce a novel classification algorithm, the spatially weighted FLD-PCA (SWFP), which is based on a two-step linear classification of event-related responses, using a Fisher linear discriminant (FLD) classifier and principal component analysis (PCA) for dimensionality reduction. As a benchmark algorithm, we consider the hierarchical discriminant component analysis (HDCA), introduced by Parra et al. (2007). We also consider a modified version of the HDCA, namely the hierarchical discriminant principal component analysis algorithm (HDPCA). We compare single-trial classification accuracies of all three algorithms, each applied to detect target images within a rapid serial visual presentation (RSVP, 10 Hz) of images from five different object categories, based on single-trial brain responses. We find a systematic superiority of our classification algorithm in the tested paradigm. Additionally, HDPCA significantly increases classification accuracies compared to the HDCA. Finally, we show that presenting several repetitions of the same image exemplars improves accuracy, and thus may be important in cases where high accuracy is crucial. |
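The two-step linear scheme this abstract describes, dimensionality reduction by PCA followed by a Fisher linear discriminant, can be illustrated with a minimal sketch. This is not the authors' SWFP algorithm (which additionally applies spatial weighting); it uses synthetic data, hypothetical trial dimensions, and scikit-learn's `LinearDiscriminantAnalysis` as a stand-in for the FLD step.

```python
# Sketch of a two-step single-trial classifier: PCA for dimensionality
# reduction, then a Fisher/linear discriminant. Synthetic "EEG" data;
# all sizes and parameters below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 32, 50

# Each trial is a flattened channels x time epoch.
X = rng.normal(size=(n_trials, n_channels * n_samples))
y = rng.integers(0, 2, size=n_trials)  # 0 = non-target, 1 = target

# Give "target" trials a crude evoked deflection on a subset of features.
X[y == 1, :200] += 1.0

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

On data like this, where target trials carry a consistent added deflection, the pipeline classifies single trials well above chance; the PCA step keeps the discriminant tractable when each trial has far more features than there are training examples.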
Benjamin Gagl; Stefan Hawelka; Florian Hutzler A similar correction mechanism in slow and fluent readers after suboptimal landing positions Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 355, 2014. @article{Gagl2014, The present eye movements study investigated the optimal viewing position (OVP) and inverted-optimal viewing position (I-OVP) effects in slow readers. The basis of these effects is a phenomenon called corrective re-fixations, which describes a short saccade from a suboptimal landing position (word beginning or end) to the center of the word. The present study found corrective re-fixations in slow readers, which was evident from the I-OVP effects in first fixation durations, the OVP effect in number of fixations and the OVP effect in re-fixation probability. The main result is that slow readers, despite being characterized by a fragmented eye movement pattern during reading, nevertheless share an intact mechanism for performing corrective re-fixations. This correction mechanism is not linked to linguistic processing, but to visual and oculomotor processes, which suggests the integrity of oculomotor and visual processes in slow readers. |
Benjamin Gagl; Stefan Hawelka; Fabio Richlan; Sarah Schuster; Florian Hutzler Parafoveal preprocessing in reading revisited: Evidence from a novel preview manipulation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 2, pp. 588–595, 2014. @article{Gagl2014a, The study investigated parafoveal preprocessing by the means of the classical invisible boundary paradigm and a novel manipulation of the parafoveal previews (i.e., visual degradation). Eye movements were investigated on 5-letter target words with constraining (i.e., highly informative) initial letters or similarly constraining final letters. Visual degradation was administered to all, no, the initial, or the final 2 letters of the parafoveal preview of the target words. Critically, the manipulation of the parafoveal previews did not interfere with foveal processing. Thus, we had a proper baseline to which we could relate our main findings, which were as follows: First, the valid (i.e., nondegraded) preview of the target words' final letters led to shorter fixation times compared to the baseline condition (i.e., the degradation of all letters). Second, this preview benefit for the final letters was comparable to the benefit of previewing the initial letters. Third, the preview of a constraining initial letter sequence, however, yielded a larger preview benefit than the preview of an unconstraining initial letter sequence. The latter finding indicates that preprocessing constraining initial letters is particularly conducive to foveal word recognition. |
Lesya Y. Ganushchak; Agnieszka E. Konopka; Yiya Chen What the eyes say about planning of focused referents during sentence formulation: A cross-linguistic investigation Journal Article In: Frontiers in Psychology, vol. 5, pp. 1124, 2014. @article{Ganushchak2014, This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard "What is happening here?" In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences. |
Xiao Gao; Xiao Deng; Jia Yang; Shuang Liang; Jie Liu; Hong Chen Eyes on the bodies: An eye tracking study on deployment of visual attention among females with body dissatisfaction Journal Article In: Eating Behaviors, vol. 15, no. 4, pp. 540–549, 2014. @article{Gao2014, Visual attentional bias plays an important role during appearance-related social comparisons. However, owing to the limitations of the experimental paradigms or analysis methods used in previous studies, the time course of attentional bias to thin and fat body images among women with body dissatisfaction (BD) has remained unclear. Using a free viewing task combined with eye movement tracking, and based on event-related analyses of the critical first eye movement events as well as epoch-related analyses of gaze durations, the current study investigated different attentional bias components to body shape/part images during a 15 s presentation time among 34 high-BD and 34 non-BD young women. In comparison to the controls, women with BD showed sustained maintenance biases on thin and fat body images during both early automatic and late strategic processing stages. This study highlights a clear need for research on the dynamics of attentional biases related to body image and eating disturbances. |
Hans Peter Frey; Anita M. Schmid; Jeremy W. Murphy; Sophie Molholm; Edmund C. Lalor; John J. Foxe Modulation of early cortical processing during divided attention to non-contiguous locations Journal Article In: European Journal of Neuroscience, vol. 39, no. 9, pp. 1499–1507, 2014. @article{Frey2014, We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. |
Moshe Fried; Eteri Tsitsiashvili; Yoram S. Bonneh; Anna Sterkin; Tamara Wygnanski-Jaffe; Tamir Epstein; Uri Polat ADHD subjects fail to suppress eye blinks and microsaccades while anticipating visual stimuli but recover with medication Journal Article In: Vision Research, vol. 101, pp. 62–72, 2014. @article{Fried2014, Oculomotor behavior and parameters are known to be affected by the allocation of attention and could potentially be used to investigate attention disorders. We explored the oculomotor markers of Attention-deficit/hyperactivity disorder (ADHD) that are involuntary and quantitative and that could be used to reveal the core-affected mechanisms, as well as be used for differential diagnosis. We recorded eye movements in a group of 22 ADHD-diagnosed patients with and without medication (methylphenidate) and in 22 control observers while performing the test of variables of attention (t.o.v.a.). We found that the average microsaccade and blink rates were higher in the ADHD group, especially in the time interval around stimulus onset. These rates increased monotonically over session time for both groups, but with significantly faster increments in the unmedicated ADHD group. With medication, the level and time course of the microsaccade rate were fully normalized to the control level, regardless of the time interval within trials. In contrast, the pupil diameter decreased over time within sessions and significantly increased above the control level with medication. We interpreted the suppression of microsaccades and eye blinks around the stimulus onset as reflecting a temporal anticipation mechanism for the transient allocation of attention, and their overall rates as inversely reflecting the level of arousal. We suggest that ADHD subjects fail to maintain sufficient levels of arousal during a simple and prolonged task, which limits their ability to dynamically allocate attention while anticipating visual stimuli. 
This impairment normalizes with medication and its oculomotor quantification could potentially be used for differential diagnosis. |