All EyeLink Eye Tracker Publications
All 14,000+ peer-reviewed EyeLink research publications up until 2025 (including some from early 2026) are listed below by year. You can search the publication library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2016 |
Hamed Zivari Adab; Rufin Vogels Perturbation of posterior inferior temporal cortical activity impairs coarse orientation discrimination Journal Article In: Cerebral Cortex, vol. 26, no. 9, pp. 3814–3827, 2016. It is reasonable to assume that the discrimination of simple visual stimuli depends on the activity of early visual cortical neurons, because simple visual features are supposedly coded in these areas whereas more complex features are coded in late visual areas. Recently, we showed that training monkeys in a coarse orientation discrimination task modified the response properties of single neurons in the posterior inferior temporal (PIT) cortex, a late visual area. Here, we examined the contribution of PIT to coarse orientation discrimination using causal perturbation methods. Electrical stimulation (ES) of PIT with currents of at least 100 µA impaired coarse orientation discrimination in monkeys. The performance deterioration did not exclusively reflect a general impairment to perform a difficult perceptual task. However, high-current (650 µA) but not low-current (100 µA) ES also impaired fine color discrimination. ES of temporal regions dorsal or anterior to PIT produced less impairment of coarse orientation discrimination than ES of PIT. Injections of the GABA agonist muscimol into PIT also impaired performance. These data suggest that the late cortical area PIT is part of the network that supports coarse orientation discrimination of a simple grating stimulus, at least after extensive training in this task at threshold. |
Krista A. Ehinger; Ruth Rosenholtz A general account of peripheral encoding also predicts scene perception performance Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–19, 2016. People are good at rapidly extracting the "gist" of a scene at a glance, meaning with a single fixation. It is generally presumed that this performance cannot be mediated by the same encoding that underlies tasks such as visual search, for which researchers have suggested that selective attention may be necessary to bind features from multiple preattentively computed feature maps. This has led to the suggestion that scenes might be special, perhaps utilizing an unlimited capacity channel, perhaps due to brain regions dedicated to this processing. Here we test whether a single encoding might instead underlie all of these tasks. In our study, participants performed various navigation-relevant scene perception tasks while fixating photographs of outdoor scenes. Participants answered questions about scene category, spatial layout, geographic location, or the presence of objects. We then asked whether an encoding model previously shown to predict performance in crowded object recognition and visual search might also underlie the performance on those tasks. We show that this model does a reasonably good job of predicting performance on these scene tasks, suggesting that scene tasks may not be so special; they may rely on the same underlying encoding as search and crowded object recognition. We also demonstrate that a number of alternative "models" of the information available in the periphery also do a reasonable job of predicting performance at the scene tasks, suggesting that scene tasks alone may not be ideal for distinguishing between models. |
Alessio Fracasso; David Melcher Saccades influence the visibility of targets in rapid stimulus sequences: The Roles of mislocalization, retinal distance and remapping Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 58, 2016. Briefly presented targets around the time of a saccade are mislocalized towards the saccadic landing point. This has been taken as evidence for a remapping mechanism that accompanies each eye movement, helping maintain visual stability across large retinal shifts. Previous studies have shown that spatial mislocalization is greatly diminished when trains of brief stimuli are presented at a high frequency rate, which might help to explain why mislocalization is rarely perceived in everyday viewing. Studies in the laboratory have shown that mislocalization can reduce metacontrast masking by causing target stimuli in a masking sequence to be perceived as shifted in space towards the saccadic target and thus more easily discriminated. We investigated the influence of saccades on target discrimination when target and masks were presented in a rapid serial visual presentation (RSVP), as well as with forward masking and with backward masking. In a series of experiments, we found that performance was influenced by the retinal displacement caused by the saccade itself but that an additional component of un-masking occurred even when the retinal location of target and mask was matched. These results speak in favor of a remapping mechanism that begins before the eyes start moving and continues well beyond saccadic termination. |
Hamidreza Namazi; Vladimir V. Kulish; Amin Akrami The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal Journal Article In: Scientific Reports, vol. 6, pp. 26639, 2016. One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has been yet discovered between the structure of the visual stimulus, and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex' visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of nervous system that is engaged in eye movements, we analyzed the governed Electroencephalogram (EEG) signal during fixation. We have found out that there is a coupling between fractality of image, EEG and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders. |
Tania S. Zamuner; Elizabeth Morin-Lessard; Stephanie Strahm; Michael P. A. Page Spoken word recognition of novel words, either produced or only heard during learning Journal Article In: Journal of Memory and Language, vol. 89, pp. 55–67, 2016. Psycholinguistic models of spoken word production differ in how they conceptualize the relationship between lexical, phonological and output representations, making different predictions for the role of production in language acquisition and language processing. This work examines the impact of production on spoken word recognition of newly learned non-words. In Experiment 1, adults were trained on non-words with visual referents; during training, they produced half of the non-words, with the other half being heard-only. Using a visual world paradigm at test, eye tracking results indicated faster recognition of non-words that were produced compared with heard-only during training. In Experiment 2, non-words were correctly pronounced or mispronounced at test. Participants showed a different pattern of recognition for mispronunciation on non-words that were produced compared with heard-only during training. Together these results indicate that production affects the representations of newly learned words. |
Dan J. Graham; Christina A. Roberto In: Health Education and Behavior, vol. 43, no. 4, pp. 389–398, 2016. Background. The U.S. Food and Drug Administration (FDA) has proposed modifying the Nutrition Facts Label (NFL) on food packages to increase consumer attention to this resource and to promote healthier dietary choices. Aims. The present study sought to determine whether the proposed NFL changes will affect consumer attention to the NFL or purchase intentions. Method. This study compared purchase intentions (yes/no responses to "would you purchase this food?" for 64 products) and attention to NFLs (measured via high-speed eye-tracking camera) among 155 young adults randomly assigned to view products with existing versus modified NFLs. Attention to all individual components of the NFL (e.g., calories, fats, sugars) were analyzed separately to assess the impact of each proposed NFL modification on attention to that region. Data were collected in 2014; analysis was conducted in 2015. Results. Modified NFLs did not elicit significantly more visual attention or lead to more healthful purchase intentions than did existing NFLs. Relocating the percent daily value component from the right side of the NFL to the left side, as proposed by the FDA, actually reduced participants' attention to this information. The proposed "added sugars" component was viewed on at least one label by a majority (58%) of participants. Discussion. Results suggest that the proposed NFL changes may not achieve FDA's goals. Changes to nutrition labeling may need to take a different form to meaningfully influence dietary behavior. Conclusion. Young adults' visual attention and purchase intentions do not appear to be meaningfully affected by the proposed NFL modifications. |
P. J. López-Peréz; J. Dampuré; J. A. Hernández-Cabrera; H. A. Barber Semantic parafoveal-on-foveal effects and preview benefits in reading: Evidence from fixation related potentials Journal Article In: Brain and Language, vol. 162, pp. 29–34, 2016. During reading parafoveal information can affect the processing of the word currently fixated (parafovea-on-fovea effect) and words perceived parafoveally can facilitate their subsequent processing when they are fixated on (preview effect). We investigated parafoveal processing by simultaneously recording eye movements and EEG measures. Participants read word pairs that could be semantically associated or not. Additionally, the boundary paradigm allowed us to carry out the same manipulation on parafoveal previews that were displayed until reader's gaze moved to the target words. Event Related Potentials time-locked to the prime-preview presentation showed a parafoveal-on-foveal N400 effect. Fixation Related Potentials time locked to the saccade offset showed an N400 effect related to the prime-target relationship. Furthermore, this later effect interacted with the semantic manipulation of the previews, supporting a semantic preview benefit. These results demonstrate that at least under optimal conditions foveal and parafoveal information can be simultaneously processed and integrated. |
Douglas A. Ruff; Marlene R. Cohen Stimulus dependence of correlated variability across cortical areas Journal Article In: Journal of Neuroscience, vol. 36, no. 28, pp. 7546–7556, 2016. The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. |
Douglas A. Ruff; Marlene R. Cohen Attention increases spike count correlations between visual cortical areas Journal Article In: Journal of Neuroscience, vol. 36, no. 28, pp. 7523–7534, 2016. Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. |
Sandra Utz; Claus-Christian Carbon Is the Thatcher illusion modulated by face familiarity? Evidence from an eye tracking study Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0163933, 2016. Thompson (1980) first detected and described the Thatcher Illusion, where participants instantly perceive an upright face with inverted eyes and mouth as grotesque, but fail to do so when the same face is inverted. One prominent but controversial explanation is that the processing of configural information is disrupted in inverted faces. Studies investigating the Thatcher Illusion either used famous faces or non-famous faces. Highly familiar faces were often thought to be processed in a pronounced configural mode, so they seem ideal candidates to be tested in one Thatcher study against unfamiliar faces, but this has never been addressed so far. In our study, participants evaluated 16 famous and 16 non-famous faces for their grotesqueness. We tested whether familiarity (famous/non-famous faces) modulates reaction times, correctness of grotesqueness assessments (accuracy), and eye movement patterns for the factors orientation (upright/inverted) and Thatcherisation (Thatcherised/non-Thatcherised). On a behavioural level, familiarity effects were only observable via face inversion (higher accuracy and sensitivity for famous compared to non-famous faces) but not via Thatcherisation. Regarding eye movements, however, Thatcherisation influenced the scanning of famous and non-famous faces, for instance, in scanning the mouth region of the presented faces (higher number, duration and dwell time of fixations for famous compared to non-famous faces if Thatcherised). Altogether, famous faces seem to be processed in a more elaborate, more expertise-based way than non-famous faces, whereas non-famous, inverted faces seem to cause difficulties in accurate and sensitive processing. Results are further discussed in the face of existing studies of familiar vs. unfamiliar face processing. |
Charles Clifton; Lyn Frazier Accommodation to an unlikely episodic state Journal Article In: Journal of Memory and Language, vol. 86, pp. 20–34, 2016. Mini-discourses like (ia) seem slightly odd compared to their counterparts containing a conjunction (ib). (i) a. Speaker A: John or Bill left. Speaker B: Sam did too. b. Speaker A: John and Bill left. Speaker B: Sam did too. One possibility is that or in Speaker A's utterance in (ia) raises the potential Question Under Discussion (QUD) whether it was John or Bill who left and Speaker B's reply fails to address this QUD. A different possibility is that the epistemic state of the speaker of (ia) is somewhat unlikely or uneven: the speaker knows that someone left, and that it was John or Bill, but doesn't know which one. The results of four acceptability judgment studies confirmed that (ia) is less good or coherent than (ib) (Experiment 1), but not due to failure to address the QUD implicitly introduced by the disjunction because the penalty for disjunction persisted even in the presence of a different overt QUD (Experiment 2) and even when there was no reply to Speaker A (Experiment 3). The hypothesis that accommodating an unusual epistemic state might underlie the lower acceptability of disjunction was supported by the fact that the disjunction penalty is larger in past tense discourses than in future discourses, where partial knowledge of events is the norm (Experiment 4). The results of an eye tracking study revealed a penalty for disjunction relative to conjunction that was significantly smaller when a lead-in (I wonder if it was…) explicitly introduced the disjunction. This interaction (connective × lead-in) appeared in early measures on the disjunctive phrase itself, suggesting that the input is related to an inferred epistemic state of the speaker in a rapid and ongoing fashion. |
Deborah A. Cronin; James R. Brockmole Evaluating the influence of a fixated object's spatio-temporal properties on gaze control Journal Article In: Attention, Perception, & Psychophysics, vol. 78, no. 4, pp. 996–1003, 2016. Despite recent progress in understanding the factors that determine where an observer will eventually look in a scene, we know very little about what determines how an observer decides where he or she will look next. We investigated the potential roles of object-level representations in the direction of subsequent shifts of gaze. In five experiments, we considered whether a fixated object's spatial orientation, implied motion, and perceived animacy affect gaze direction when shifting overt attention to another object. Eye movements directed away from a fixated object were biased in the direction it faced. This effect was not modified by implying a particular direction of inanimate or animate motion. Together, these results suggest that decisions regarding where one should look next are in part determined by the spatial, but not by the implied temporal, properties of the object at the current locus of fixation. |
Andrew Isaac Meso; Anna Montagnini; Jason Bell; Guillaume S. Masson Looking for symmetry: Fixational eye movements are biased by image mirror symmetry Journal Article In: Journal of Neurophysiology, vol. 116, pp. 1250–1260, 2016. Humans are highly sensitive to symmetry. During scene exploration, the area of the retina with dense light receptor coverage acquires most information from relevant locations determined by gaze fixation. We characterised patterns of fixational eye movements made by observers staring at synthetic scenes either freely (i.e. free exploration) or during a symmetry orientation discrimination task (i.e. active exploration). Stimuli could be mirror-symmetric or not. Both free and active exploration generated more saccades parallel to the axis of symmetry than along other orientations. Most saccades were small (<2deg) leaving the fovea within a 4-degree radius of fixation. The analysis of saccade dynamics showed that the observed parallel orientation selectivity emerged within 500ms of stimulus onset and persisted throughout the trials under both viewing conditions. Symmetry strongly distorted existing anisotropies in gaze direction in a seemingly automatic process. We argue that this bias serves a functional role in which adjusted scene sampling enhances and maintains sustained sensitivity to local spatial correlations arising from symmetry. |
Manuel Perea; Lourdes Giner; Ana Marcet; Pablo Gomez Does extra interletter spacing help text reading in skilled adult readers? Journal Article In: Spanish Journal of Psychology, vol. 19, pp. 1–7, 2016. A number of experiments have shown that, in skilled adult readers, a small increase in interletter spacing speeds up the process of visual word recognition relative to the default settings (i.e., j u d g e faster than judge). The goal of the present experiment was to examine whether this effect can be generalized to a more ecological scenario: text reading. Each participant read two stories (367 words each) taken from a standardized reading test. The stories were presented with the standard interletter spacing or with a small increase in interletter spacing (+1.2 points to default) in a within-subject design. An eyetracker was used to register the participants' eye movements. Comprehension scores were also examined. Results showed that, on average, fixation durations were shorter while reading the text with extra spacing than while reading the text with the default settings (237 vs. 245 ms, respectively; η2 = .41). |
Outi Veivo; Juhani Järvikivi; Vincent Porretta; Jukka Hyönä Orthographic activation in L2 spoken word recognition depends on proficiency: Evidence from eye-tracking Journal Article In: Frontiers in Psychology, vol. 7, pp. 1120, 2016. The use of orthographic and phonological information in spoken word recognition was studied in a visual world task where L1 Finnish learners of L2 French (n = 64) and L1 French native speakers (n = 24) were asked to match spoken word forms with printed words while their eye movements were recorded. In Experiment 1, French target words were contrasted with competitors having a longer (<base> vs. <bague>) or a shorter word initial phonological overlap (<base> vs. <bain>) and an identical orthographic overlap. In Experiment 2, target words were contrasted with competitors of either longer (<mince> vs. <mite>) or shorter word initial orthographic overlap (<mince> vs. <mythe>) and of an identical phonological overlap. A general phonological effect was observed in the L2 listener group but not in the L1 control group. No general orthographic effects were observed in the L2 or L1 groups, but a significant effect of proficiency was observed for orthographic overlap over time: higher proficiency L2 listeners used also orthographic information in the matching task in a time-window from 400 to 700 ms, whereas no such effect was observed for lower proficiency listeners. These results suggest that the activation of orthographic information in L2 spoken word recognition depends on proficiency in L2. |
Nadine Matton; Pierre-Vincent Paubel; Julien Cegarra; Éric Raufaste Differences in multitask resource reallocation after change in task values Journal Article In: Human Factors, vol. 58, no. 8, pp. 1128–1142, 2016. Objective: The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values. Background: When solving a resource conflict in multitasking, Salvucci and Taatgen predict a globally rational strategy will be followed that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, leading to detrimental consequences on global performance. Moreover, the question remains open whether expertise would have an impact on the choice of the strategy. Method: We adopted a multitask environment used for pilot selection with a change in emphasis on two out of four subtasks while all subtasks had to be maintained over a minimum performance. A laboratory eye-tracking study contrasted 20 recently selected pilot students considered as experienced with this task and 15 university students considered as novices. Results: When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks falling below minimum performance. On the contrary, experienced people delayed the processing of one low-value subtask but managed to optimize global performance. Conclusion: In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy. Application: During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment. |
Andrew Isaac Meso; James Rankin; Olivier Faugeras; Pierre Kornprobst; Guillaume S. Masson The relative contribution of noise and adaptation to competition during tri-stable motion perception Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–24, 2016. Animals exploit antagonistic interactions for sensory processing and these can cause oscillations between competing states. Ambiguous sensory inputs yield such perceptual multistability. Despite numerous empirical studies using binocular rivalry or plaid pattern motion, the driving mechanisms behind the spontaneous transitions between alternatives remain unclear. In the current work, we used a tristable barber pole motion stimulus combining empirical and modeling approaches to elucidate the contributions of noise and adaptation to underlying competition. We first robustly characterized the coupling between perceptual reports of transitions and continuously recorded eye direction, identifying a critical window of 480 ms before button presses, within which both measures were most strongly correlated. Second, we identified a novel nonmonotonic relationship between stimulus contrast and average perceptual switching rate with an initially rising rate before a gentle reduction at higher contrasts. A neural fields model of the underlying dynamics introduced in previous theoretical work and incorporating noise and adaptation mechanisms was adapted, extended, and empirically validated. Noise and adaptation contributions were confirmed to dominate at the lower and higher contrasts, respectively. Model simulations, with two free parameters controlling adaptation dynamics and direction thresholds, captured the measured mean transition rates for participants. We verified the shift from noise-dominated toward adaptation-driven in both the eye direction distributions and intertransition duration statistics. 
This work combines modeling and empirical evidence to demonstrate the signal-strength-dependent interplay between noise and adaptation during tristability. We propose that the findings generalize beyond the barber pole stimulus case to ambiguous perception in continuous feature spaces. |
Marcus Nyström; Dan Witzner Hansen; Richard Andersson; Ignace T. C. Hooge Why have microsaccades become larger? Investigating eye deformations and detection algorithms Journal Article In: Vision Research, vol. 118, pp. 17–24, 2016. The reported size of microsaccades is considerably larger today compared to the initial era of microsaccade studies during the 1950s and 1960s. We investigate whether this increase in size is related to the fact that the eye-trackers of today measure different ocular structures than the older techniques, and that the movements of these structures may differ during a microsaccade. In addition, we explore the impact such differences have on subsequent analyses of the eye-tracker signals. In Experiment I, the movement of the pupil as well as the first and fourth Purkinje reflections were extracted from series of eye images recorded during a fixation task. Results show that the different ocular structures produce different microsaccade signatures. In Experiment II, we found that microsaccade amplitudes computed with a common detection algorithm were larger compared to those reported by two human experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason why the reported size of microsaccades has increased is the larger overshoots produced by the modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades. |
Jinger Pan; Jochen Laubrock; Ming Yan In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 8, pp. 1257–1273, 2016. We examined how reading mode (i.e., silent vs. oral reading) influences parafoveal semantic and phonological processing during the reading of Chinese sentences, using the gaze-contingent boundary paradigm. In silent reading, we found in 2 experiments that reading times on target words were shortened with semantic previews in early and late processing, whereas phonological preview effects mainly occurred in gaze duration or second-pass reading. In contrast, results showed that phonological preview information is obtained early on in oral reading. Strikingly, in oral reading, we observed a semantic preview cost on the target word in Experiment 1 and a decrease in the effect size of preview benefit from first- to second-pass measures in Experiment 2, which we hypothesize to result from increased preview duration. Taken together, our results indicate that parafoveal semantic information can be obtained irrespective of reading mode, whereas readers more efficiently process parafoveal phonological information in oral reading. We discuss implications for notions of information processing priority and saccade generation during silent and oral reading. |
Gerulf Rieger; Ritch C. Savin-Williams; Meredith L. Chivers; J. Michael Bailey Sexual arousal and masculinity-femininity of women Journal Article In: Journal of Personality and Social Psychology, vol. 111, no. 2, pp. 265–283, 2016. Studies with volunteers in sexual arousal experiments suggest that women are, on average, physiologically sexually aroused to both male and female sexual stimuli. Lesbians are the exception because they tend to be more aroused to their preferred sex than the other sex, a pattern typically seen in men. A separate research line suggests that lesbians are, on average, more masculine than straight women in their nonsexual behaviors and characteristics. Hence, a common influence could affect the expression of male-typical sexual and nonsexual traits in some women. By integrating these research programs, we tested the hypothesis that male-typical sexual arousal of lesbians relates to their nonsexual masculinity. Moreover, the most masculine-behaving lesbians, in particular, could show the most male-typical sexual responses. Across combined data, Study 1 examined these patterns in women's genital arousal and self-reports of masculine and feminine behaviors. Study 2 examined these patterns with another measure of sexual arousal, pupil dilation to sexual stimuli, and with observer-rated masculinity-femininity in addition to self-reported masculinity-femininity. Although both studies confirmed that lesbians were more male-typical in their sexual arousal and nonsexual characteristics, on average, there were no indications that these 2 patterns were in any way connected. Thus, women's sexual responses and nonsexual traits might be masculinized by independent factors. |
Nicolas Ruffieux; Meike Ramon; Junpeng Lao; Françoise Colombo; Lisa Stacchi; François-Xavier Borruat; Ettore Accolla; Jean-Marie Annoni; Roberto Caldara Residual perception of biological motion in cortical blindness Journal Article In: Neuropsychologia, vol. 93, pp. 301–311, 2016. @article{nmjflf16,From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion provided top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. 
Finally, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. |
Virginia Clinton; Kinga Morsanyi; Martha W. Alibali; Mitchell J. Nathan Learning about probability from text and tables: Do color coding and labeling through an interactive-user interface help? Journal Article In: Applied Cognitive Psychology, vol. 30, no. 3, pp. 440–453, 2016. @article{Clinton2016,Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly assigned to learn about probabilistic reasoning from one of 4 computer-based lessons generated from a 2 (color coding/no color coding) by 2 (labeling/no labeling) between-subjects design. Learners added the labels or color coding at their own pace by clicking buttons in a computer-based lesson. Participants' eye movements were recorded while viewing the lesson. Labeling was beneficial for learning, but color coding was not. In addition, labeling, but not color coding, increased attention to important information in the table and time with the lesson. Both labeling and color coding increased looks between the text and corresponding information in the table. The findings provide support for the multimedia principle, and they suggest that providing labeling enhances learning about probabilistic reasoning from text and tables. |
E. Hainque; E. Apartis; P. M. Daye Switching between two targets with non-constant velocity profiles reveals shared internal model of target motion Journal Article In: European Journal of Neuroscience, vol. 44, no. 8, pp. 2622–2634, 2016. @article{Hainque2016,Several experiments have shown that smooth pursuit and saccades interact while tracking an object moving across the visual scene. It was proposed two decades ago that the amplitude of saccades triggered during smooth pursuit ("catch-up saccades") was corrected by a delayed sensory signal to account for the ongoing target displacement during catch-up saccades. However, recent studies used targets with non-constant velocity profiles and suggested that the correction of catch-up saccade amplitude must be done through an internal model of target motion. It is widely accepted that an internal model of target motion is also used by the central nervous system to cancel inherent delays between visual input and smooth pursuit motor output, ensuring accurate tracking of moving targets. Our study proposes a new paradigm in which the target switches unexpectedly from one target with a non-constant periodic velocity profile to another with a non-constant aperiodic velocity profile. Our results confirm the hypothesis that the central nervous system uses an internal model of target motion to correct catch-up saccade amplitude. In addition, we reconcile the sensory delayed and the internal model of target motion hypotheses and show that a common internal model of target motion is shared within the central nervous system to control smooth pursuit and to correct catch-up saccade amplitude. |
Tobias Heed; Jenny Backhaus; Brigitte Röder; Stephanie Badde Disentangling the external reference frames relevant to tactile localization Journal Article In: PLoS ONE, vol. 11, no. 7, pp. e0158829, 2016. @article{Heed2016,Different reference frames appear to be relevant for tactile spatial coding. When participants give temporal order judgments (TOJ) of two tactile stimuli, one on each hand, performance declines when the hands are crossed. This effect is attributed to a conflict between anatomical and external location codes: hand crossing places the anatomically right hand into the left side of external space. However, hand crossing alone does not specify the anchor of the external reference frame, such as gaze, trunk, or the stimulated limb. Experiments that used explicit localization responses, such as pointing to tactile stimuli rather than crossing manipulations, have consistently implicated gaze-centered coding for touch. To test whether crossing effects can be explained by gaze-centered coding alone, participants made TOJ while the position of the hands was manipulated relative to gaze and trunk. The two hands either lay on different sides of space relative to gaze or trunk, or they both lay on one side of the respective space. In the latter posture, one hand was on its "regular side of space" despite hand crossing, thus reducing overall conflict between anatomical and external codes. TOJ crossing effects were significantly reduced when the hands were both located on the same side of space relative to gaze, indicating gaze-centered coding. Evidence for trunk-centered coding was tentative, with an effect in reaction time but not in accuracy. These results link paradigms that use explicit localization and TOJ, and corroborate the relevance of gaze-related coding for touch. 
Yet, gaze and trunk-centered coding did not account for the total size of crossing effects, suggesting that tactile localization relies on additional, possibly limb-centered, reference frames. Thus, tactile location appears to be estimated by integrating multiple anatomical and external reference frames. |
Andreas Brocher; Stephani Foraker; Jean Pierre Koenig Processing of irregular polysemes in sentence reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 11, pp. 1798–1813, 2016. @article{Brocher2016a,The degree to which meanings are related in memory affects ambiguous word processing. We examined irregular polysemes, which have related senses based on similar or shared features rather than a relational rule, like regular polysemy. We tested to what degree the related meanings of irregular polysemes ("wire") are represented with shared semantic information versus unshared information represented separately, like homonyms ("bank"). Monitoring eye fixations, we found that later context supporting the less frequent meaning of an irregular polyseme did not slow down reading compared with control conditions, whereas for homonyms it did. This indicates that in the absence of preceding biasing context, readers access a shared component of an irregular polyseme's representation. Additionally, when the same context words preceded the ambiguous word, both irregular polysemes and homonyms initially elicited longer reading times, but the observed reading slow-down was weaker and less persistent for irregular polysemes than homonyms, indicating less competition between meaning components. We interpret these results as evidence of a shared features representation for irregular polysemes, which additionally incorporates unshared portions of meaning that can compete. When preceding, biasing context is available, readers activate shared and unshared components of the senses, producing a more fully instantiated meaning. |
Jessica Heeman; Tanja C. W. Nijboer; Nathan Van der Stoep; Jan Theeuwes; Stefan Van der Stigchel Oculomotor interference of bimodal distractors Journal Article In: Vision Research, vol. 123, pp. 46–55, 2016. @article{Heeman2016,When executing an eye movement to a target location, the presence of an irrelevant distracting stimulus can influence the saccade metrics and latency. The present study investigated the influence of distractors of different sensory modalities (i.e. auditory, visual and audiovisual) which were presented at various distances (i.e. close or remote) from a visual target. The interfering effects of a bimodal distractor were more pronounced in the spatial domain than in the temporal domain. The results indicate that the direction of interference depended on the spatial layout of the visual scene. The close bimodal distractor caused the saccade endpoint and saccade trajectory to deviate towards the distractor whereas the remote bimodal distractor caused a deviation away from the distractor. Furthermore, saccade averaging and trajectory deviation evoked by a bimodal distractor was larger compared to the effects evoked by a unimodal distractor. This indicates that a bimodal distractor evoked stronger spatial oculomotor competition compared to a unimodal distractor and that the direction of the interference depended on the distance between the target and the distractor. Together, these findings suggest that the oculomotor vector to irrelevant bimodal input is enhanced and that the interference by multisensory input is stronger compared to unisensory input. |
Marko Nardini; Jennifer Bales; Denis Mareschal Integration of audio-visual information for spatial decisions in children and adults Journal Article In: Developmental Science, vol. 19, no. 5, pp. 803–816, 2016. @article{Nardini2016,In adults, decisions based on multisensory information can be faster and/or more accurate than those relying on a single sense. However, this finding varies significantly across development. Here we studied speeded responding to audio-visual targets, a key multisensory function whose development remains unclear. We found that when judging the locations of targets, children aged 4 to 12 years and adults had faster and less variable response times given auditory and visual information together compared with either alone. Comparison of response time distributions with model predictions indicated that children at all ages were integrating (pooling) sensory information to make decisions but that both the overall speed and the efficiency of sensory integration improved with age. The evidence for pooling comes from comparison with the predictions of Miller's seminal ‘race model', as well as with a major recent extension of this model and a comparable ‘pooling' (coactivation) model. The findings and analyses can reconcile results from previous audio-visual studies, in which infants showed speed gains exceeding race model predictions in a spatial orienting task (Neil et al., 2006) but children below 7 years did not in speeded reaction time tasks (e.g. Barutchu et al., 2009). Our results provide new evidence for early and sustained abilities to integrate visual and auditory signals for spatial localization from a young age. |
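The Nardini et al. entry above tests response-time distributions against Miller's race-model inequality: at every time t, the audiovisual CDF may not exceed the sum of the two unimodal CDFs unless the senses are pooled. A minimal numerical sketch of that test (hypothetical reaction-time samples and function name, not the study's data or code):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Test Miller's race-model inequality on reaction-time samples.

    The inequality P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) must hold
    for any parallel race of the two unimodal channels; time points where
    the observed audiovisual CDF exceeds this bound are violations, usually
    read as evidence for pooling (coactivation) of the two senses.
    """
    def cdf(rts, t):
        # Empirical CDF of the sample `rts` evaluated at each time in `t`
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    observed = cdf(rt_av, t_grid)
    return bound, observed, observed > bound
```

For example, audiovisual responses that are uniformly faster than either unimodal sample produce violations at early time points, which is the pattern taken as support for a pooling (coactivation) account over the race model.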
Andreas Brocher; Tim Graf Pupil old/new effects reflect stimulus encoding and decoding in short-term memory Journal Article In: Psychophysiology, vol. 53, no. 12, pp. 1823–1835, 2016. @article{Brocher2016,We conducted five pupil old/new experiments to examine whether pupil old/new effects can be linked to familiarity and/or recollection processes of recognition memory. In Experiments 1–3, we elicited robust pupil old/new effects for legal words and pseudowords (Experiment 1), positive and negative words (Experiment 2), and low-frequency and high-frequency words (Experiment 3). Importantly, unlike for old/new effects in ERPs, we failed to find any effects of long-term memory representations on pupil old/new effects. In Experiment 4, using the words and pseudowords from Experiment 1, participants made lexical decisions instead of old/new decisions. Pupil old/new effects were restricted to legal words. Additionally requiring participants to make speeded responses (Experiment 5) led to a complete absence of old/new effects. Taken together, these data suggest that pupil old/new effects do not map onto familiarity and recollection processes of recognition memory. They rather seem to reflect strength of memory traces in short-term memory, with little influence of long-term memory representations. Crucially, weakening the memory trace through manipulations in the experimental task significantly reduces pupil old/new effects. |
Huihui Zhou; Robert John Schafer; Robert Desimone Pulvinar-cortex interactions in vision and attention Journal Article In: Neuron, vol. 89, no. 1, pp. 209–220, 2016. @article{Zhou2016c,The ventro-lateral pulvinar is reciprocally connected with the visual areas of the ventral stream that are important for object recognition. To understand the mechanisms of attentive stimulus processing in this pulvinar-cortex loop, we investigated the interactions between the pulvinar, area V4, and IT cortex in a spatial-attention task. Sensory processing and the influence of attention in the pulvinar appeared to reflect its cortical inputs. However, pulvinar deactivation led to a reduction of attentional effects on firing rates and gamma synchrony in V4, a reduction of sensory-evoked responses and overall gamma coherence within V4, and severe behavioral deficits in the affected portion of the visual field. Conversely, pulvinar deactivation caused an increase in low-frequency cortical oscillations, often associated with inattention or sleep. Thus, cortical interactions with the ventro-lateral pulvinar are necessary for normal attention and sensory processing and for maintaining the cortex in an active state. |
Indu P. Bodala; Junhua Li; Nitish V. Thakor; Hasan Al-Nashash EEG and eye tracking demonstrate vigilance enhancement with challenge integration Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 273, 2016. @article{Bodala2016,Maintaining vigilance is possibly the first requirement for surveillance tasks where personnel are faced with monotonous yet intensive monitoring tasks. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life and system failure. In this paper, we investigate the possibility to enhance vigilance or sustained attention using ‘challenge integration', a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and frontal theta to parietal alpha power ratio, which are used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression on the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity and blink rate obtained from eye tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). 
From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement along with frontal midline theta and frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when challenge stimulus appears in the task suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge integrated phase (RT mean = 3.65 ± 1.4 secs) compared to initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 secs). Our work shows that vigilance level, as assessed by response of these vital signs, is enhanced by challenge integration. |
Floor Groot; Falk Huettig; Christian N. L. Olivers When meaning matters: The temporal dynamics of semantic influences on visual attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 2, pp. 180–196, 2016. @article{Groot2016,An important question is, to what extent is visual attention driven by the semantics of individual objects, rather than by their visual appearance? This study investigates the hypothesis that timing is a crucial factor in the occurrence and strength of semantic influences on visual orienting. To assess the dynamics of such influences, the authors presented the target instruction either before or after visual stimulus onset, while eye movements were continuously recorded throughout the search. The results show a substantial but delayed bias in orienting toward semantically related objects compared with visually related objects when target instruction is presented before visual stimulus onset. However, this delay can be completely undone by presenting the visual information before the target instruction (Experiment 1). Moreover, the absence or presence of visual competition does not change the temporal dynamics of the semantic bias (Experiment 2). Visual orienting is thus driven by priority settings that dynamically shift between visual and semantic representations, with each of these types of bias operating largely independently. The findings bridge the divide between the visual attention and the psycholinguistic literature. |
Hsin-I Liao; Shunsuke Kidani; Makoto Yoneya; Makio Kashino; Shigeto Furukawa Correspondences among pupillary dilation response, subjective salience of sounds, and loudness Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 412–425, 2016. @article{Liao2016,A pupillary dilation response is known to be evoked by salient deviant or contrast auditory stimuli, but so far a direct link between it and subjective salience has been lacking. In two experiments, participants listened to various environmental sounds while their pupillary responses were recorded. In separate sessions, participants performed subjective pairwise-comparison tasks on the sounds with respect to their salience, loudness, vigorousness, preference, beauty, annoyance, and hardness. The pairwise-comparison data were converted to ratings on the Thurstone scale. The results showed a close link between subjective judgments of salience and loudness. The pupil dilated in response to the sound presentations, regardless of sound type. Most importantly, this pupillary dilation response to an auditory stimulus positively correlated with the subjective salience, as well as the loudness, of the sounds (Exp. 1). When the loudnesses of the sounds were identical, the pupil responses to each sound were similar and were not correlated with the subjective judgments of salience or loudness (Exp. 2). This finding was further confirmed by analyses based on individual stimulus pairs and participants. In Experiment 3, when salience and loudness were manipulated by systematically changing the sound pressure level and acoustic characteristics, the pupillary dilation response reflected the changes in both manipulated factors. A regression analysis showed a nearly perfect linear correlation between the pupillary dilation response and loudness. The overall results suggest that the pupillary dilation response reflects the subjective salience of sounds, which is defined, or is heavily influenced, by loudness. |
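The Liao et al. entry above converts pairwise-comparison judgments to ratings on the Thurstone scale. A minimal sketch of that conversion under Thurstone's Case V (equal, unit comparison variances; toy win counts and function name are ours, not the study's data or code):

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(win_matrix):
    """Thurstone Case V scaling from a pairwise-comparison win-count matrix.

    win_matrix[i][j] = number of times item i was preferred over item j.
    Each choice proportion is mapped to a z-score via the inverse normal
    CDF; an item's scale value is its mean z-score across comparisons.
    """
    w = np.asarray(win_matrix, dtype=float)
    n = w + w.T                                   # total comparisons per pair
    p = np.where(n > 0, w / np.maximum(n, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                    # keep z-scores finite
    z = np.vectorize(NormalDist().inv_cdf)(p)
    np.fill_diagonal(z, 0.0)                      # item vs. itself carries no information
    return z.mean(axis=1)                         # one scale value per item
```

Only differences between the resulting scale values are meaningful; an item preferred more often in its comparisons receives a higher value on the interval scale.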
Maria Matziridi; Eli Brenner; Jeroen B. J. Smeets Moving your head reduces perisaccadic compression Journal Article In: Journal of Vision, vol. 16, no. 13, pp. 1–8, 2016. @article{Matziridi2016,Flashes presented around the time of a saccade appear to be closer to the saccade endpoint than they really are. The resulting compression of perceived positions has been found to increase with the amplitude of the saccade. In most studies on perisaccadic compression the head is static, so the eye-in-head movement is equal to the change in gaze. What if moving the head causes part of the change in gaze? Does decreasing the eye-in-head rotation by moving the head decrease the compression of perceived positions? To find out, we asked participants to shift their gaze between two positions, either without moving their head or with the head contributing to the change in gaze. Around the time of the saccades we flashed bars that participants had to localize. When the head contributed to the change in gaze, the duration of the saccade was shorter and compression was reduced. We interpret this reduction in compression as being caused by a reduction in uncertainty about gaze position at the time of the flash. We conclude that moving one's head can reduce the systematic mislocalization of flashes presented around the time of saccades. |
Jérôme Tagu; Karine Doré-Mazars; Christelle Lemoine-Lardennois; Dorine Vergilino-Perez How eye dominance strength modulates the influence of a distractor on saccade accuracy Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 2, pp. 534–543, 2016. @article{Tagu2016,PURPOSE. Neuroimaging studies have shown that the dominant eye is linked preferentially to the ipsilateral primary visual cortex. However, its role in perception remains poorly understood. We examined the influence of eye dominance and eye dominance strength on saccadic parameters, contrasting stimulations presented in the two hemifields. METHODS. Participants with contrasted eye dominance (left or right) and eye dominance strength (strong or weak) were asked to make a saccade toward a target displayed at 5° or 7° left or right of a fixation cross. In some trials, a distractor at 3° of eccentricity was also displayed either in the same hemifield as the target (to induce a global effect on saccade amplitude) or in the opposite hemifield (to induce a remote distractor effect on saccade latency). RESULTS. Eye dominance did influence saccade amplitude, as participants with strong eye dominance showed more accurate saccades toward the target (weaker global effect) in the hemifield contralateral to the dominant eye than in the ipsilateral one. Such asymmetry was not found in participants with weak eye dominance or when a remote distractor was used. CONCLUSIONS. We show that eye dominance strength influences saccade target selection. We discuss several arguments supporting the view that such advantage may be linked to the relationship between the dominant eye and ipsilateral hemisphere. |
Gustav Kuhn; Robert Teszka; Natalia Tenaw; Alan Kingstone In: Cognition, vol. 146, pp. 136–142, 2016. @article{Kuhn2016a,People's attention is oriented towards faces, but the extent to which these social attention effects are under top-down control is more ambiguous. Our first aim was to measure and compare, in real life and in the lab, people's top-down control over overt and covert shifts in reflexive social attention to the face of another. We employed a magic trick in which the magician used social cues (i.e. asking a question whilst establishing eye contact) to misdirect attention towards his face, thus preventing participants from noticing a visible colour change to a playing card. Our results show that overall people spend more time looking at the magician's face when he is seen on video than in reality. Additionally, although most participants looked at the magician's face when misdirected, this tendency to look at the face was modulated by instruction (i.e., "keep your attention on the cards"), and therefore, by top-down control. Moreover, while the card's colour change was fully visible, the majority of participants failed to notice the change, and critically, change detection (our measure of covert attention) was not affected by where people looked (overt attention). We conclude that there is a tendency to shift overt and covert attention reflexively to faces, but that people exert more top-down control over this overt shift in attention. These findings are discussed within a new framework that focuses on the role of eye movements as an attentional process as well as a form of non-verbal communication. |
Richard J. Wiseman; Tamami Nakano Blink and you'll miss it: the role of blinking in the perception of magic tricks Journal Article In: PeerJ, vol. 4, pp. 1–9, 2016. @article{Wiseman2016,Magicians use several techniques to deceive their audiences, including, for example, the misdirection of attention and verbal suggestion. We explored another potential stratagem, namely the relaxation of attention. Participants watched a video of a highly skilled magician whilst having their eye-blinks recorded. The timing of spontaneous eye-blinks was highly synchronized across participants. In addition, the synchronized blinks frequently occurred immediately after a seemingly impossible feat, and often coincided with actions that the magician wanted to conceal from the audience. Given that blinking is associated with the relaxation of attention, these findings suggest that blinking plays an important role in the perception of magic, and that magicians may utilize blinking and the relaxation of attention to hide certain secret actions. |
Jifan Zhou; Chia-Lin Lee; Kuei-An Li; Yung-Hsuan Tien; Su-Ling Yeh Does temporal integration occur for unrecognizable words in visual crowding? Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0149355, 2016. @article{Zhou2016d,Visual crowding - the inability to see an object when it is surrounded by flankers in the periphery - does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration - the simplest kind of temporal semantic integration - did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner Role of motor execution in the ocular tracking of self-generated movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 6, pp. 2586–2593, 2016. @article{Chen2016c,When human observers track the movements of their own hand with their gaze, the eyes can start moving before the finger (i.e., anticipatory smooth pursuit). The signals driving anticipation could come from motor commands during finger motor execution or from motor intention and decision processes associated with self-initiated movements. For the present study, we built a mechanical device that could move a visual target either in the same direction as the participant's hand or in the opposite direction. Gaze pursuit of the target showed stronger anticipation if it moved in the same direction as the hand compared with the opposite direction, as evidenced by decreased pursuit latency, increased positional lead of the eye relative to target, increased pursuit gain, decreased saccade rate, and decreased delay at the movement reversal. Some degree of anticipation occurred for incongruent pursuit, indicating that there is a role for higher-level movement prediction in pursuit anticipation. The fact that anticipation was larger when target and finger moved in the same direction provides evidence for a direct coupling between finger and eye motor commands. |
Floor Groot; Falk Huettig; Christian N. L. Olivers Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search Journal Article In: Visual Cognition, vol. 24, no. 3, pp. 226–245, 2016. @article{Groot2016a,When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than looking at unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present. |
Hsin-I Liao; Makoto Yoneya; Shunsuke Kidani; Makio Kashino; Shigeto Furukawa Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention Journal Article In: Frontiers in Neuroscience, vol. 10, pp. 43, 2016. @article{Liao2016a,A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. |
Luis Morales; Daniela Paolieri; Paola E. Dussias; Jorge R. Valdés Kroff; Chip Gerfen; María Teresa Bajo The gender congruency effect during bilingual spoken-word recognition Journal Article In: Bilingualism: Language and Cognition, vol. 19, no. 2, pp. 294–310, 2016. @article{Morales2016,We investigate the 'gender-congruency' effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian-Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / 'find the scarf') and clicked on the object named in the instruction. Grammatical gender of the objects' name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. |
Aaron Veldre; Sally Andrews Semantic preview benefit in English: Individual differences in the extraction and use of parafoveal semantic information Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 6, pp. 837–854, 2016. @article{Veldre2016a,Although there is robust evidence that skilled readers of English extract and use orthographic and phonological information from the parafovea to facilitate word identification, semantic preview benefits have been elusive. We sought to establish whether individual differences in the extraction and/or use of parafoveal semantic information could account for this discrepancy. Ninety-nine adult readers who were assessed on measures of reading and spelling ability read sentences while their eye movements were recorded. The gaze-contingent boundary paradigm was used to manipulate the availability of relevant semantic and orthographic information in the parafovea. On average, readers showed a benefit from previews high in semantic feature overlap with the target. However, reading and spelling ability yielded opposite effects on semantic preview benefit. High reading ability was associated with a semantic preview benefit that was equivalent to an identical preview on first-pass reading. High spelling ability was associated with a reduced semantic preview benefit despite an overall higher rate of skipping. These results suggest that differences in the magnitude of semantic preview benefits in English reflect constraints on extracting semantic information from the parafovea and competition between the orthographic features of the preview and the target. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner LRP predicts smooth pursuit eye movement onset during the ocular tracking of self-generated movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 1, pp. 18–29, 2016. @article{Chen2016d,Several studies indicated that human observers are very efficient at tracking self-generated hand movements with their gaze, yet it is not clear whether this is simply a byproduct of the predictability of self-generated actions or if it results from a deeper coupling of the somatomotor and oculomotor systems. In a first behavioral experiment we compared pursuit performance as observers either followed their own finger or tracked a dot whose motion was externally generated but mimicked their finger motion. We found that even when the dot motion was completely predictable both in terms of onset time and in terms of kinematics, pursuit was not identical to the one produced as the observers tracked their finger, as evidenced by increased rate of catch-up saccades and by the fact that in the initial phase of the movement gaze was lagging behind the dot, whereas it was ahead of the finger. In a second experiment we recorded EEG in the attempt to find a direct link between the finger motor preparation, indexed by the lateralized readiness potential (LRP), and the latency of smooth pursuit. After taking into account finger movement onset variability, we observed larger LRP amplitudes associated with earlier smooth pursuit onset across trials. The same held across subjects, where average LRP onset correlated with average eye latency. The evidence from both experiments concurs to indicate that a strong coupling exists between the motor systems leading to eye and finger movements and that simple top-down predictive signals are unlikely to support optimal coordination. |
Franziska Geringswald; Eleonora Porracin; Stefan Pollmann Impairment of visual memory for objects in natural scenes by simulated central scotomata Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–12, 2016. @article{Geringswald2016,Because of the close link between foveal vision and the spatial deployment of attention, typically only objects that have been foveated during scene exploration may form detailed and persistent memory representations. In a recent study on patients suffering from age-related macular degeneration, however, we found surprisingly accurate visual long-term memory for objects in scenes. Close examination of their exploration patterns revealed that the patients had learned to rereference saccade targets to an extrafoveal retinal location. This rereferencing may allow use of an extrafoveal location as a focus of attention for efficient object encoding into long-term memory. Here, we tested this hypothesis in normal-sighted observers with gaze-contingent central scotoma simulations. As these observers were inexperienced in scene exploration with central vision loss and had not developed saccadic rereferencing, we expected deficits in long-term memory for objects. We used the same change detection task as in our patient study, probing sensitivity to object changes after a period of free scene exploration. Change detection performance was significantly reduced for two types of scotoma simulation diminishing foveal and parafoveal vision—a visible gray disc and a more subtle image warping—compared with unimpaired controls, confirming our hypothesis. The impact of a smaller scotoma covering specifically foveal vision was less distinct, leading to a marginally significant decrease of long-term memory performance compared with controls. We conclude that attentive encoding of objects is deficient when central vision is lost as long as successful saccadic rereferencing has not yet developed. |
Lynn Huestegge; Anne Böckler Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016. @article{Huestegge2016,Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards. |
Makoto Kobayashi Delayed saccade to perceptually demanding locations in Parkinson's disease: Analysis from the perspective of the speed–accuracy trade-off Journal Article In: Neurological Sciences, vol. 37, no. 11, pp. 1841–1848, 2016. @article{Kobayashi2016,Parkinson's disease (PD) patients reportedly have shortened, normal, or prolonged latency of visually guided saccades (VGSs). This inconsistency seems to be partly derived from differences in experimental conditions, such as target eccentricity and direction. Another etiology may be a physiological saccade property, the speed-accuracy trade-off. VGS latency tends to increase along with its gain in certain conditions; however, this relationship has not been addressed in PD saccade studies. In this study, we measured VGS latency and gain in 47 PD patients and 48 normal controls (NCs). VGS was evoked by a target, which was presented at the central position initially and pseudo-randomly jumped to the horizontal (10° or 20° eccentricity) or vertical (10° or 15°) meridian. For each target location, the logarithm of the latency (log-latency) was modeled with subject type (PD or NC), age, and gain in the linear-mixed regression analysis. Subsequently, for target locations where PD patients showed an abnormality, the log-latency was similarly modeled with additional clinical variables measured by the mini-mental state examination (MMSE) and unified Parkinson's disease rating scale Part III. PD saccade latency was prolonged and influenced by the MMSE score when targets were presented at the 20° horizontal and upper vertical meridians. Furthermore, gain was a consistently significant variable in all models. The target locations of the delayed saccade corresponded to perceptually demanding locations, indicating that PD subclinical visual dysfunction prolonged the latency. The influence of the MMSE score supports this reasoning. Moreover, the speed-accuracy trade-off appeared to contribute to the accurate saccade analysis. |
Chuanli Zang; Yongsheng Wang; Xuejun Bai; Guoli Yan; Denis Drieghe; Simon P. Liversedge The use of probabilistic lexicality cues for word segmentation in Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 3, pp. 548–560, 2016. @article{Zang2016,In an eye-tracking experiment we examined whether Chinese readers were sensitive to information concerning how often a Chinese character appears as a single-character word versus the first character in a two-character word, and whether readers use this information to segment words and adjust the amount of parafoveal processing of subsequent characters during reading. Participants read sentences containing a two-character target word with its first character more or less likely to be a single-character word. The boundary paradigm was used. The boundary appeared between the first character and the second character of the target word, and we manipulated whether readers saw an identity or a pseudocharacter preview of the second character of the target. Linear mixed-effects models revealed reduced preview benefit from the second character when the first character was more likely to be a single-character word. This suggests that Chinese readers use probabilistic combinatorial information about the likelihood of a Chinese character being a single-character word or the first character of a two-character word online to modulate the extent of parafoveal processing. |
Wolfgang Einhäuser; Antje Nuthmann Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–17, 2016. @article{Einhaeuser2016,During natural scene viewing, humans typically attend and fixate selected locations for about 200–400 ms. Two variables characterize such ‘‘overt'' attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations. |
Ioannis Rigas; Evgeniy Abdulin; Oleg V. Komogortsev Towards a multi-source fusion approach for eye movement-driven recognition Journal Article In: Information Fusion, vol. 32, pp. 13–25, 2016. @article{Rigas2016,This paper presents research on the use of multi-source information fusion in the field of eye movement biometrics. In the current state-of-the-art, there are different techniques developed to extract the physical and the behavioral biometric characteristics of the eye movements. In this work, we explore the effects from the multi-source fusion of the heterogeneous information extracted by different biometric algorithms under the presence of diverse visual stimuli. We propose a two-stage fusion approach with the employment of stimulus-specific and algorithm-specific weights for fusing the information from different matchers based on their identification efficacy. The experimental evaluation performed on a large database of 320 subjects reveals a considerable improvement in biometric recognition accuracy, with minimal equal error rate (EER) of 5.8%, and best case Rank-1 identification rate (Rank-1 IR) of 88.6%. It should also be emphasized that although the concept of multi-stimulus fusion is currently evaluated specifically for the eye movement biometrics, it can be adopted by other biometric modalities too, in cases when an exogenous stimulus affects the extraction of the biometric features. |
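The two-stage weighted fusion described in this abstract can be pictured as score-level fusion: each matcher's similarity score is first combined with algorithm-specific weights within a stimulus, and the per-stimulus scores are then combined with stimulus-specific weights. The sketch below is purely illustrative and is not the authors' implementation; the function name, dictionary layout, weight values, and scores are all hypothetical, and it assumes per-matcher scores are already normalized to a common range.

```python
# Hypothetical sketch of two-stage weighted score-level fusion for
# eye-movement biometrics. Not the paper's actual implementation:
# all names, weights, and scores below are made up for illustration.

def fuse(scores, stim_weights, algo_weights):
    """Combine matcher scores in two weighted stages.

    scores[stimulus][algorithm] -> normalized similarity score for one
    probe/gallery comparison. stim_weights and algo_weights are assumed
    to be derived beforehand from each source's identification efficacy.
    """
    fused = 0.0
    for stim, per_algo in scores.items():
        # Stage 1: algorithm-specific weighting within each stimulus type.
        stim_score = sum(algo_weights[a] * s for a, s in per_algo.items())
        # Stage 2: stimulus-specific weighting across stimulus types.
        fused += stim_weights[stim] * stim_score
    return fused

# Example with made-up scores and weights:
scores = {"text": {"physical": 0.9, "behavioral": 0.6},
          "video": {"physical": 0.4, "behavioral": 0.8}}
print(fuse(scores,
           stim_weights={"text": 0.6, "video": 0.4},
           algo_weights={"physical": 0.7, "behavioral": 0.3}))
```

In a full pipeline, the fused score for each gallery identity would then feed a threshold decision (verification) or a ranking (identification), which is where metrics like EER and Rank-1 IR are computed.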
Aaron Veldre; Sally Andrews Is semantic preview benefit due to relatedness or plausibility? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 7, pp. 939–952, 2016. @article{Veldre2016,There is increasing evidence that skilled readers of English benefit from processing a parafoveal preview of a semantically related word. However, in previous investigations of semantic preview benefit using the gaze-contingent boundary paradigm the semantic relatedness between the preview and target has been confounded with the plausibility of the preview word in the sentence. In the present study, preview relatedness and plausibility were independently manipulated in neutral sentences read by a large sample of skilled adult readers. Participants were assessed on measures of reading and spelling ability to identify possible sources of individual differences in preview effects. The results showed that readers benefited from a preview of a plausible word, regardless of the semantic relatedness of the preview and the target. However, there was limited evidence of a semantic relatedness benefit when the plausibility of the preview was controlled. The plausibility preview benefit was strongest for low proficiency readers, suggesting that poorer readers were more likely to program a forward saccade based on information extracted from the preview. High proficiency readers showed equivalent disruption from all nonidentical previews suggesting that they were more likely to suffer interference from the orthographic mismatch between preview and target. |
Timothy J. Slattery; Mark Yates; Bernhard Angele Interword and interletter spacing effects during reading revisited: Interactions with word and font characteristics Journal Article In: Journal of Experimental Psychology: Applied, vol. 22, no. 4, pp. 406–422, 2016. @article{Slattery2016,Despite the large number of eye movement studies conducted over the past 30+ years, relatively few have examined the influence that font characteristics have on reading. However, there has been renewed interest in 1 particular font characteristic, letter spacing, which has both theoretical (visual word recognition) and applied (font design) importance. Recently published results indicating that letter spacing has a bigger impact on the reading performance of dyslexic children have perhaps garnered the most attention (Zorzi et al., 2012). Unfortunately, the effects of increased interletter spacing have been mixed, with some authors reporting facilitation and others reporting inhibition (van den Boer & Hakvoort, 2015). The authors present findings from 3 experiments designed to resolve the seemingly inconsistent letter-spacing effects and provide clarity to researchers and font designers. The results indicate that the direction of spacing effects depends on the size of the default spacing chosen by font developers. Experiment 3 found that interletter spacing interacts with interword spacing, as the required space between words depends on the amount of space used between letters. Interword spacing also interacted with word type, as the inhibition seen with smaller interword spacing was evident with nouns and verbs but not with function words. |
B. J. Sleezer; M. D. Castagno; Benjamin Y. Hayden Rule encoding in orbitofrontal cortex and striatum guides selection Journal Article In: Journal of Neuroscience, vol. 36, no. 44, pp. 11223–11237, 2016. @article{sch16,Active maintenance of rules, like other executive functions, is often thought to be the domain of a discrete executive system. An alternative view is that rule maintenance is a broadly distributed function relying on widespread cortical and subcortical circuits. Tentative evidence supporting this view comes from research showing some rule selectivity in the orbitofrontal cortex and dorsal striatum. We recorded in these regions and in the ventral striatum, which has not been associated previously with rule representation, as macaques performed a Wisconsin Card Sorting Task. We found robust encoding of rule category (color vs shape) and rule identity (six possible rules) in all three regions. Rule identity modulated responses to potential choice targets, suggesting that rule information guides behavior by highlighting choice targets. The effects that we observed were not explained by differences in behavioral performance across rules and thus cannot be attributed to reward expectation. Our results suggest that rule maintenance and rule-guided selection of options are distributed processes and provide new insight into orbital and striatal contributions to executive control. |
Aiping Wang; Junmo Yeon; Wei Zhou; Hua Shu; Ming Yan Cross-language parafoveal semantic processing: Evidence from Korean-Chinese bilinguals Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 1, pp. 285–290, 2016. @article{Wang2016,In the present study, we aimed at testing cross-language cognate and semantic preview effects. We tested how native Korean readers who learned Chinese as a second language make use of the parafoveal information during the reading of Chinese sentences. There were 3 types of Korean preview words: cognate translations of the Chinese target words, semantically related noncognate words, and unrelated words. Together with a highly significant cognate preview effect, more critically, we also observed reliable facilitation in processing of the target word from the semantically related previews in all fixation measures. Results from the present study provide the first evidence for semantic processing from parafoveally presented Korean words and for cross-language parafoveal semantic processing. |
Chuanli Zang; Manman Zhang; Xuejun Bai; Guoli Yan; Kevin B. Paterson; Simon P. Liversedge Effects of word frequency and visual complexity on eye movements of young and older Chinese readers Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 7, pp. 1409–1425, 2016. @article{Zang2016a,Research using alphabetic languages shows that, compared to young adults, older adults employ a risky reading strategy in which they are more likely to guess word identities and skip words to compensate for their slower processing of text. However, little is known about how ageing affects reading behaviour for naturally unspaced, logographic languages like Chinese. Accordingly, to assess the generality of age-related changes in reading strategy across different writing systems we undertook an eye movement investigation of adult age differences in Chinese reading. Participants read sentences containing a target word (a single Chinese character) that had a high or low frequency of usage and was constructed from either few or many character strokes, and so either visually simple or complex. Frequency and complexity produced similar patterns of influence for both age groups on skipping rates and fixation times for target words. Both groups therefore demonstrated sensitivity to these manipulations. But compared to the young adults, the older adults made more and longer fixations and more forward and backward eye movements overall. They also fixated the target words for longer, especially when these were visually complex. Crucially, the older adults skipped words less and made shorter progressive saccades. Therefore, in contrast with findings for alphabetic languages, older Chinese readers appear to use a careful reading strategy according to which they move their eyes cautiously along lines of text and skip words infrequently. We propose they use this more careful reading strategy to compensate for increased difficulty processing word boundaries in Chinese. |
Ricarda Schmidt; Patrick Lüthold; Rebekka Kittel; Anne Tetzlaff; Anja Hilbert Visual attentional bias for food in adolescents with binge-eating disorder Journal Article In: Journal of Psychiatric Research, vol. 80, pp. 22–29, 2016. @article{slkth16,Evidence suggests that adults with binge-eating disorder (BED) are prone to having their attention interfered with by food cues, and that food-related attentional biases are associated with calorie intake and eating disorder psychopathology. For adolescents with BED, experimental evidence on attentional processing of food cues is lacking. Using eye-tracking and a visual search task, the present study examined visual orienting and disengagement processes of food in youth with BED. Eye-movement data and reaction times were recorded in 25 adolescents (12-20 years) with BED and 25 controls (CG) individually matched for sex, age, body mass index, and socio-economic status. During a free exploration paradigm, the BED group showed a greater gaze duration bias for food images than the CG. Groups did not differ in gaze direction biases. In a visual search task, the BED group showed a greater detection bias for food targets than the CG. Group differences were more pronounced for personally attractive than unattractive food images. Regarding clinical associations, only in the BED group was the gaze duration bias for food associated with increased hunger and lower body mass index, and the detection bias for food targets associated with greater reward sensitivity. The study provided the first evidence of an attentional bias to food in adolescents with BED. However, more research is needed for further specifying disengagement and orienting processes in adolescent BED, including overt and covert attention, and their prospective associations with binge-eating behaviors and associated psychopathology. |
Brianna J. Sleezer; Benjamin Y. Hayden Differential contributions of ventral and dorsal striatum to early and late phases of cognitive set reconfiguration Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 12, pp. 1849–1864, 2016. @article{Sleezer2016,Flexible decision-making, a defining feature of human cognition, is typically thought of as a canonical pFC function. Recent work suggests that the striatum may participate as well; however, its role in this process is not well understood. We recorded activity of neurons in both the ventral (VS) and dorsal (DS) striatum while rhesus macaques performed a version of the Wisconsin Card Sorting Test, a classic test of flexibility. Our version of the task involved a trial-and-error phase before monkeys could identify the correct rule on each block. We observed changes in firing rate in both regions when monkeys switched rules. Specifically, VS neurons demonstrated switch-related activity early in the trial-and-error period when the rule needed to be updated, and a portion of these neurons signaled information about the switch context (i.e., whether the switch was intradimensional or extradimensional). Neurons in both VS and DS demonstrated switch-related activity at the end of the trial-and-error period, immediately before the rule was fully established and maintained, but these signals did not carry any information about switch context. We also observed associative learning signals (i.e., specific responses to options associated with rewards in the presentation period before choice) that followed the same pattern as switch signals (early in VS, later in DS). Taken together, these results endorse the idea that the striatum participates directly in cognitive set reconfiguration and suggest that single neurons in the striatum may contribute to a functional handoff from the VS to the DS during reconfiguration processes. |
Rick A. Adams; Markus Bauer; Dimitris Pinotsis; Karl J. Friston Dynamic causal modelling of eye movements during pursuit: Confirming precision-encoding in V1 using MEG Journal Article In: Neuroimage, vol. 132, pp. 175–189, 2016. @article{Adams2016,This paper shows that it is possible to estimate the subjective precision (inverse variance) of Bayesian beliefs during oculomotor pursuit. Subjects viewed a sinusoidal target, with or without random fluctuations in its motion. Eye trajectories and magnetoencephalographic (MEG) data were recorded concurrently. The target was periodically occluded, such that its reappearance caused a visual evoked response field (ERF). Dynamic causal modelling (DCM) was used to fit models of eye trajectories and the ERFs. The DCM for pursuit was based on predictive coding and active inference, and predicts subjects' eye movements based on their (subjective) Bayesian beliefs about target (and eye) motion. The precisions of these hierarchical beliefs can be inferred from behavioural (pursuit) data. The DCM for MEG data used an established biophysical model of neuronal activity that includes parameters for the gain of superficial pyramidal cells, which is thought to encode precision at the neuronal level. Previous studies (using DCM of pursuit data) suggest that noisy target motion increases subjective precision at the sensory level: i.e., subjects attend more to the target's sensory attributes. We compared (noisy motion-induced) changes in the synaptic gain based on the modelling of MEG data to changes in subjective precision estimated using the pursuit data. We demonstrate that imprecise target motion increases the gain of superficial pyramidal cells in V1 (across subjects). Furthermore, increases in sensory precision – inferred by our behavioural DCM – correlate with the increase in gain in V1, across subjects. 
This is a step towards a fully integrated model of brain computations, cortical responses and behaviour that may provide a useful clinical tool in conditions like schizophrenia. |
Stefan Frässle; Sören Krach; Frieder M. Paulus; Andreas Jansen Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing Journal Article In: Scientific Reports, vol. 6, pp. 27153, 2016. @article{Fraessle2016,While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. |
Hanna Gertz; Maximilian Hilger; Mathias Hegele; Katja Fiehler Violating instructed human agency: An fMRI study on ocular tracking of biological and nonbiological motion stimuli Journal Article In: NeuroImage, vol. 138, pp. 109–122, 2016. @article{Gertz2016,Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that the instructed agency biases the selection of predictive models on the movement trajectory of the dot motion. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception–action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial–frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. 
This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model causes costs as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information. |
Seppo Vainio; Anneli Pajunen; Jukka Hyönä Processing modifier–head agreement in L1 and L2 Finnish: An eye-tracking study Journal Article In: Second Language Research, vol. 32, no. 1, pp. 3–24, 2016. @article{Vainio2016,This study investigated the effect of first language (L1) on the reading of modifier-head case agreement in second language (L2) Finnish by native Russian and Chinese speakers. Russian is similar to Finnish in that both languages use case endings to mark grammatical roles, whereas such markings are absent in Chinese. The critical nouns were embedded in sentences, where the head noun was either preceded by an agreeing modifier or the modifier was absent. Readers' eye fixation patterns were used as indices of online processing. Both natives and non-natives showed a facilitatory effect of agreement; reading head nouns was easier when they were preceded by an agreeing modifier. Typological distance in terms of the structural complexity of words between L1 and L2 did not influence the processing. |
Bernhard Angele; Timothy J. Slattery; Keith Rayner Two stages of parafoveal processing during reading: Evidence from a display change detection task Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 4, pp. 1241–1249, 2016. @article{Angele2016,We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Journal of Experimental Psychology: Human Perception and Performance, 37, 1924–1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage. |
Moreno I. Coco; Frank Keller; George L. Malcolm Anticipation in real-world scenes: The role of visual context and visual memory Journal Article In: Cognitive Science, vol. 40, no. 8, pp. 1995–2024, 2016. @article{Coco2016,The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. |
Stefan Frässle; Frieder M. Paulus; Sören Krach; Stefan Robert Schweinberger; Klaas Enno Stephan; Andreas Jansen Mechanisms of hemispheric lateralization: Asymmetric interhemispheric recruitment in the face perception network Journal Article In: NeuroImage, vol. 124, pp. 977–988, 2016. @article{Fraessle2016a,Perceiving human faces constitutes a fundamental ability of the human mind, integrating a wealth of information essential for social interactions in everyday life. Neuroimaging studies have unveiled a distributed neural network consisting of multiple brain regions in both hemispheres. Whereas the individual regions in the face perception network and the right-hemispheric dominance for face processing have been subject to intensive research, the functional integration among these regions and hemispheres has received considerably less attention. Using dynamic causal modeling (DCM) for fMRI, we analyzed the effective connectivity between the core regions in the face perception network of healthy humans to unveil the mechanisms underlying both intra- and interhemispheric integration. Our results suggest that the right-hemispheric lateralization of the network is due to an asymmetric face-specific interhemispheric recruitment at an early processing stage - that is, at the level of the occipital face area (OFA) but not the fusiform face area (FFA). As a structural correlate, we found that OFA gray matter volume was correlated with this asymmetric interhemispheric recruitment. Furthermore, exploratory analyses revealed that interhemispheric connection asymmetries were correlated with the strength of pupil constriction in response to faces, a measure with potential sensitivity to holistic (as opposed to feature-based) processing of faces. 
Overall, our findings thus provide a mechanistic description for lateralized processes in the core face perception network, point to a decisive role of interhemispheric integration at an early stage of face processing among bilateral OFA, and tentatively indicate a relation to individual variability in processing strategies for faces. These findings provide a promising avenue for systematic investigations of the potential role of interhemispheric integration in future studies. |
Ioannis Rigas; Oleg V. Komogortsev; Reza Shadmehr Biometric recognition via eye movements: Saccadic vigor and acceleration cues Journal Article In: ACM Transactions on Applied Perception, vol. 13, no. 2, pp. 1–21, 2016. @article{Rigas2016a,Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system, and they can also open a window onto the neural functions and cognitive mechanisms related to visual attention and perception. The research field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study for the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, the biometric accuracy presents a relative improvement in the range of 31.6–33.5% for the verification scenario, and in the range of 22.3–53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimulus (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues. |
Hassan Zanganeh Momtaz; Mohammad Reza Daliri Predicting the eye fixation locations in the gray scale images in the visual scenes with different semantic contents Journal Article In: Cognitive Neurodynamics, vol. 10, no. 1, pp. 31–47, 2016. @article{ZanganehMomtaz2016,In recent years, there has been considerable interest in visual attention models (saliency maps of visual attention). These models can be used to predict eye fixation locations and thus have many applications in various fields, which can lead to better performance in machine vision systems. Most of these models need to be improved because they are based on bottom-up computation that does not consider top-down image semantic contents and often does not match actual eye fixation locations. In this study, we recorded the eye movements (i.e., fixations) of fourteen individuals who viewed images consisting of natural (e.g., landscape, animal) and man-made (e.g., building, vehicle) scenes. We extracted the fixation locations of eye movements in the two image categories. After extraction of the fixation areas (a patch around each fixation location), characteristics of these areas were evaluated as compared to non-fixation areas. The extracted features in each patch included orientation and spatial frequency. After the feature extraction phase, different statistical classifiers were trained to predict eye fixation locations from these features. This study connects eye-tracking results to the automatic prediction of salient regions of images. The results showed that it is possible to predict the eye fixation locations by using the image patches around subjects' fixation points. |
Aasef G. Shaikh; Jorge Otero-Millan; Priyanka Kumar; Fatema F. Ghasia Abnormal fixational eye movements in amblyopia Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0149953, 2016. @article{Shaikh2016,PURPOSE: Fixational saccades shift the foveal image to counteract visual fading related to neural adaptation. Drifts are slow eye movements between two adjacent fixational saccades. We quantified fixational saccades and asked whether their changes could be attributed to pathologic drifts seen in amblyopia, one of the most common causes of blindness in childhood. METHODS: Thirty-six pediatric subjects with varying severity of amblyopia and eleven healthy age-matched controls held their gaze on a visual target. Eye movements were measured with high-resolution video-oculography during fellow eye-viewing and amblyopic eye-viewing conditions. Fixational saccades and drifts were analyzed in the amblyopic and fellow eye and compared with controls. RESULTS: We found an increase in the amplitude with decreased frequency of fixational saccades in children with amblyopia. These alterations in fixational eye movements correlated with the severity of their amblyopia. There was also an increase in eye position variance during drifts in amblyopes. There was no correlation between the eye position variance or the eye velocity during ocular drifts and the amplitude of the subsequent fixational saccade. Our findings suggest that abnormalities in fixational saccades in amblyopia are independent of the ocular drift. DISCUSSION: This investigation of amblyopia in the pediatric age group quantitatively characterizes fixation instability. Impaired properties of fixational saccades could be the consequence of abnormal processing and reorganization of the visual system in amblyopia. A paucity of visual feedback during the amblyopic eye-viewing condition may contribute to the increased eye position variance and drift velocity. |
Lei Zhou; Yang-Yang Zhang; Zuo-Jun Wang; Li-Lin Rao; Wei Wang; Shu Li; Xingshan Li; Zhu-Yuan Liang A scanpath analysis of the risky decision-making process Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 169–182, 2016. @article{Zhou2016,In the field of eye tracking, scanpath analysis can reflect the sequential and temporal properties of the cognitive process. However, the advantages of scanpath analysis have not yet been utilized in the study of risky decision making. We explored the methodological applicability of scanpath analysis to test models of risky decision making by analyzing published data from the eye-tracking studies of Su et al. (2013), Wang and Li (2012), and Sun, Rao, Zhou, and Li (2014). These studies used a proportion task, an outcome-matched presentation condition, and a multiple-play condition as the baseline for comparison with information search and processing in the risky decision-making condition. We found that (i) the similarity scores of the intra-conditions were significantly higher than those of the inter-condition; (ii) the scanpaths of the two conditions were separable; and (iii) based on an inspection of typical trials, the patterns of the scanpaths differed between the two conditions. These findings suggest that scanpath analysis is reliable and valid for examining the process of risky decision making. In line with the findings of the three original studies, our results indicate that risky decision making is unlikely to be based on a weighting and summing process, as hypothesized by the family of expectation models. The findings highlight a new methodological direction for research on decision making. |
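Scanpath similarity of the kind reported in this entry is commonly computed as a normalized string-edit (Levenshtein) distance over the sequence of regions of interest (ROIs) visited by fixations. The sketch below is a minimal, hedged illustration of that general technique, not the authors' actual analysis code; the single-letter ROI labels are hypothetical.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # cost of deleting all of a[:i]
    for j in range(n + 1):
        d[0][j] = j          # cost of inserting all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution/match
    return d[m][n]

def scanpath_similarity(a, b):
    """Normalized similarity in [0, 1]; 1.0 means identical ROI sequences."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

With ROI strings such as `"ABAB"` (alternating between two gamble attributes) versus `"AABB"` (reading one option fully before the other), higher intra-condition than inter-condition similarity scores would correspond to finding (i) in the abstract.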
Mallory Frayn; Christopher R. Sears; Kristin M. von Ranson A sad mood increases attention to unhealthy food images in women with food addiction Journal Article In: Appetite, vol. 100, pp. 55–63, 2016. @article{Frayn2016,Food addiction and emotional eating both influence eating and weight, but little is known about how negative mood affects the attentional processes that may contribute to food addiction. The purpose of this study was to compare attention to food images in adult women (N = 66) with versus without food addiction, before and after a sad mood induction (MI). Participants' eye fixations were tracked and recorded throughout 8-s presentations of displays with healthy food, unhealthy food, and non-food images. Food addiction was self-reported using the Yale Food Addiction Scale. The sad MI involved watching an 8-min video about a young child who passed away from cancer. It was predicted that: (1) participants in the food addiction group would attend to unhealthy food significantly more than participants in the control group, and (2) participants in the food addiction group would increase their attention to unhealthy food images following the sad MI, due to increased emotional reactivity and poorer emotional regulation. As predicted, the sad MI had a different effect for those with versus without food addiction: for participants with food addiction, attention to unhealthy images increased following the sad MI and attention to healthy images decreased, whereas for participants without food addiction the sad MI did not alter attention to food. These findings contribute to researchers' understanding of the cognitive factors underlying food addiction. |
Karin Heidlmayr; Karine Doré-Mazars; Xavier Aparicio; Frédéric Isel In: PLoS ONE, vol. 11, no. 11, pp. e0165029, 2016. @article{Heidlmayr2016,In the present electroencephalographical study, we asked to which extent executive control processes are shared by both the language and motor domains. The rationale was to examine whether executive control processes whose efficiency is reinforced by the frequent use of a second language can lead to a benefit in the control of eye movements, i.e. a non-linguistic activity. For this purpose, we administered an antisaccade task, i.e. a specific motor task involving control, to 19 highly proficient late French-German bilingual participants and to a control group of 20 French monolingual participants. In this task, an automatic saccade has to be suppressed while a voluntary eye movement in the opposite direction has to be carried out. Here, our main hypothesis is that an advantage in the antisaccade task should be observed in the bilinguals if some properties of the control processes are shared between linguistic and motor domains. ERP data revealed clear differences between bilinguals and monolinguals. Critically, we showed an increased N2 effect size in bilinguals, thought to reflect better efficiency to monitor conflict, combined with reduced effect sizes on markers reflecting inhibitory control, i.e. the cue-locked positivity, the target-locked P3, and the saccade-locked presaccadic positivity (PSP). Moreover, effective connectivity analyses (dynamic causal modelling; DCM) on the neuronal source level indicated that bilinguals rely more strongly on ACC-driven control while monolinguals rely on PFC-driven control. Taken together, our combined ERP and effective connectivity findings may reflect a dynamic interplay between strengthened conflict monitoring and subsequently more efficient inhibition in bilinguals. 
Finally, L2 proficiency and immersion experience constitute relevant factors of the language background that predict efficiency of inhibition. To conclude, the present study provided ERP and effective connectivity evidence for domain-general executive control involvement in handling multiple language use, leading to a control advantage in bilingualism. |
Xaver Koch; Esther Janse Speech rate effects on the processing of conversational speech across the adult life span Journal Article In: The Journal of the Acoustical Society of America, vol. 139, no. 4, pp. 1618–1636, 2016. @article{Koch2016,This study investigates the effect of speech rate on spoken word recognition across the adult life span. Contrary to previous studies, conversational materials with a natural variation in speech rate were used rather than lab-recorded stimuli that are subsequently artificially time-compressed. It was investigated whether older adults' speech recognition is more adversely affected by increased speech rate compared to younger and middle-aged adults, and which individual listener characteristics (e.g., hearing, fluid cognitive processing ability) predict the size of the speech rate effect on recognition performance. In an eye-tracking experiment, participants indicated with a mouse-click which visually presented words they recognized in a conversational fragment. Click response times, gaze, and pupil size data were analyzed. As expected, click response times and gaze behavior were affected by speech rate, indicating that word recognition is more difficult if speech rate is faster. Contrary to earlier findings, increased speech rate affected the age groups to the same extent. Fluid cognitive processing ability predicted general recognition performance, but did not modulate the speech rate effect. These findings emphasize that earlier results of age by speech rate interactions, mainly obtained with artificially speeded materials, may not generalize to speech rate variation as encountered in conversational speech. |
Anuenue Kukona; David Braze; Clinton L. Johns; W. Einar Mencl; Julie A. Van Dyke; James S. Magnuson; Kenneth R. Pugh; Donald P. Shankweiler; Whitney Tabor The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill Journal Article In: Acta Psychologica, vol. 171, pp. 72–84, 2016. @article{Kukona2016,Recent studies have found considerable individual variation in language comprehenders' predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders' prediction ability (e.g., as reflected in predictive eye movements to a WHITE CAKE on hearing “The boy will eat the white…”). Simultaneously, comprehension-based measures predicted participants' ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible WHITE CAR). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results. |
Michelle L. Eisenberg; Jeffrey M. Zacks Ambient and focal visual processing of naturalistic activity Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–12, 2016. @article{Eisenberg2016,When people inspect a picture, they progress through two distinct phases of visual processing: an ambient, or exploratory, phase that emphasizes input from peripheral vision and rapid acquisition of low-frequency information, followed by a focal phase that emphasizes central vision, salient objects, and high-frequency information. Does this qualitative shift occur during dynamic scene viewing? If so, when? One possibility is that shifts to exploratory processing are triggered at subjective event boundaries. This shift would be adaptive, because event boundaries typically occur when activity features change and when activity becomes unpredictable. Here, we used a perceptual event segmentation task, in which people identified boundaries between meaningful units of activity, to test this hypothesis. In two studies, an eye tracker recorded eye movements and pupil size while participants first watched movies of actors engaged in everyday activities and then segmented them into meaningful events. Saccade amplitudes and fixation durations during the initial viewings suggest that event boundaries function much like the onset of a new picture during static picture presentation: Viewers initiate an ambient processing phase and then progress to focal viewing as the event progresses. These studies suggest that this shift in processing mode could play a role in the formation of mental representations of the current environment. |
Theodoros P. Zanos; Patrick J. Mineault; Daniel Guitton; Christopher C. Pack Mechanisms of saccadic suppression in primate cortical area V4 Journal Article In: Journal of Neuroscience, vol. 36, no. 35, pp. 9227–9239, 2016. @article{Zanos2016,Psychophysical studies have shown that subjects are often unaware of visual stimuli presented around the time of an eye movement. This saccadic suppression is thought to be a mechanism for maintaining perceptual stability. The brain might accomplish saccadic suppression by reducing the gain of visual responses to specific stimuli or by simply suppressing firing uniformly for all stimuli. Moreover, the suppression might be identical across the visual field or concentrated at specific points. To evaluate these possibilities, we recorded from individual neurons in cortical area V4 of nonhuman primates trained to execute saccadic eye movements. We found that both modes of suppression were evident in the visual responses of these neurons and that the two modes showed different spatial and temporal profiles: while gain changes started earlier and were more widely distributed across visual space, nonspecific suppression was found more often in the peripheral visual field, after the completion of the saccade. Peripheral suppression was also associated with increased noise correlations and stronger local field potential oscillations in the α frequency band. This pattern of results suggests that saccadic suppression shares some of the circuitry responsible for allocating voluntary attention. SIGNIFICANCE STATEMENT We explore our surroundings by looking at things, but each eye movement that we make causes an abrupt shift of the visual input. Why doesn't the world look like a film recorded on a shaky camera? The answer in part is a brain mechanism called saccadic suppression, which reduces the responses of visual neurons around the time of each eye movement. Here we reveal several new properties of the underlying mechanisms. 
First, the suppression operates differently in the central and peripheral visual fields. Second, it appears to be controlled by oscillations in the local field potentials at frequencies traditionally associated with attention. These results suggest that saccadic suppression shares the brain circuits responsible for actively ignoring irrelevant stimuli. |
Susannah F. Freebody; Gustav Kuhn Own-age biases in adults' and children's joint attention: Biased face prioritization, but not gaze following! Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 2, pp. 372–379, 2016. @article{Freebody2016,Previous studies have reported own-age biases in younger and older adults in gaze following. We investigated own-age biases in social attentional processes between adults and children by focusing on two aspects of the joint attention process: the extent to which people attend towards an individual's face, and the extent to which they fixate objects that are looked at by this person (i.e., gaze following). Participants viewed images that always contained a child and an adult who either looked towards each other or each looked at objects located to their side. Observers consistently and rapidly fixated the actors' faces, though the children were faster to fixate the child's face than the adult's face, whilst the adults were faster to fixate the adult's face than the child's face. The children also spent significantly more time fixating the child's face than the adult's face, and the opposite pattern of results was found for the adults. Whilst both adults and children prioritized objects when they were looked at by the actor, both groups showed equivalent levels of gaze following, and there was no own-age bias for gaze following. Our results show an own-age bias for prioritizing faces, but not gaze following. |
Matthew W. Lowder; Fernanda Ferreira Prediction in the processing of repair disfluencies: Evidence from the Visual-World Paradigm Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 9, pp. 1400–1416, 2016. @article{Lowder2016,Two visual-world eye-tracking experiments investigated the role of prediction in the processing of repair disfluencies (e.g., "The chef reached for some salt uh I mean some ketchup . . ."). Experiment 1 showed that listeners were more likely to fixate a critical distractor item (e.g., pepper) during the processing of repair disfluencies compared with the processing of coordination structures (e.g., ". . . some salt and also some ketchup . . ."). Experiment 2 replicated the findings of Experiment 1 for disfluency versus coordination constructions and also showed that the pattern of fixations to the critical distractor for disfluency constructions was similar to the fixation patterns for sentences employing contrastive focus (e.g., ". . . not some salt but rather some ketchup . . ."). The results suggest that similar mechanisms underlie the processing of repair disfluencies and contrastive focus, with listeners generating sets of entities that stand in semantic contrast to the reparandum in the case of disfluencies or the negated entity in the case of contrastive focus. |
E. Oberwelland; Leonhard Schilbach; I. Barisic; Sarah C. Krall; K. Vogeley; Gereon R. Fink; B. Herpertz-Dahlmann; Kerstin Konrad; Martin Schulte-Rüther Look into my eyes: Investigating joint attention using interactive eye-tracking and fMRI in a developmental sample Journal Article In: NeuroImage, vol. 130, pp. 248–260, 2016. @article{Oberwelland2016,Joint attention, the shared attentional focus of at least two people on a third significant object, is one of the earliest steps in social development and an essential aspect of reciprocal interaction. However, the neural basis of joint attention (JA) in the course of development is completely unknown. The present study made use of an interactive eye-tracking paradigm in order to examine the developmental trajectories of JA and the influence of a familiar interaction partner during the social encounter. Our results show that across children and adolescents JA elicits a similar network of "social brain" areas as well as attention and motor control associated areas as in adults. While other-initiated JA particularly recruited visual, attention and social processing areas, self-initiated JA specifically activated areas related to social cognition, decision-making, emotions and motivational/reward processes highlighting the rewarding character of self-initiated JA. Activation was further enhanced during self-initiated JA with a familiar interaction partner. With respect to developmental effects, activation of the precuneus declined from childhood to adolescence and additionally shifted from a general involvement in JA towards a more specific involvement for self-initiated JA. Similarly, the temporoparietal junction (TPJ) was broadly involved in JA in children and more specialized for self-initiated JA in adolescents. 
Taken together, this study provides the first data on the developmental trajectories of JA and the effect of a familiar interaction partner, incorporating the interactive character of JA, its reciprocity, and its motivational aspects. |
Peiyun Zhou; Kiel Christianson I “hear” what you're “saying”: Auditory perceptual simulation, reading speed, and reading comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 5, pp. 972–995, 2016. @article{Zhou2016a,Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers' attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading. |
Sarah R. Heilbronner; Benjamin Y. Hayden The description-experience gap in risky choice in nonhuman primates Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 593–600, 2016. @article{Heilbronner2016,Risk attitudes in humans depend on the format used to present the gamble: we are more risk-averse for common gambles in the gains domain whose properties are described to us verbally than for those whose properties we learned about solely through experience. This difference, which constitutes part of the description-experience gap, is important, because it highlights the role of knowledge acquisition in decision-making. The reasons for the gap remain obscure, but could depend upon uniquely human cognitive abilities, such as those associated with language. Thus, the gap may or may not extend to nonhuman animals. For this study, rhesus monkeys performed a novel task in which the properties of some gambles were explicitly cued (described), whereas others were learned through previous choices (experienced). Our monkeys displayed a description-experience gap. Overall, monkeys were more risk-seeking for experienced than for described gambles. This difference was observed for a range of gamble probabilities (from 20% to 80% likelihood of payoff), indicating that it is not limited to low probability events. These results suggest that the description-experience gap does not depend on uniquely human cognitive abilities, such as those associated with language, and support the idea that epistemic influences on risk attitudes are evolutionarily ancient. |
Emily R. Oby; Sagi Perel; Patrick T. Sadtler; Douglas A. Ruff; Jessica L. Mischel; David F. Montez; Marlene R. Cohen; Aaron P. Batista; Steven M. Chase Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters Journal Article In: Journal of Neural Engineering, vol. 13, no. 3, pp. 1–15, 2016. @article{Oby2016,OBJECTIVE: A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). APPROACH: We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. MAIN RESULTS: The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. 
SIGNIFICANCE: How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue. |
Peiyun Zhou; Kiel Christianson Auditory perceptual simulation: Simulating speech rates or accents? Journal Article In: Acta Psychologica, vol. 168, pp. 85–90, 2016. @article{Zhou2016b,When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. |
Matthew W. Lowder; Peter C. Gordon Eye-tracking and corpus-based analyses of syntax-semantics interactions in complement coercion Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 7, pp. 921–939, 2016. @article{Lowder2016a,Previous work has shown that the difficulty associated with processing complex semantic expressions is reduced when the critical constituents appear in separate clauses as opposed to when they appear together in the same clause. We investigated this effect further, focusing in particular on complement coercion, in which an event-selecting verb (e.g. began) combines with a complement that represents an entity (e.g. began the memo). Experiment 1 compared reading times for coercion versus control expressions when the critical verb and complement appeared together in a subject-extracted relative clause (SRC) (e.g. The secretary that began/wrote the memo) compared to when they appeared together in a simple sentence. Readers spent more time processing coercion expressions than control expressions, replicating the typical coercion cost. In addition, readers spent less time processing the verb and complement in SRCs than in simple sentences; however, the magnitude of the coercion cost did not depend on sentence structure. In contrast, Experiment 2 showed that the coercion cost was reduced when the complement appeared as the head of an object-extracted relative clause (ORC) (e.g. The memo that the secretary began/wrote) compared to when the constituents appeared together in an SRC. Consistent with the eye-tracking results of Experiment 2, a corpus analysis showed that expressions requiring complement coercion are more frequent when the constituents are separated by the clause boundary of an ORC compared to when they are embedded together within an SRC. 
The results provide important information about the types of structural configurations that contribute to reduced difficulty with complex semantic expressions, as well as how these processing patterns are reflected in naturally occurring language. |
Guido Maiello; William J. Harrison; Peter J. Bex Monocular and binocular contributions to oculomotor plasticity Journal Article In: Scientific Reports, vol. 6, pp. 31861, 2016. @article{Maiello2016,Most eye movements in the real world redirect the foveae to objects at a new depth and thus require the co-ordination of monocular saccade amplitudes and binocular vergence eye movements. Additionally, to maintain the accuracy of these oculomotor control processes across the lifespan, ongoing calibration is required to compensate for errors in foveal landing positions. Such oculomotor plasticity has generally been studied under conditions in which both eyes receive a common error signal, which cannot resolve the long-standing debate regarding whether both eyes are innervated by a common cortical signal or by a separate signal for each eye. Here we examine oculomotor plasticity when error signals are independently manipulated in each eye, which can occur naturally owing to aging changes in each eye's orbit and extra-ocular muscles, or in oculomotor dysfunctions. We find that both rapid saccades and slow vergence eye movements are continuously recalibrated independently of one another and corrections can occur in opposite directions in each eye. Whereas existing models assume a single cortical representation of space employed for the control of both eyes, our findings provide evidence for independent monoculomotor and binoculomotor plasticities and dissociable spatial mapping for each eye. |
Zaeinab Afsari; José P. Ossandón; Peter Konig The dynamic effect of reading direction habit on spatial asymmetry of image perception Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–21, 2016. @article{Afsari2016,Exploration of images after stimulus onset is initially biased to the left. Here, we studied the causes of such an asymmetry and investigated effects of reading habits, text primes, and priming by systematically biased eye movements on this spatial bias in visual exploration. Bilinguals first read text primes with right-to-left (RTL) or left-to-right (LTR) reading directions and subsequently explored natural images. In Experiment 1, native RTL speakers showed a leftward free-viewing shift after reading LTR primes but a weaker rightward bias after reading RTL primes. This demonstrates that reading direction dynamically influences the spatial bias. However, native LTR speakers who learned an RTL language late in life showed a leftward bias after reading either LTR or RTL primes, which suggests the role of habit formation in the production of the spatial bias. In Experiment 2, LTR bilinguals showed a slightly enhanced leftward bias after reading LTR text primes in their second language. This might contribute to the differences of native RTL and LTR speakers observed in Experiment 1. In Experiment 3, LTR bilinguals read normal (LTR, habitual reading) and mirrored left-to-right (mLTR, nonhabitual reading) texts. We observed a strong leftward bias in both cases, indicating that the bias direction is influenced by habitual reading direction and is not secondary to the actual reading direction. This is confirmed in Experiment 4, in which LTR participants were asked to follow RTL and LTR moving dots in prior image presentation and showed no change in the normal spatial bias. In conclusion, the horizontal bias is a dynamic property and is modulated by habitual reading direction. |
Inga Meyhöfer; Katja Bertsch; Moritz Esser; Ulrich Ettinger Variance in saccadic eye movements reflects stable traits Journal Article In: Psychophysiology, vol. 53, no. 4, pp. 566–578, 2016. @article{Meyhoefer2016,Saccadic tasks are widely used to study cognitive processes, effects of pharmacological treatments, and mechanisms underlying psychiatric disorders. In genetic studies, it is assumed that saccadic endophenotypes are traits. While internal consistency and temporal stability of saccadic performance are high for most of the measures, the magnitude of underlying trait components has not been estimated, and influences of situational aspects and person by situation interactions have not been investigated. To do so, 68 healthy participants performed prosaccades, antisaccades, and memory-guided saccades on three occasions at weekly intervals at the same time of day. Latent state-trait modeling was applied to estimate the proportions of variance reflecting stable trait components, situational influences, and Person × Situation interaction effects. Mean variables for all saccadic tasks showed high to excellent reliabilities. Intraindividual standard deviations were found to be slightly less reliable. Importantly, an average of 60% of variance of a single measurement was explained by trans-situationally stable person effects, while situation aspects and interactions between person and situation were found to play a negligible role. We conclude that saccadic variables, in standard laboratory settings, represent highly reliable measures that are largely unaffected by situational influences. Extending previous reliability studies, these findings clearly demonstrate the trait-like nature of these measures and support their role as endophenotypes. |
Maria Steffens; B. Becker; C. Neumann; Anna-Maria Kasparbauer; Inga Meyhöfer; Bernd Weber; Mitul A. Mehta; R. Hurlemann; Ulrich Ettinger Effects of ketamine on brain function during smooth pursuit eye movements Journal Article In: Human Brain Mapping, vol. 37, no. 11, pp. 4047–4060, 2016. @article{Steffens2016,The uncompetitive NMDA receptor antagonist ketamine has been proposed to model symptoms of psychosis. Smooth pursuit eye movements (SPEM) are an established biomarker of schizophrenia. SPEM performance has been shown to be impaired in the schizophrenia spectrum and during ketamine administration in healthy volunteers. However, the neural mechanisms mediating SPEM impairments during ketamine administration are unknown. In a counter-balanced, placebo-controlled, double-blind, within-subjects design, 27 healthy participants received intravenous racemic ketamine (100 ng/mL target plasma concentration) on one of two assessment days and placebo (intravenous saline) on the other. Participants performed a block-design SPEM task during functional magnetic resonance imaging (fMRI) at 3 Tesla field strength. Self-ratings of psychosis-like experiences were obtained using the Psychotomimetic States Inventory (PSI). Ketamine administration induced psychosis-like symptoms: during ketamine infusion, participants showed increased ratings on the PSI dimensions cognitive disorganization, delusional thinking, perceptual distortion and mania. Ketamine led to robust deficits in SPEM performance, which were accompanied by reduced blood oxygen level dependent (BOLD) signal in the SPEM network including primary visual cortex, area V5 and the right frontal eye field (FEF), compared to placebo. A measure of connectivity with V5 and FEF as seed regions, however, was not significantly affected by ketamine. These results are similar to the deviations found in schizophrenia patients. 
Our findings support the role of glutamate dysfunction in impaired smooth pursuit performance and the use of ketamine as a pharmacological model of psychosis, especially when combined with oculomotor biomarkers. |
Pascasie L. Dombert; Gereon R. Fink; Simone Vossel The impact of probabilistic feature cueing depends on the level of cue abstraction Journal Article In: Experimental Brain Research, vol. 234, no. 3, pp. 685–694, 2016. @article{Dombert2016,Allocation of attentional resources rests on predictions about the likelihood of events. While this effect has been extensively studied in the spatial attention domain where the location of a target stimulus is pre-cued, less is known about the cueing of stimulus features such as the color of a behaviorally relevant target. Moreover, there is disagreement about which types of color cues are effective for biasing attention. Here we investigated the effects of probabilistic context (percentage of cue validity, %CV) for different levels of cue abstraction to elucidate how feature-based search information is processed and used to direct attention. The color of a target was cued by presenting the perceptual color, the color word, or two-letter abbreviations. %CV, i.e., the probability that the cue indicated the color correctly, changed unpredictably between 50, 70, and 90%. Response times (RTs) for valid and invalid trials in each %CV condition were recorded in 60 datasets and analyzed with analyses of variance. The results showed that all cues were associated with comparable RT costs after invalid cueing. The modulation of RT costs by probabilities, however, depended upon level of cue abstraction and time on task: While a strong, immediate impact of %CV was found for two-letter cueing, the effect was solely observed in the second half of the experiment for perceptual and word cues. These results demonstrate that probabilistic feature-based information is processed differently for different levels of cue abstraction. Moreover, the modulatory effect of the environmental statistics differentially depends on the time on task for different feature cues. |
Bartholomäus Odoj; Daniela Balslev Role of oculoproprioception in coding the locus of attention Journal Article In: Journal of Cognitive Neuroscience, vol. 28, no. 3, pp. 517–528, 2016. @article{Odoj2016,The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention. |
Shin-ichi Tokushige; Yasuo Terao; Shun-ichi Matsuda; Satomi Inomata-Terada; Takahiro Shimizu; Nobuyuki Tanaka; Masashi Hamada; Akihiro Yugeta; Ritsuko Hanajima; Harushi Mori; Shoji Tsuji; Yoshikazu Ugawa Motor neuron disease with saccadic abnormalities similar to progressive supranuclear palsy Journal Article In: Neurology and Clinical Neuroscience, vol. 4, pp. 146–152, 2016. @article{Tokushige2016,Background: In recent years, a variety of clinical types of amyotrophic lateral sclerosis have come to be recognized. As some patients present with oculomotor abnormalities both clinically and pathologically, the progressive supranuclear palsy variant of amyotrophic lateral sclerosis has been proposed. Aim: To describe atypical cases of motor neuron disease with abnormal extraocular movements mimicking progressive supranuclear palsy. Methods: We present three motor neuron disease patients with slow saccades, who were aged 57, 63 and 62 years. Neurological examinations found vertical gaze palsy in two patients. The two patients who presented extrapyramidal signs were regarded as motor neuron disease with parkinsonism, whereas the other was diagnosed with amyotrophic lateral sclerosis. Their saccades were investigated by visually-guided saccade and memory-guided saccade tasks, and were compared with those of 14 age-matched normal participants (60.3 +/- 1.9 years). Results: In all these patients, the visually-guided saccade latencies were significantly prolonged compared with normal participants, whereas the memory-guided saccade latencies were not. The velocity and amplitude of saccades of the patients were significantly reduced in visually-guided saccade and memory-guided saccade in comparison with normal participants. 
Conclusion: The patterns of saccadic abnormalities in the patients were similar to those of progressive supranuclear palsy patients, suggesting that some patients with motor neuron disease show saccade abnormalities similar to those of progressive supranuclear palsy patients from the clinical and physiological perspective. Motor neuron disease with slow saccades and parkinsonism, as reported here, suggest the existence of progressive supranuclear palsy-variant amyotrophic lateral sclerosis. |
Pascasie L. Dombert; Anna B. Kuhns; Paola Mengotti; Gereon R. Fink; Simone Vossel Functional mechanisms of probabilistic inference in feature- and space-based attentional systems Journal Article In: NeuroImage, vol. 142, pp. 553–564, 2016. @article{Dombert2016a,Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. |
Sharna D. Jamadar; Gary F. Egan; Vince D. Calhoun; Beth P. Johnson; Joanne Fielding In: Brain Connectivity, vol. 6, no. 6, pp. 505–517, 2016. @article{Jamadar2016,Intrinsic brain activity provides the functional framework for the brain's full repertoire of behavioural responses; that is, a common mechanism underlies intrinsic and extrinsic neural activity, with extrinsic activity building upon the underlying baseline intrinsic activity. The generation of a motor movement in response to sensory stimulation is one of the most fundamental functions of the central nervous system. Since saccadic eye movements are among our most stereotyped motor responses, we hypothesized that individual variability in the ability to inhibit a prepotent saccade and make a voluntary antisaccade would be related to individual variability in intrinsic connectivity. Twenty-three individuals completed the antisaccade task and resting-state functional magnetic resonance imaging (fMRI). A multivariate analysis of covariance identified relationships between fMRI oscillations (0.01-0.2Hz) of resting-state networks determined using high-dimensional independent component analysis (ICA) and antisaccade performance (latency, error rate). Significant multivariate relationships between antisaccade latency and directional error rate were obtained in independent components across the entire brain. Some of the relationships were obtained in components that overlapped substantially with the task, however many were obtained in components that showed little overlap with the task. The current results demonstrate that even in the absence of a task, spectral power in regions showing little overlap with task activity predicts an individual's performance on a saccade task. |
Klaartje Heinen; Laura Sagliano; Michela Candini; Masud Husain; Marinella Cappelletti; Nahid Zokaei Cathodal transcranial direct current stimulation over posterior parietal cortex enhances distinct aspects of visual working memory Journal Article In: Neuropsychologia, vol. 87, pp. 35–42, 2016. @article{Heinen2016,In this study, we investigated the effects of tDCS over the posterior parietal cortex (PPC) during a visual working memory (WM) task, which probes different sources of response error underlying the precision of WM recall. In two separate experiments, we demonstrated that tDCS enhanced WM precision when applied bilaterally over the PPC, independent of electrode configuration. In a third experiment, we demonstrated with unilateral electrode configuration over the right PPC, that only cathodal tDCS enhanced WM precision and only when baseline performance was low. Looking at the effects on underlying sources of error, we found that cathodal stimulation enhanced the probability of correct target response across all participants by reducing feature-misbinding. Only for low-baseline performers, cathodal stimulation also reduced variability of recall. We conclude that cathodal but not anodal tDCS can improve WM precision by preventing feature-misbinding and hereby enhancing attentional selection. For low-baseline performers, cathodal tDCS also protects the memory trace. Furthermore, stimulation over bilateral PPC is more potent than unilateral cathodal tDCS in enhancing general WM precision. |
Raika Pancaroglu; Charlotte S. Hills; Alla Sekunova; Jayalakshmi Viswanathan; Brad Duchaine; Jason J. S. Barton Seeing the eyes in acquired prosopagnosia Journal Article In: Cortex, vol. 81, pp. 251–265, 2016. @article{Pancaroglu2016,Case reports have suggested that perception of the eye region may be impaired more than that of other facial regions in acquired prosopagnosia. However, it is unclear how frequently this occurs, whether such impairments are specific to a certain anatomic subtype of prosopagnosia, and whether these impairments are related to changes in the scanning of faces. We studied a large cohort of 11 subjects with this rare disorder, who had a variety of occipitotemporal or anterior temporal lesions, both unilateral and bilateral. Lesions were characterized by functional and structural imaging. Subjects performed a perceptual discrimination test in which they had to discriminate changes in feature position, shape, or external contour. Test conditions were manipulated to stress focused or divided attention across the whole face. In a second experiment we recorded eye movements while subjects performed a face memory task. We found that greater impairment for eye processing was more typical of subjects with occipitotemporal lesions than those with anterior temporal lesions. This eye selectivity was evident for both eye position and shape, with no evidence of an upper/lower difference for external contour. A greater impairment for eye processing was more apparent under attentionally more demanding conditions. Despite these perceptual deficits, most subjects showed a normal tendency to scan the eyes more than the mouth. We conclude that occipitotemporal lesions are associated with a partially selective processing loss for eye information and that this deficit may be linked to loss of the right fusiform face area, which has been shown to have activity patterns that emphasize the eye region. |
Mehmet N. Ağaoğlu; Susana T. L. Chung Can (should) theories of crowding be unified? Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–22, 2016. @article{Agaoglu2016,Objects in clutter are difficult to recognize, a phenomenon known as crowding. There is little consensus on the underlying mechanisms of crowding, and a large number of models have been proposed. There have also been attempts at unifying the explanations of crowding under a single model, such as the weighted feature model of Harrison and Bex (2015) and the texture synthesis model of Rosenholtz and colleagues (Balas, Nakano, & Rosenholtz, 2009; Keshvari & Rosenholtz, 2016). The goal of this work was to test various models of crowding and to assess whether a unifying account can be developed. Adopting Harrison and Bex's (2015) experimental paradigm, we asked observers to report the orientation of two concentric C-stimuli. Contrary to the predictions of their model, observers' recognition accuracy was worse for the inner C-stimulus. In addition, we demonstrated that the stimulus paradigm used by Harrison and Bex has a crucial confounding factor, eccentricity, which limits its usage to a very narrow range of stimulus parameters. Nevertheless, reporting the orientations of both C-stimuli in this paradigm proved very useful in pitting different crowding models against each other. Specifically, we tested deterministic and probabilistic versions of averaging, substitution, and attentional resolution models as well as the texture synthesis model. None of the models alone was able to explain the entire set of data. Based on these findings, we discuss whether the explanations of crowding can (should) be unified. |
Evan G. Antzoulatos; Earl K. Miller Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations Journal Article In: eLife, vol. 5, pp. 1–22, 2016. @article{Antzoulatos2016,Categorization has been associated with distributed networks of the primate brain, including the prefrontal (PFC) and posterior parietal cortices (PPC). Although category-selective spiking in PFC and PPC has been established, the frequency-dependent dynamic interactions of frontoparietal networks are largely unexplored. We trained monkeys to perform a delayed-match-to-spatial-category task while recording spikes and local field potentials from the PFC and PPC with multiple electrodes. We found category-selective beta- and delta-band synchrony between and within the areas. However, in addition to the categories, delta synchrony and spiking activity also reflected irrelevant stimulus dimensions. By contrast, beta synchrony only conveyed information about the task-relevant categories. Further, category-selective PFC neurons were synchronized with PPC beta oscillations, while neurons that carried irrelevant information were not. These results suggest that long-range beta-band synchrony could act as a filter that only supports neural representations of the variables relevant to the task at hand. |
Mehmet N. Ağaoğlu; Aaron M. Clarke; Michael H. Herzog; Haluk Ögmen Motion-based nearest vector metric for reference frame selection in the perception of motion Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–16, 2016. @article{Agaoglu2016a,We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects. |
Loes T. E. Kessels; Peter R. Harris; Robert A. C. Ruiter; William M. P. Klein Attentional effects of self-affirmation in response to graphic antismoking images Journal Article In: Health Psychology, vol. 35, no. 8, pp. 891–897, 2016. @article{Kessels2016,Objective: Self-affirmation has been shown to reduce defensive responding to threatening information. However, little is known about the cognitive and attentional processes underlying these effects. In the current eye-movement study, the authors explored whether self-affirmation affects attention allocation (i.e., number of fixations) among those for whom a threatening health message is self-relevant. Methods: After a self-affirmation manipulation, 47 smokers and 52 nonsmokers viewed a series of cigarette packs displaying high or low threat smoking-related images accompanied by a brief smoking message containing risk, coping or neutral textual information. Results: Self-affirmed smokers made more fixations to the cigarette packs than did nonaffirmed smokers (across both high and low threat images), whereas self-affirmed nonsmokers made fewer fixations to the cigarette packs than did nonaffirmed nonsmokers (again across both image types). The textual information did not moderate responses. Conclusions: Findings indicate attention-increasing effects of self-affirmation among those for whom the information is self-relevant (smokers) and attention-decreasing effects of self-affirmation among those for whom the information is not self-relevant (nonsmokers). Such findings are consistent with the calibration model of self-affirmation (Griffin & Harris, 2011) in which self-affirmation increases sensitivity to the self-relevance of health-risk information. The use of an implicit measure of visual orienting informs our understanding of the working mechanisms of self-affirmation when encoding health information, and may also hold practical implications for the design and delivery of graphic warning labels. |
Tobias Talanow; Anna-Maria Kasparbauer; Maria Steffens; Inga Meyhöfer; Bernd Weber; Nikolaos Smyrnis; Ulrich Ettinger Facing competition: Neural mechanisms underlying parallel programming of antisaccades and prosaccades Journal Article In: Brain and Cognition, vol. 107, pp. 37–47, 2016. @article{Talanow2016,The antisaccade task is a prominent tool to investigate the response inhibition component of cognitive control. Recent theoretical accounts explain performance in terms of parallel programming of exogenous and endogenous saccades, linked to the horse race metaphor. Previous studies have tested the hypothesis of competing saccade signals at the behavioral level by selectively slowing the programming of endogenous or exogenous processes, e.g., by manipulating the probability of antisaccades in an experimental block. To gain a better understanding of inhibitory control processes in parallel saccade programming, we analyzed task-related eye movements and blood oxygenation level dependent (BOLD) responses obtained using functional magnetic resonance imaging (fMRI) at 3T from 16 healthy participants in a mixed antisaccade and prosaccade task. The frequency of antisaccade trials was manipulated across blocks of high (75%) and low (25%) antisaccade frequency. In blocks with high antisaccade frequency, antisaccade latencies were shorter and error rates lower whilst prosaccade latencies were longer and error rates were higher. At the level of BOLD, activations in the task-related saccade network (left inferior parietal lobe, right inferior parietal sulcus, left precentral gyrus reaching into left middle frontal gyrus and inferior frontal junction) and deactivations in components of the default mode network (bilateral temporal cortex, ventromedial prefrontal cortex) compensated for increased cognitive control demands. These findings illustrate context-dependent mechanisms underlying the coordination of competing decision signals in volitional gaze control. |
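The horse-race metaphor mentioned in the abstract can be sketched as a two-accumulator race: an endogenous (antisaccade) and an exogenous (prosaccade) signal rise toward a threshold, and whichever arrives first determines the response. The following is a minimal toy simulation, not the authors' model; the function names, rate parameters, and noise values are assumptions for illustration.

```python
import random

def race_trial(rate_endo, rate_exo, threshold=1.0, noise=0.2, rng=random):
    """One antisaccade trial as a race between two linear accumulators
    with Gaussian rate noise. Returns (outcome, latency). If the
    exogenous signal wins, the trial is an erroneous prosaccade."""
    t_endo = threshold / max(rng.gauss(rate_endo, noise), 1e-6)
    t_exo = threshold / max(rng.gauss(rate_exo, noise), 1e-6)
    if t_endo < t_exo:
        return ("antisaccade", t_endo)
    return ("prosaccade_error", t_exo)

def error_rate(rate_endo, rate_exo, n=5000, seed=1):
    """Fraction of trials the exogenous (error) signal wins the race."""
    rng = random.Random(seed)
    errors = sum(
        race_trial(rate_endo, rate_exo, rng=rng)[0] == "prosaccade_error"
        for _ in range(n)
    )
    return errors / n
```

In this sketch, raising the endogenous rate (as a high-antisaccade-frequency block might, per the block-probability manipulation described above) both shortens winning antisaccade latencies and lowers the error rate, mirroring the behavioral pattern reported in the abstract.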
Ming Chen; Peichao Li; Shude Zhu; Chao Han; Haoran Xu; Yang Fang; Jiaming Hu; Anna W. Roe; Haidong D. Lu An orientation map for motion boundaries in macaque V2 Journal Article In: Cerebral Cortex, vol. 26, no. 1, pp. 279–287, 2016. @article{Chen2016e,The ability to extract the shape of moving objects is fundamental to visual perception. However, where such computations are processed in the visual system is unknown. To address this question, we used intrinsic signal optical imaging in awake monkeys to examine cortical response to perceptual contours defined by motion contrast (motion boundaries, MBs). We found that MB stimuli elicit a robust orientation response in area V2. Orientation maps derived from subtraction of orthogonal MB stimuli aligned well with the orientation maps obtained with luminance gratings (LGs). In contrast, area V1 responded well to LGs, but exhibited a much weaker orientation response to MBs. We further show that V2 direction domains respond to motion contrast, which is required in the detection of MB in V2. These results suggest that V2 represents MB information, an important prerequisite for shape recognition and figure-ground segregation. |
Stephen J. Heinen; Elena Potapchuk; Scott N. J. Watamaniuk A foveal target increases catch-up saccade frequency during smooth pursuit Journal Article In: Journal of Neurophysiology, vol. 115, no. 3, pp. 1220–1227, 2016. @article{Heinen2016a,Images that move rapidly across the retina of the human eye blur because the retina has sluggish temporal dynamics. Voluntary smooth pursuit eye movements are modeled as matching object velocity to minimize retinal motion and prevent retinal blurring. However, "catch-up" saccades that are ubiquitous during pursuit interrupt it and disrupt clear vision. But catch-up saccades may not be a common feature of ocular pursuit, because their existence has been documented with a small moving spot, the classic pursuit stimulus, which is a weak motion stimulus that may poorly emulate larger pursuit objects. We found that spot pursuit does not generalize to that of larger objects. Observers pursued a spot or a larger virtual object with or without a superimposed spot target. Single-spot targets produced lower pursuit acceleration than larger objects. Critically, more saccadic intrusions occurred when stimuli had a central dot, even when position and velocity errors were equated, suggesting that catch-up saccades result from pursuing a single, small object or a feature on a large one. To determine what differentiates a large object from a small one, we progressively shrank the featureless virtual object and found that catch-up saccade frequency was highest when it fit in the fovea. The results suggest that pursuit of a small target or an object feature recruits a saccade mechanism that does not compensate for a weak motion signal; rather, the target compels foveation. Furthermore, catch-up saccades are likely generated by neural circuitry typically used to foveate small objects or features. |
Jeremy M. Wolfe; Mia K. Markey; Gezheng Wen; Trafton Drew; Avigael Aizenman; Tamara Miner Haygood Computational assessment of visual search strategies in volumetric medical images Journal Article In: Journal of Medical Imaging, vol. 3, no. 1, pp. 1–12, 2016. @article{Wolfe2016,When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: “drilling” (restrict eye movements to a small region of the image while quickly scrolling through slices), or “scanning” (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either “drilling” or “scanning” when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus “drilling” may be more efficient than “scanning.” |
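Comparing gaze distributions with saliency maps, as described in the abstract, is commonly done with metrics such as normalized scanpath saliency (NSS): the saliency map is z-scored and averaged at the fixated locations. The sketch below shows that general technique; the abstract does not specify the authors' exact metric, so this is an assumed illustration, not their method.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """NSS: mean of the z-scored saliency map at fixation coordinates.
    `fixations` is a list of (row, col) pixel indices; values above 0
    mean fixations land on more-salient-than-average locations."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[r, c] for r, c in fixations]))
```

With such a score one could, for example, compute NSS of drillers' fixations against both a 2-D and a 3-D dynamic saliency map and compare which map better predicts the observed gaze, in the spirit of the comparison the abstract describes.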
