All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2016 |
Yoshiyuki Ueda; Atsuko Tominaga; Shogo Kajimura; Michio Nomura Spontaneous eye blinks during creative task correlate with divergent processing Journal Article In: Psychological Research, vol. 80, no. 4, pp. 652–659, 2016. @article{Ueda2016, Creativity consists of divergent and convergent thinking, with both related to individual eye blinks at rest. To assess underlying mechanisms between eye blinks and traditional creativity tasks, we investigated the relationship between creativity performance and eye blinks at rest and during tasks. Participants performed an alternative uses and remote association task while eye blinks were recorded. Results showed that the relationship between eye blinks at rest and creativity performance was consistent with previous research. Interestingly, we found that the generation of ideas increased as a function of eye blink number during the alternative uses task. On the other hand, during the remote association task, accuracy was independent of eye blink number during the task, but response time increased with it. Moreover, eye blink changes in participants who responded quickly during the remote association task differed depending on their resting-state eye blinks; that is, participants with many eye blinks during rest showed little increase in eye blinks and achieved solutions quickly. Positive correlations between eye blinks during creative tasks and idea generation on the alternative uses task and response time on the remote association task suggest that eye blinks during creativity tasks relate to divergent thinking processes such as conceptual reorganization. |
Sandra Utz; Claus-Christian Carbon Is the Thatcher illusion modulated by face familiarity? Evidence from an eye tracking study Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0163933, 2016. @article{Utz2016, Thompson (1980) first detected and described the Thatcher Illusion, where participants instantly perceive an upright face with inverted eyes and mouth as grotesque, but fail to do so when the same face is inverted. One prominent but controversial explanation is that the processing of configural information is disrupted in inverted faces. Studies investigating the Thatcher Illusion have used either famous faces or non-famous faces. Highly familiar faces are often thought to be processed in a pronounced configural mode, so they seem ideal candidates to be tested against unfamiliar faces in a single Thatcher study–but this has never been addressed so far. In our study, participants evaluated 16 famous and 16 non-famous faces for their grotesqueness. We tested whether familiarity (famous/non-famous faces) modulates reaction times, correctness of grotesqueness assessments (accuracy), and eye movement patterns for the factors orientation (upright/inverted) and Thatcherisation (Thatcherised/non-Thatcherised). On a behavioural level, familiarity effects were only observable via face inversion (higher accuracy and sensitivity for famous compared to non-famous faces) but not via Thatcherisation. Regarding eye movements, however, Thatcherisation influenced the scanning of famous and non-famous faces, for instance, in scanning the mouth region of the presented faces (higher number, duration and dwell time of fixations for famous compared to non-famous faces if Thatcherised). Altogether, famous faces seem to be processed in a more elaborate, more expertise-based way than non-famous faces, whereas non-famous, inverted faces seem to cause difficulties in accurate and sensitive processing. Results are further discussed in the face of existing studies of familiar vs. unfamiliar face processing. |
Seppo Vainio; Anneli Pajunen; Jukka Hyönä Processing modifier–head agreement in L1 and L2 Finnish: An eye-tracking study Journal Article In: Second Language Research, vol. 32, no. 1, pp. 3–24, 2016. @article{Vainio2016, This study investigated the effect of first language (L1) on the reading of modifier-head case agreement in second language (L2) Finnish by native Russian and Chinese speakers. Russian is similar to Finnish in that both languages use case endings to mark grammatical roles, whereas such markings are absent in Chinese. The critical nouns were embedded in sentences, where the head noun was either preceded by an agreeing modifier or the modifier was absent. Readers' eye fixation patterns were used as indices of online processing. Both natives and non-natives showed a facilitatory effect of agreement; reading head nouns was easier when they were preceded by an agreeing modifier. Typological distance in terms of the structural complexity of words between L1 and L2 did not influence the processing. |
Marieke E. Nieuwenhuijzen; Eva W. P. Borne; Ole Jensen; Marcel A. J. Gerven Spatiotemporal dynamics of cortical representations during and after stimulus presentation Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 42, 2016. @article{Nieuwenhuijzen2016, Visual perception is a spatiotemporally complex process. In this study, we investigated cortical dynamics during and after stimulus presentation. We observed that visual category information related to the difference between faces and objects became apparent in the occipital lobe after 63 ms. Within the next 110 ms, activation spread out to include the temporal lobe before returning to residing mainly in the occipital lobe again. After stimulus offset, a peak in information was observed, comparable to the peak after stimulus onset. Moreover, similar processes, albeit not identical, seemed to underlie both peaks. Information about the categorical identity of the stimulus remained present until 677 ms after stimulus offset, during which period the stimulus had to be retained in working memory. Activation patterns initially resembled those observed during stimulus presentation. After about 200 ms, however, this representation changed and class-specific activity became more equally distributed over the four lobes. These results show that, although there are common processes underlying stimulus representation both during and after stimulus presentation, these representations change depending on the specific stage of perception and maintenance. |
Zehui Zhan; Lei Zhang; Hu Mei; Patrick S. W. Fong Online learners' reading ability detection based on eye-tracking sensors Journal Article In: Sensors, vol. 16, pp. 1457, 2016. @article{Zhan2016, The detection of university online learners' reading ability is generally problematic and time-consuming. Eye-tracking sensors were therefore employed in this study to record temporal and spatial human eye movements. Learners' pupils, blinks, fixations, saccades, and regressions were used as primary indicators for detecting reading ability. A computational model was established from the empirical eye-tracking data by applying a multi-feature regularization machine-learning mechanism based on a low-rank constraint. The model shows good generalization ability, with an error of only 4.9% when randomly run 100 times. It has clear advantages in saving time and improving precision, with only 20 min of testing required to predict an individual learner's reading ability. |
Yan Zhang; Xiaochuan Pan; Rubin Wang; Masamichi Sakagami Functional connectivity between prefrontal cortex and striatum estimated by phase locking value Journal Article In: Cognitive Neurodynamics, vol. 10, no. 3, pp. 245–254, 2016. @article{Zhang2016, The interplay between the prefrontal cortex (PFC) and striatum has an important role in cognitive processes. To investigate interactive functions between the two areas in reward processing, we recorded local field potentials (LFPs) simultaneously from the two areas of two monkeys performing a reward prediction task (large reward vs small reward). The power of the LFPs was calculated in three frequency bands: the beta band (15–29 Hz), the low gamma band (30–49 Hz), and the high gamma band (50–100 Hz). We found that both the PFC and striatum encoded the reward information in the beta band. The reward information was also found in the high gamma band in the PFC, but not in the striatum. We further calculated the phase-locking value (PLV) between two LFP signals to measure the phase synchrony between the PFC and striatum. Significant differences were found between PLVs in different task periods and in different frequency bands. In the beta band, PLVs were significantly higher in the small reward condition than in the large reward condition. In contrast, PLVs in the high gamma band were stronger in large reward trials than in small reward trials. These results suggest that the functional connectivity between the PFC and striatum depends on the task periods and reward conditions. The beta synchrony between the PFC and striatum may regulate behavioral outputs of the monkeys in the small reward condition. |
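The phase-locking value used in the study above has a standard definition: band-pass filter the two signals, extract instantaneous phases with the Hilbert transform, and take the magnitude of the mean phase-difference vector. A minimal sketch of that computation follows (an illustration of the general method, not the authors' analysis code; the beta-band limits come from the abstract, while the filter order and test signals are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, fs, lo, hi, order=4):
    """Band-pass filter a signal and return its instantaneous phase (radians)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def plv(x, y, fs, lo, hi):
    """Phase-locking value between two signals in the [lo, hi] Hz band.

    PLV = |mean(exp(i * (phi_x - phi_y)))|, ranging from 0 (no phase
    relationship) to 1 (constant phase lag)."""
    dphi = band_phase(x, fs, lo, hi) - band_phase(y, fs, lo, hi)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Example: two beta-band (20 Hz) sinusoids with a fixed phase lag.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 20 * t)
y = np.sin(2 * np.pi * 20 * t + 0.8)
print(plv(x, y, fs, 15, 29))  # close to 1
```

Two signals locked at a fixed phase lag give a PLV near 1; independent signals give a value near 0, shrinking toward 0 as the recording lengthens.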
Huihui Zhou; Robert John Schafer; Robert Desimone Pulvinar-cortex interactions in vision and attention Journal Article In: Neuron, vol. 89, no. 1, pp. 209–220, 2016. @article{Zhou2016c, The ventro-lateral pulvinar is reciprocally connected with the visual areas of the ventral stream that are important for object recognition. To understand the mechanisms of attentive stimulus processing in this pulvinar-cortex loop, we investigated the interactions between the pulvinar, area V4, and IT cortex in a spatial-attention task. Sensory processing and the influence of attention in the pulvinar appeared to reflect its cortical inputs. However, pulvinar deactivation led to a reduction of attentional effects on firing rates and gamma synchrony in V4, a reduction of sensory-evoked responses and overall gamma coherence within V4, and severe behavioral deficits in the affected portion of the visual field. Conversely, pulvinar deactivation caused an increase in low-frequency cortical oscillations, often associated with inattention or sleep. Thus, cortical interactions with the ventro-lateral pulvinar are necessary for normal attention and sensory processing and for maintaining the cortex in an active state. The pulvinar is often proposed to modulate cortical processing with attention. Zhou et al. find that beyond any role in attention, the pulvinar input to cortex seems necessary to maintain the cortex in an active state. |
Jifan Zhou; Chia-Lin Lee; Kuei-An Li; Yung-Hsuan Tien; Su-Ling Yeh Does temporal integration occur for unrecognizable words in visual crowding? Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0149355, 2016. @article{Zhou2016d, Visual crowding - the inability to see an object when it is surrounded by flankers in the periphery - does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context and so affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration - the simplest kind of temporal semantic integration - did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. |
Lei Zhou; Yang-Yang Zhang; Zuo-Jun Wang; Li-Lin Rao; Wei Wang; Shu Li; Xingshan Li; Zhu-Yuan Liang A scanpath analysis of the risky decision-making process Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 169–182, 2016. @article{Zhou2016, In the field of eye tracking, scanpath analysis can reflect the sequential and temporal properties of the cognitive process. However, the advantages of scanpath analysis have not yet been utilized in the study of risky decision making. We explored the methodological applicability of scanpath analysis to test models of risky decision making by analyzing published data from the eye-tracking studies of Su et al. (2013); Wang and Li (2012), and Sun, Rao, Zhou, and Li (2014). These studies used a proportion task, an outcome-matched presentation condition, and a multiple-play condition as the baseline for comparison with information search and processing in the risky decision-making condition. We found that (i) the similarity scores of the intra-conditions were significantly higher than those of the inter-condition; (ii) the scanpaths of the two conditions were separable; and (iii) based on an inspection of typical trials, the patterns of the scanpaths differed between the two conditions. These findings suggest that scanpath analysis is reliable and valid for examining the process of risky decision making. In line with the findings of the three original studies, our results indicate that risky decision making is unlikely to be based on a weighting and summing process, as hypothesized by the family of expectation models. The findings highlight a new methodological direction for research on decision making. |
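Scanpath similarity scores like those analyzed in the study above are commonly computed with string-edit methods: each fixation is coded as a letter for the area of interest (AOI) it lands in, and two scanpaths are scored by the number of insertions, deletions, and substitutions needed to turn one string into the other. A minimal sketch of that general approach (illustrative only; the original studies' exact similarity metric may differ):

```python
def edit_distance(a, b):
    """Levenshtein distance between two AOI strings, by dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a's first i letters
    for j in range(n + 1):
        d[0][j] = j  # insert all of b's first j letters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def similarity(a, b):
    """Normalized scanpath similarity in [0, 1]; 1 means identical scanpaths."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Each letter is one fixated AOI, in temporal order.
print(similarity("ABCD", "ABCD"))  # 1.0
print(similarity("ABCD", "ABDC"))  # 0.5
```

Intra-condition versus inter-condition comparisons of such scores, as in the study above, then indicate whether scanpaths from two conditions are separable.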
Peiyun Zhou; Kiel Christianson Auditory perceptual simulation: Simulating speech rates or accents? Journal Article In: Acta Psychologica, vol. 168, pp. 85–90, 2016. @article{Zhou2016b, When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) further demonstrates that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. |
Peiyun Zhou; Kiel Christianson I “hear” what you're “saying”: Auditory perceptual simulation, reading speed, and reading comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 5, pp. 972–995, 2016. @article{Zhou2016a, Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers' attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading. |
Kyoko Yoshida; Yasuhiro Go; Itaru Kushima; Atsushi Toyoda; Asao Fujiyama; Hiroo Imai; Nobuhito Saito; Atsushi Iriki; Norio Ozaki; Masaki Isoda Single-neuron and genetic correlates of autistic behavior in macaque Journal Article In: Science Advances, vol. 2, no. 9, pp. e1600558, 2016. @article{Yoshida2016, Atypical neurodevelopment in autism spectrum disorder is a mystery, defying explanation despite increasing attention. We report on a Japanese macaque that spontaneously exhibited autistic traits, namely, impaired social ability as well as restricted and repetitive behaviors, along with our single-neuron and genomic analyses. Its social ability was measured in a turn-taking task, where two monkeys monitor each other's actions for adaptive behavioral planning. In its brain, the medial frontal neurons responding to others' actions, abundant in the controls, were almost nonexistent. In its genes, whole-exome sequencing and copy number variation analyses identified rare coding variants linked to human neuropsychiatric disorders in 5-hydroxytryptamine (serotonin) receptor 2C (HTR2C) and adenosine triphosphate (ATP)–binding cassette subfamily A13 (ABCA13). This combination of systems neuroscience and cognitive genomics in macaques suggests a new, phenotype-to-genotype approach to studying mental disorders. |
Chen-Ping Yu; Justin T. Maxfield; Gregory J. Zelinsky Searching for category-consistent features: A computational approach to understanding visual category representation Journal Article In: Psychological Science, vol. 27, no. 6, pp. 870–884, 2016. @article{Yu2016a, This article introduces a generative model of category representation that uses computer vision methods to extract category-consistent features (CCFs) directly from images of category exemplars. The model was trained on 4,800 images of common objects, and CCFs were obtained for 68 categories spanning subordinate, basic, and superordinate levels in a category hierarchy. When participants searched for these same categories, targets cued at the subordinate level were preferentially fixated, but fixated targets were verified faster when they followed a basic-level cue. The subordinate-level advantage in guidance is explained by the number of target-category CCFs, a measure of category specificity that decreases with movement up the category hierarchy. The basic-level advantage in verification is explained by multiplying the number of CCFs by sibling distance, a measure of category distinctiveness. With this model, the visual representations of real-world object categories, each learned from the vast numbers of image exemplars accumulated throughout everyday experience, can finally be studied. |
Gongchen Yu; Baijie Xu; Yuchen Zhao; Beizhen Zhang; Mingpo Yang; Janis Y. Y. Kan; David M. Milstein; Dhushan Thevarajah; Michael C. Dorris Microsaccade direction reflects the economic value of potential saccade goals and predicts saccade choice Journal Article In: Journal of Neurophysiology, vol. 115, no. 2, pp. 741–751, 2016. @article{Yu2016b, Microsaccades are small-amplitude (typically <1°), ballistic eye movements that occur when attempting to fixate gaze. Initially thought to be generated randomly, it has recently been established that microsaccades are influenced by sensory stimuli, attentional processes, and certain cognitive states. Whether decision processes influence microsaccades, however, is unknown. Here, we adapted two classic economic tasks to examine whether microsaccades reflect evolving saccade decisions. Volitional saccade choices of monkey and human subjects provided a measure of the subjective value of targets. Importantly, analyses occurred during a period of complete darkness to minimize the known influence of sensory and attentional processes on microsaccades. As the time of saccadic choice approached, microsaccade direction became the following: 1) biased toward targets as a function of their subjective value and 2) predictive of upcoming, voluntary choice. Our results indicate that microsaccade direction is influenced by and is a reliable tell of evolving saccade decisions. Our results are consistent with dynamic decision processes within the midbrain superior colliculus; that is, microsaccade direction is influenced by the transition of activity toward caudal saccade regions associated with high saccade value and/or future saccade choice. |
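Microsaccades such as those studied above are conventionally detected with a velocity-threshold algorithm in the spirit of Engbert and Kliegl (2003): compute a smoothed eye velocity, set a threshold at a multiple of its median-based (robust) standard deviation, and keep runs of supra-threshold samples. A rough sketch under those assumptions (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection (after Engbert & Kliegl, 2003).

    x, y: gaze position traces in degrees; fs: sampling rate in Hz.
    Returns (start, end) sample indices of candidate microsaccades."""
    # Smoothed velocity via a 5-point moving difference: v[i] is proportional
    # to x[i+2] + x[i+1] - x[i-1] - x[i-2].
    vx = np.convolve(x, [1, 1, 0, -1, -1], "same") * fs / 6.0
    vy = np.convolve(y, [1, 1, 0, -1, -1], "same") * fs / 6.0
    # Robust (median-based) velocity SD, one threshold per axis.
    sdx = np.sqrt(np.median(vx**2) - np.median(vx) ** 2)
    sdy = np.sqrt(np.median(vy**2) - np.median(vy) ** 2)
    # Sample is supra-threshold if it falls outside the threshold ellipse.
    crit = (vx / (lam * sdx)) ** 2 + (vy / (lam * sdy)) ** 2 > 1
    # Group consecutive supra-threshold samples into events.
    events, start = [], None
    for i, above in enumerate(crit):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    return events
```

On a synthetic fixation trace (low-amplitude noise) with one small, fast gaze shift embedded, the detector returns an event spanning the shift.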
Lili Yu; Michael G. Cutter; Guoli Yan; Xuejun Bai; Yu Fu; Denis Drieghe; Simon P. Liversedge Word n + 2 preview effects in three-character Chinese idioms and phrases Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 9, pp. 1130–1149, 2016. @article{Yu2016c, Prior research using the boundary paradigm suggests that Chinese readers only process word n + 2 in the parafovea when word n + 1 is a single-character, high-frequency word. We attempted to replicate these findings (Experiment 1), and investigated whether greater n + 2 preview effects are observed when words n + 1 and n + 2 form an idiom rather than a phrase (Experiment 2). Experiment 1 replicated prior findings, although additional analyses of words n + 1 and n + 2 as a single region revealed significant preview effects regardless of word n + 1 frequency. In Experiment 2 there was a main effect of phrase type, such that idioms were read more quickly than phrases, and significant n + 2 preview effects. There was no interaction between these variables, suggesting that idioms are not parafoveally processed to a greater extent than phrases. These results suggest that n + 2 preview effects in Chinese occur under several circumstances. Factors influencing the observation of these effects are discussed. |
Wan-Yun Yu; Jie-Li Tsai Modulation of scene consistency and task demand on language-driven eye movements for audio-visual integration Journal Article In: Acta Psychologica, vol. 171, pp. 1–16, 2016. @article{Yu2016, Previous psycholinguistic studies have demonstrated that people tend to direct fixations toward the visual object to which spoken input refers during language comprehension. However, it is still unclear how the visual scene, especially the semantic consistency between object and background, affects the word-object mapping process during comprehension. Two visual world paradigm experiments were conducted to investigate how the scene consistency dynamically influenced the language-driven eye movements in a speech comprehension and a scene comprehension task. In each trial, participants listened to a spoken sentence while viewing a picture with two critical objects: one is the mentioned target object (e.g., tiger), which was embedded in either a consistent (e.g., field), inconsistent (e.g., sky) or blank background; the other is an unmentioned non-target object (e.g., eagle), which was always consistent with its background. The results showed that the fixation proportion of the inconsistent target was higher than the consistent target, and the task demand can affect the strength and the direction of the inconsistency effect before and after the target had been mentioned. In summary, the spoken language, scene-based knowledge and task demand were intertwined to determine eye movements during audio-visual integration for comprehension. |
Lei Yuan; David Uttal; Steven L. Franconeri Are categorical spatial relations encoded by shifting visual attention between objects? Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0163141, 2016. @article{Yuan2016, We argue that people compare values in graphs with a visual routine – attending to data values in an ordered pattern over time. Do these visual routines exist to manage capacity limitations in how many values can be encoded at once, or do they actually affect the relations that are extracted? We measured eye movements while people judged configurations of a two-bar graph based on size only (“[short tall] or [tall short]?”) and contrast only (“[light dark] or [dark light]?”). Participants exhibited visual routines in which they systematically attended to a specific feature (or “anchor point”) in the graph; in the size task, most participants inspected the taller bar first, and in the contrast task, most participants attended to the darker bar first. Participants then judged configurations that varied in both size and contrast (e.g., [short-light tall-dark]); however, only one dimension was task-relevant (varied between subjects). During this orthogonal task, participants overwhelmingly relied on the same anchor point used in the single-dimension version, but only for the task-relevant dimension (e.g., taller bar for the size-relevant task). These results suggest that visual routines are associated with specific graph interpretations. Responses were also faster when task-relevant and task-irrelevant anchor points appeared on the same object (congruent) than on different objects (incongruent). This interference from the task-irrelevant dimension suggests that top-down control may be necessary to extract relevant relations from graphs. The effect of visual routines on graph comprehension has implications for both science, technology, engineering, and mathematics pedagogy and graph design. |
Tania S. Zamuner; Charlotte Moore; Félix Desmeules-Trudel Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition Journal Article In: Journal of Experimental Child Psychology, vol. 152, pp. 136–148, 2016. @article{Zamuner2016, To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. |
Tania S. Zamuner; Elizabeth Morin-Lessard; Stephanie Strahm; Michael P. A. Page Spoken word recognition of novel words, either produced or only heard during learning Journal Article In: Journal of Memory and Language, vol. 89, pp. 55–67, 2016. @article{Zamuner2016a, Psycholinguistic models of spoken word production differ in how they conceptualize the relationship between lexical, phonological and output representations, making different predictions for the role of production in language acquisition and language processing. This work examines the impact of production on spoken word recognition of newly learned non-words. In Experiment 1, adults were trained on non-words with visual referents; during training, they produced half of the non-words, with the other half being heard-only. Using a visual world paradigm at test, eye tracking results indicated faster recognition of non-words that were produced compared with heard-only during training. In Experiment 2, non-words were correctly pronounced or mispronounced at test. Participants showed a different pattern of recognition for mispronunciation on non-words that were produced compared with heard-only during training. Together these results indicate that production affects the representations of newly learned words. |
Chuanli Zang; Yongsheng Wang; Xuejun Bai; Guoli Yan; Denis Drieghe; Simon P. Liversedge The use of probabilistic lexicality cues for word segmentation in Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 3, pp. 548–560, 2016. @article{Zang2016, In an eye-tracking experiment we examined whether Chinese readers were sensitive to information concerning how often a Chinese character appears as a single-character word versus the first character in a two-character word, and whether readers use this information to segment words and adjust the amount of parafoveal processing of subsequent characters during reading. Participants read sentences containing a two-character target word whose first character was more or less likely to be a single-character word. The boundary paradigm was used: the boundary appeared between the first character and the second character of the target word, and we manipulated whether readers saw an identity or a pseudocharacter preview of the second character of the target. Linear mixed-effects models revealed reduced preview benefit from the second character when the first character was more likely to be a single-character word. This suggests that Chinese readers use probabilistic combinatorial information online about the likelihood of a Chinese character being a single-character word or part of a two-character word to modulate the extent of parafoveal processing. |
Chuanli Zang; Manman Zhang; Xuejun Bai; Guoli Yan; Kevin B. Paterson; Simon P. Liversedge Effects of word frequency and visual complexity on eye movements of young and older Chinese readers Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 7, pp. 1409–1425, 2016. @article{Zang2016a, Research using alphabetic languages shows that, compared to young adults, older adults employ a risky reading strategy in which they are more likely to guess word identities and skip words to compensate for their slower processing of text. However, little is known about how ageing affects reading behaviour for naturally unspaced, logographic languages like Chinese. Accordingly, to assess the generality of age-related changes in reading strategy across different writing systems we undertook an eye movement investigation of adult age differences in Chinese reading. Participants read sentences containing a target word (a single Chinese character) that had a high or low frequency of usage and was constructed from either few or many character strokes, and so either visually simple or complex. Frequency and complexity produced similar patterns of influence for both age groups on skipping rates and fixation times for target words. Both groups therefore demonstrated sensitivity to these manipulations. But compared to the young adults, the older adults made more and longer fixations and more forward and backward eye movements overall. They also fixated the target words for longer, especially when these were visually complex. Crucially, the older adults skipped words less and made shorter progressive saccades. Therefore, in contrast with findings for alphabetic languages, older Chinese readers appear to use a careful reading strategy according to which they move their eyes cautiously along lines of text and skip words infrequently. We propose they use this more careful reading strategy to compensate for increased difficulty processing word boundaries in Chinese. |
Hassan Zanganeh Momtaz; Mohammad Reza Daliri Predicting the eye fixation locations in the gray scale images in the visual scenes with different semantic contents Journal Article In: Cognitive Neurodynamics, vol. 10, no. 1, pp. 31–47, 2016. @article{ZanganehMomtaz2016, In recent years, there has been considerable interest in visual attention models (saliency map of visual attention). These models can be used to predict eye fixation locations, and thus will have many applications in various fields, which leads to better performance in machine vision systems. Most of these models need to be improved because they are based on bottom-up computation that does not consider top-down image semantic contents and often does not match actual eye fixation locations. In this study, we recorded the eye movements (i.e., fixations) of fourteen individuals who viewed images consisting of natural (e.g., landscape, animal) and man-made (e.g., building, vehicle) scenes. We extracted the fixation locations of eye movements in two image categories. After extraction of the fixation areas (a patch around each fixation location), characteristics of these areas were evaluated as compared to non-fixation areas. The extracted features in each patch included the orientation and spatial frequency. After the feature extraction phase, different statistical classifiers were trained for prediction of eye fixation locations by these features. This study connects eye-tracking results to automatic prediction of saliency regions of the images. The results showed that it is possible to predict the eye fixation locations by using the image patches around subjects' fixation points. |
Theodoros P. Zanos; Patrick J. Mineault; Daniel Guitton; Christopher C. Pack Mechanisms of saccadic suppression in primate cortical area V4 Journal Article In: Journal of Neuroscience, vol. 36, no. 35, pp. 9227–9239, 2016. @article{Zanos2016, Psychophysical studies have shown that subjects are often unaware of visual stimuli presented around the time of an eye movement. This saccadic suppression is thought to be a mechanism for maintaining perceptual stability. The brain might accomplish saccadic suppression by reducing the gain of visual responses to specific stimuli or by simply suppressing firing uniformly for all stimuli. Moreover, the suppression might be identical across the visual field or concentrated at specific points. To evaluate these possibilities, we recorded from individual neurons in cortical area V4 of nonhuman primates trained to execute saccadic eye movements. We found that both modes of suppression were evident in the visual responses of these neurons and that the two modes showed different spatial and temporal profiles: while gain changes started earlier and were more widely distributed across visual space, nonspecific suppression was found more often in the peripheral visual field, after the completion of the saccade. Peripheral suppression was also associated with increased noise correlations and stronger local field potential oscillations in the α frequency band. This pattern of results suggests that saccadic suppression shares some of the circuitry responsible for allocating voluntary attention. SIGNIFICANCE STATEMENT We explore our surroundings by looking at things, but each eye movement that we make causes an abrupt shift of the visual input. Why doesn't the world look like a film recorded on a shaky camera? The answer in part is a brain mechanism called saccadic suppression, which reduces the responses of visual neurons around the time of each eye movement. Here we reveal several new properties of the underlying mechanisms. 
First, the suppression operates differently in the central and peripheral visual fields. Second, it appears to be controlled by oscillations in the local field potentials at frequencies traditionally associated with attention. These results suggest that saccadic suppression shares the brain circuits responsible for actively ignoring irrelevant stimuli. |
Tianyu Zeng; Liling Zheng; Lei Mo Shape representation of word was automatically activated in the encoding phase Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e165534, 2016. @article{Zeng2016, Theories of embodied language comprehension have proposed that language processing includes perception simulation and activation of sensorimotor representation. Previous studies have used a numerical priming paradigm to test the priming effect of semantic size, and the negative result showed that the sensorimotor representation has not been activated during the encoding phase. Considering that the size property is unstable, here we changed the target property to examine the priming effect of semantic shape using the same paradigm. The participants would see three different object names successively, and then they were asked to decide whether the shape of the second referent was more similar to the first one or the third one. In the eye-movement experiment, the encoding time showed a distance-priming effect: as the similarity of shapes between the first referent and the second referent increased, the encoding time of the second word gradually decreased. In the event-related potentials experiment, when the difference of shapes between the first referent and the second referent increased, the N400 amplitude became larger. These findings suggested that the shape information of a word was activated during the encoding phase, providing supportive evidence for the embodied theory of language comprehension. |
Alexandre Zenon; Sophie Devesse; Etienne Olivier Dopamine manipulation affects response vigor independently of opportunity cost Journal Article In: Journal of Neuroscience, vol. 36, no. 37, pp. 9516–9525, 2016. @article{Zenon2016, Dopamine is known to be involved in regulating effort investment in relation to reward, and the disruption of this mechanism is thought to be central in some pathological situations such as Parkinson's disease, addiction, and depression. According to an influential model, dopamine plays this role by encoding the opportunity cost, i.e., the average value of forfeited actions, which is an important parameter to take into account when making decisions about which action to undertake and how fast to execute it. We tested this hypothesis by asking healthy human participants to perform two effort-based decision-making tasks, following either placebo or levodopa intake in a double-blind, within-subject protocol. In the effort-constrained task, there was a trade-off between the amount of force exerted and the time spent in executing the task, such that investing more effort decreased the opportunity cost. In the time-constrained task, the effort duration was constant, but exerting more force allowed the subject to earn more substantial reward instead of saving time. Contrary to the model predictions, we found that levodopa caused an increase in the force exerted only in the time-constrained task, in which there was no trade-off between effort and opportunity cost. In addition, a computational model showed that dopamine manipulation left the opportunity cost factor unaffected but altered the ratio between the effort cost and reinforcement value. These findings suggest that dopamine does not represent the opportunity cost but rather modulates how much effort a given reward is worth. |
Alexandre Zénon; Yann Duclos; Romain Carron; Tatiana Witjas; Christelle Baunez; Jean Régis; Jean Philippe Azulay; Peter Brown; Alexandre Eusebio The human subthalamic nucleus encodes the subjective value of reward and the cost of effort during decision-making Journal Article In: Brain, vol. 139, no. 6, pp. 1830–1843, 2016. @article{Zenon2016a, Adaptive behaviour entails the capacity to select actions as a function of their energy cost and expected value, and the disruption of this faculty is now viewed as a possible cause of the symptoms of Parkinson's disease. Indirect evidence points to the involvement of the subthalamic nucleus–the most common target for deep brain stimulation in Parkinson's disease–in cost-benefit computation. However, this putative function appears at odds with the current view that the subthalamic nucleus is important for adjusting behaviour to conflict. Here we tested these contrasting hypotheses by recording the neuronal activity of the subthalamic nucleus of patients with Parkinson's disease during an effort-based decision task. Local field potentials were recorded from the subthalamic nucleus of 12 patients with advanced Parkinson's disease (mean age 63.8 ± 6.8 years; mean disease duration 9.4 ± 2.5 years) both OFF and ON levodopa while they had to decide whether to engage in an effort task based on the level of effort required and the value of the reward promised in return. The data were analysed using generalized linear mixed models and cluster-based permutation methods. Behaviourally, the probability of trial acceptance increased with the reward value and decreased with the required effort level. Dopamine replacement therapy increased the rate of acceptance for efforts associated with low rewards. When recording the subthalamic nucleus activity, we found a clear neural response to both reward and effort cues in the 1-10 Hz range. 
In addition, these responses were informative of the subjective value of reward and level of effort rather than their actual quantities, such that they were predictive of the participants' decisions. OFF levodopa, this link with acceptance was weakened. Finally, we found that these responses did not index conflict, as they did not vary as a function of the distance from indifference in the acceptance decision. These findings show that low-frequency neuronal activity in the subthalamic nucleus may encode the information required to make cost-benefit comparisons, rather than signal conflict. The link between these neural responses and behaviour was stronger under dopamine replacement therapy. Our findings are consistent with the view that Parkinson's disease symptoms may be caused by a disruption of the processes involved in balancing the value of actions with their associated effort cost. |
Sandra A. Zerkle; Jennifer E. Arnold Discourse attention during utterance planning affects referential form choice Journal Article In: Linguistics Vanguard, vol. 2, pp. 1–16, 2016. @article{Zerkle2016, An unstudied source of linguistic variation is the use of discourse-appropriate language. Sometimes individuals use linguistic devices (anaphors, connectors) to connect utterances to the discourse context, and sometimes not. We asked how this variation is related to utterance planning, using eyetracking with a narrative production task. Participants saw picture pairs depicting two events. They heard a description of the first event (Context picture), then added to the story by describing the second event (Target picture). We found that one group of participants produced utterances that connected with the discourse context (Context-Users), using pronouns/zeros and connectors (and/then) as appropriate, while another group consistently used definite NP descriptions and virtually no connectors (Context-Ignorers). Eyetracking measures reflected utterance planning within a discourse context: all participants shifted their attention from the Context picture to the Target picture throughout a trial. We also observed group differences: Context-Users directed their attention in a more systematic way than Context-Ignorers. At trial onset, Context-Users looked more at the Context picture than Context-Ignorers. Right before speaking, they looked more at the Target picture than Context-Ignorers. The Context-Users also had shorter latency to begin speaking. This study provides a first step toward characterizing individual differences in terms of utterance planning. |
Paul Zerr; Katharine N. Thakkar; Siarhei Uzunbajakau; Stefan Van der Stigchel Error compensation in random vector double step saccades with and without global adaptation Journal Article In: Vision Research, vol. 127, pp. 141–151, 2016. @article{Zerr2016, In saccade sequences without visual feedback endpoint errors pose a problem for subsequent saccades. Accurate error compensation has previously been demonstrated in double step saccades (DSS) and is thought to rely on a copy of the saccade motor vector. However, these studies typically use fixed target vectors on each trial, calling into question the generalizability of the findings due to the high stimulus predictability. We present a random walk DSS paradigm (random target vector amplitudes and directions) to provide a more complete, realistic and generalizable description of error compensation in saccade sequences. We regressed the vector between the endpoint of the second saccade and the endpoint of a hypothetical second saccade that does not take first saccade error into account on the ideal compensation vector. This provides a direct and complete estimation of error compensation in DSS. We observed error compensation with varying stimulus displays that was comparable to previous findings. We also employed this paradigm to extend experiments that showed accurate compensation for systematic undershoots after specific-vector saccade adaptation. Utilizing the random walk paradigm for saccade adaptation by Rolfs et al. (2010) together with our random walk DSS paradigm we now also demonstrate transfer of adaptation from reactive to memory guided saccades for global saccade adaptation. We developed a new, generalizable DSS paradigm with unpredictable stimuli and successfully employed it to verify, replicate and extend previous findings, demonstrating that endpoint errors are compensated for saccades in all directions and variable amplitudes. |
Jeremy M. Wolfe; Mia K. Markey; Gezheng Wen; Trafton Drew; Avigael Aizenman; Tamara Miner Haygood Computational assessment of visual search strategies in volumetric medical images Journal Article In: Journal of Medical Imaging, vol. 3, no. 1, pp. 1–12, 2016. @article{Wolfe2016, When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: “drilling” (restrict eye movements to a small region of the image while quickly scrolling through slices), or “scanning” (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either “drilling” or “scanning” when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus “drilling” may be more efficient than “scanning.” |
Elizabeth Wonnacott; Holly S. S. L. Joseph; James S. Adelman; Kate Nation Is children's reading “good enough”? Links between online processing and comprehension as children read syntactically ambiguous sentences Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 5, pp. 855–879, 2016. @article{Wonnacott2016, We monitored 8- and 10-year-old children's eye movements as they read sentences containing a temporary syntactic ambiguity to obtain a detailed record of their online processing. Children showed the classic garden-path effect in online processing. Their reading was disrupted following disambiguation, relative to control sentences containing a comma to block the ambiguity, although the disruption occurred somewhat later than would be expected for mature readers. We also asked children questions to probe their comprehension of the syntactic ambiguity offline. They made more errors following ambiguous sentences than following control sentences, demonstrating that the initial incorrect parse of the garden-path sentence influenced offline comprehension. These findings are consistent with "good enough" processing effects seen in adults. While faster reading times and more regressions were generally associated with better comprehension, spending longer reading the question predicted comprehension success specifically in the ambiguous condition. This suggests that reading the question prompted children to reconstruct the sentence and engage in some form of processing, which in turn increased the likelihood of comprehension success. Older children were more sensitive to the syntactic function of commas, and, overall, they were faster and more accurate than younger children. |
Jeffrey S. Wood; Matthew Haigh; Andrew J. Stewart “This Isn't a Promise, It's a Threat” Journal Article In: Experimental Psychology, vol. 63, no. 2, pp. 89–97, 2016. @article{Wood2016, Participants had their eye movements recorded as they read vignettes containing implied promises and threats. We observed a reading time penalty when participants read the word “threat” when it anaphorically referred to an implied promise. There was no such penalty when the word “promise” was used to refer to an implied threat. On a later measure of processing we again found a reading time penalty when the word “threat” was used to refer to a promise, but also when the word “promise” was used to refer to a threat. These results suggest that anaphoric processing of such expressions is driven initially by sensitivity to the semantic scope differences of “threats” versus “promises.” A threat can be understood as a type of promise, but a promise cannot be understood as a type of threat. However, this effect was short lived; readers were ultimately sensitive to mismatched meaning, regardless of speech act performed. |
Helen Wray; Jeffrey S. Wood; Matthew Haigh; Andrew J. Stewart Threats may be negative promises (but warnings are more than negative tips) Journal Article In: Journal of Cognitive Psychology, vol. 28, no. 5, pp. 593–600, 2016. @article{Wray2016, In everyday situations conditional promises, threats, tips, and warnings are commonplace. Previous research has reported disruption to eye movements during reading when conditional promises are produced by someone who does not have control over the conditional outcome event, but no such disruption for the processing of conditional tips. In the present paper, we examine how readers process conditional threats and warnings. We compare one account which views conditional threats and warnings simply as promises and tips with negative outcomes, with an alternative account which highlights their broader pragmatic differences. In an eye-tracking experiment we find evidence suggesting that, in processing terms, while threats operate like negative promises, warnings are more than negative tips. |
Daw-An Wu; Patrick Cavanagh Where are you looking? Pseudogaze in afterimages Journal Article In: Journal of Vision, vol. 16, no. 5, pp. 1–10, 2016. @article{Wu2016, How do we know where we are looking? A frequent assumption is that the subjective experience of our direction of gaze is assigned to the location in the world that falls on our fovea. However, we find that observers can shift their subjective direction of gaze among different nonfoveal points in an afterimage. Observers were asked to look directly at different corners of a diamond-shaped afterimage. When the requested corner was 3.5° in the periphery, the observer often reported that the image moved away in the direction of the attempted gaze shift. However, when the corner was at 1.75° eccentricity, most reported successfully fixating at the point. Eye-tracking data revealed systematic drift during the subjective fixations on peripheral locations. For example, when observers reported looking directly at a point above the fovea, their eyes were often drifting steadily upwards. We then asked observers to make a saccade from a subjectively fixated, nonfoveal point to another point in the afterimage, 7° directly below their fovea. The observers consistently reported making appropriately diagonal saccades, but the eye movement traces only occasionally followed the perceived oblique direction. These results suggest that the perceived direction of gaze can be assigned flexibly to an attended point near the fovea. This may be how the visual world acquires its stability during fixation of an object, despite the drifts and microsaccades that are normal characteristics of visual fixation. |
Esther X. W. Wu; Fook-Kee Chua; Shih-Cheng Yen Saccade plan overlap and cancellation during free viewing Journal Article In: Vision Research, vol. 127, pp. 122–131, 2016. @article{Wu2016a, In the current study, we examined how the saccadic system responds when visual information changes dynamically in our environment. Previous studies, using the double-step task, have shown that (a) saccade plans could overlap, such that saccade preparation to an object started even while the saccade preparation to another object was ongoing, and (b) saccade plans could be cancelled before they were completed. In these studies, saccade targets were restricted to a few, experimenter-defined locations. Here, we examined whether saccade plan overlap and cancellation mechanisms could be observed in free-viewing conditions. For each trial, we constructed sets of two images, each containing five objects. All objects had unique positions. Image 1 was presented for several fixations, before Image 2 was presented during a fixation, presumably while a saccade plan to an object in Image 1 was ongoing. There were two crucial findings: First, the saccade immediately following the transition was sometimes executed towards objects in Image 2, and not an object in Image 1, suggesting that the earlier saccade plan to an Image 1 object had been cancelled. Second, analysis of the temporal data also suggested that preparation of the first post-transition saccade started before an earlier saccade plan to an Image 1 object was executed, implying that saccade plans overlapped. |
Yingying Wu; Xiaohong Yang; Yufang Yang Eye movement evidence for hierarchy effects on memory representation of discourses Journal Article In: PLoS ONE, vol. 11, no. 1, pp. e0147313, 2016. @article{Wu2016b, In this study, we applied the text-change paradigm to investigate whether and how discourse hierarchy affected the memory representation of a discourse. Three kinds of three-sentence discourses were constructed. In the hierarchy-high condition and the hierarchy-low condition, the three sentences of the discourses were hierarchically organized and the last sentence of each discourse was located at the high level and the low level of the discourse hierarchy, respectively. In the linear condition, the three sentences of the discourses were linearly organized. Critical words were always located at the last sentence of the discourses. These discourses were successively presented twice and the critical words were changed to semantically related words in the second presentation. The results showed that during the early processing stage, the critical words were read for longer times when they were changed in the hierarchy-high and the linear conditions, but not in the hierarchy-low condition. During the late processing stage, the changed-critical words were again found to induce longer reading times only when they were in the hierarchy-high condition. These results suggest that words in a discourse have better memory representation when they are located at the higher rather than at the lower level of the discourse hierarchy. Global discourse hierarchy is established as an important factor in constructing the mental representation of a discourse. |
Andreas Wutz; Jan Drewes; David Melcher Nonretinotopic perception of orientation: Temporal integration of basic features operates in object-based coordinates Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–15, 2016. @article{Wutz2016, Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three experiments supporting nonretinotopic feature integration even for the most paradigmatic example of retinotopically-defined features: orientation. We presented observers with a repeated series of T-P displays in which the perceived rotation of Gabor gratings indicates processing in either retinotopic or object-based coordinates. In Experiment 1, the frequency of perceived retinotopic rotations decreased exponentially for longer interstimulus intervals (ISIs) between T-P display frames, with object-based percepts dominating after about 150-250 ms. In a second experiment, we show that motion and rotation judgments depend on the perception of a moving object during the T-P display ISIs rather than only on temporal factors. In Experiment 3, we cued the observers' attentional state either toward a retinotopic or object motion-based reference frame and then tracked both the observers' eye position and the time course of the perceptual bias while viewing identical T-P display sequences. Overall, we report novel evidence for spatiotemporal integration of even basic visual features such as orientation in nonretinotopic coordinates, in order to support perceptual constancy across self- and object motion. |
Yi Xia; Yusuke Morimoto; Yasuki Noguchi Retrospective triggering of conscious perception by an interstimulus interaction Journal Article In: Journal of Vision, vol. 16, pp. 1–8, 2016. @article{Xia2016, Attention facilitates conscious perception of a visual stimulus at an attended location. Interestingly, a recent study (using the Posner spatial-cueing task) reported that attention facilitated conscious perception even when it was cued after a stimulus was gone (postcued-attention or retroperception effect). Here, we show that this effect can be induced without any contribution of attention. Contrary to previous situations, we fixed a position of a target (Gabor patch) and cue (luminance change of a circle encompassing the target) across trials so that subjects always could allocate their full attention to the target position. The cue (luminance change) improved objective and subjective visibility of the nearby target even when it was given ~200 ms after the target's offset. This retrospective improvement was diminished when a shape of the cue was changed from a circle to a dot pattern, suggesting that the improvement emerged from a visual interaction (combinations of shapes) between the circular cue and target. Those results indicated that a local visual interaction between the target and cue is sufficient to trigger consciousness of the target, revealing a new type of retroperception effect mediated by sensory (nonattentional) mechanisms. |
Jue Xie; Camillo Padoa-Schioppa Neuronal remapping and circuit persistence in economic decisions Journal Article In: Nature Neuroscience, vol. 19, no. 6, pp. 855–861, 2016. @article{Xie2016, The orbitofrontal cortex plays a central role in good-based economic decisions. When subjects make choices, neurons in this region represent the identities and values of offered and chosen goods. Notably, choices in different behavioral contexts may involve a potentially infinite variety of goods. Thus a fundamental question concerns the stability versus flexibility of the decision circuit. Here we show in rhesus monkeys that neurons encoding the identity or the subjective value of particular goods in a given context 'remap' and become associated with different goods when the context changes. At the same time, the overall organization of the decision circuit and the function of individual cells remain stable across contexts. In particular, two neurons supporting the same decision in one context also support the same decision in different contexts. These results demonstrate how the same neural circuit can underlie economic decisions involving a large variety of goods. |
Ying-Zi Xiong; Xin-Yu Xie; Cong Yu Location and direction specificity in motion direction learning associated with a single-level method of constant stimuli Journal Article In: Vision Research, vol. 119, pp. 9–15, 2016. @article{Xiong2016, Recent studies reported significantly less location specificity in motion direction learning than in previous classical studies. The latter performed training with the method of constant stimuli containing a single level of direction difference. In contrast, the former used staircase methods that varied the direction difference trial by trial. We suspect that extensive practice with a single direction difference could allow an observer to use some subtle local cues for direction discrimination. Such local cues may be unavailable at a new stimulus location, leading to higher location specificity. To test this hypothesis, we jittered slightly the directions of a stimulus pair by the same amount while keeping the direction difference constant, so as to disturb the potential local cues. We observed significantly more transfer of learning to untrained locations. The local cue effects may also explain the recent controversies regarding the finding that foveal motion direction learning becomes significantly more transferrable to a new direction with TPE (training-plus-exposure) training. One specific study by Zili Liu and collaborators that challenges this finding also used a single-level direction difference for training. We first replicated their results. But we found that if the directions of the stimulus pair were again jittered while the direction difference was kept constant, motion direction learning transferred significantly more to an orthogonal direction with TPE training. Our results thus demonstrate the importance of using appropriate psychophysical methods in training to reduce local-cue related specificity in perceptual learning. |
Chuyao Yan; Tao He; Raymond M. Klein; Zhiguo Wang Predictive remapping gives rise to environmental inhibition of return Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 6, pp. 1860–1866, 2016. @article{Yan2016a, Neurons in various brain regions predictively respond to stimuli that will be brought to their receptive fields by an impending eye movement. This neural mechanism, known as predictive remapping, has been suggested to underlie spatial constancy. Inhibition of return (IOR) is a bias against recently attended locations. The present study examined whether predictive remapping is a mechanism underlying IOR effects observed in environmental coordinates. The participant made saccades to a peripheral location after an IOR effect had been elicited by an onset cue and discriminated a target presented around the time of saccade onset. Immediately before the required saccade, IOR emerged at the retinal locus that would be brought to the cued location. A second task in which the participant maintained fixation during the entire trial ruled out the possibility that this IOR effect was simply the spillover of IOR from the cued location. These findings, for the first time, provide direct behavioral evidence that predictive remapping is a mechanism underlying environmental IOR. |
Ming Yan; Reinhold Kliegl CarPrice versus CarpRice: Word Boundary Ambiguity Influences Saccade Target Selection During the Reading of Chinese Sentences Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 11, pp. 1832–1838, 2016. @article{Yan2016, As a contribution to a theoretical debate about the degree of high-level influences on saccade targeting during sentence reading, we investigated eye movements during the reading of structurally ambiguous Chinese character strings and examined whether parafoveal word segmentation could influence saccade-target selection. As expected, ambiguous strings took longer to process. More critically, there were theoretically relevant interactions between ambiguity and launch site when first-fixation location and saccade amplitude served as dependent variables: Ambiguous strings in the parafovea triggered longer saccades and more rightward fixations for close launch sites than unambiguous ones; the reverse result was obtained for far launch sites. These crossover interactions indicate that parafoveal word segmentation influences saccade generation in Chinese and provide support of the hypothesis that high-level information can be involved in the decision about where to fixate next. |
Scott Cheng Hsin Yang; Máté Lengyel; Daniel M. Wolpert Active sensing in the categorization of visual patterns Journal Article In: eLife, vol. 5, pp. 1–22, 2016. @article{Yang2016, Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. |
Tao Yao; Stefan Treue; B. Suresh Krishna An attention-sensitive memory trace in macaque MT following saccadic eye movements Journal Article In: PLoS Biology, vol. 14, no. 2, pp. e1002390, 2016. @article{Yao2016, We experience a visually stable world despite frequent retinal image displacements induced by eye, head, and body movements. The neural mechanisms underlying this remain unclear. One mechanism that may contribute is transsaccadic remapping, in which the responses of some neurons in various attentional, oculomotor, and visual brain areas appear to anticipate the consequences of saccades. The functional role of transsaccadic remapping is actively debated, and many of its key properties remain unknown. Here, recording from two monkeys trained to make a saccade while directing attention to one of two spatial locations, we show that neurons in the middle temporal area (MT), a key locus in the motion-processing pathway of humans and macaques, show a form of transsaccadic remapping called a memory trace. The memory trace in MT neurons is enhanced by the allocation of top-down spatial attention. Our data provide the first demonstration, to our knowledge, of the influence of top-down attention on the memory trace anywhere in the brain. We find evidence only for a small and transient effect of motion direction on the memory trace (and in only one of two monkeys), arguing against a role for MT in the theoretically critical yet empirically contentious phenomenon of spatiotopic feature-comparison and adaptation transfer across saccades. Our data support the hypothesis that transsaccadic remapping represents the shift of attentional pointers in a retinotopic map, so that relevant locations can be tracked and rapidly processed across saccades. Our results resolve important issues concerning the perisaccadic representation of visual stimuli in the dorsal stream and demonstrate a significant role for top-down attention in modulating this representation. |
Sang-Hoon Yeo; David W. Franklin; Daniel M. Wolpert When optimal feedback control is not enough: Feedforward strategies are required for optimal control with active sensing Journal Article In: PLoS Computational Biology, vol. 12, no. 12, pp. e1005190, 2016. @article{Yeo2016, Movement planning is thought to be primarily determined by motor costs such as inaccuracy and effort. Solving for the optimal plan that minimizes these costs typically leads to specifying a time-varying feedback controller which both generates the movement and can optimally correct for errors that arise within a movement. However, the quality of the sensory feedback during a movement can depend substantially on the generated movement. We show that by incorporating such state-dependent sensory feedback, the optimal solution incorporates active sensing and is no longer a pure feedback process but includes a significant feedforward component. To examine whether people take into account such state-dependency in sensory feedback we asked people to make movements in which we controlled the reliability of sensory feedback. We made the visibility of the hand state-dependent, such that the visibility was proportional to the component of hand velocity in a particular direction. Subjects gradually adapted to such a sensory perturbation by making curved hand movements. In particular, they appeared to control the late visibility of the movement matching predictions of the optimal controller with state-dependent sensory noise. Our results show that trajectory planning is not only sensitive to motor costs but also takes sensory costs into account, and they argue for optimal control of movement in which feedforward commands can play a significant role. |
H. Henny Yeung; Stephanie Denison; Scott P. Johnson Infants' looking to surprising events: When eye-tracking reveals more than looking time Journal Article In: PLoS ONE, vol. 11, no. 12, pp. e0164277, 2016. @article{Yeung2016, Research on infants' reasoning abilities often relies on looking times, which are longer to surprising and unexpected visual scenes compared to unsurprising and expected ones. Few researchers have examined more precise visual scanning patterns in these scenes, and so, here, we recorded 8- to 11-month-olds' gaze with an eye tracker as we presented a sampling event whose outcome was either surprising, neutral, or unsurprising: A red (or yellow) ball was drawn from one of three visible containers populated 0%, 50%, or 100% with identically colored balls. When measuring looking time to the whole scene, infants were insensitive to the likelihood of the sampling event, replicating failures in similar paradigms. Nevertheless, a new analysis of visual scanning showed that infants did spend more time fixating specific areas-of-interest as a function of the event likelihood. The drawn ball and its associated container attracted more looking than the other containers in the 0% condition, but this pattern was weaker in the 50% condition, and even less strong in the 100% condition. Results suggest that measuring where infants look may be more sensitive than simply measuring how much they look at the whole scene. The advantages of eye tracking measures over traditional looking measures are discussed. |
Ulrike Zimmer; Margit Höfler; Karl Koschutnig; Anja Ischebeck Neuronal interactions in areas of spatial attention reflect avoidance of disgust, but orienting to danger Journal Article In: NeuroImage, vol. 134, pp. 94–104, 2016. @article{Zimmer2016, For survival, it is necessary to attend quickly towards dangerous objects, but to turn away from something that is disgusting. We tested whether fear and disgust sounds direct spatial attention differently. Using fMRI, a sound cue (disgust, fear or neutral) was presented to the left or right ear. The cue was followed by a visual target (a small arrow) which was located on the same (valid) or opposite (invalid) side as the cue. Participants were required to decide whether the arrow pointed up- or downwards while ignoring the sound cue. Behaviorally, responses were faster for invalid compared to valid targets when cued by disgust, whereas the opposite pattern was observed for targets after fearful and neutral sound cues. During target presentation, activity in the visual cortex and IPL increased for targets invalidly cued with disgust, but for targets validly cued with fear which indicated a general modulation of activation due to attention. For the TPJ, an interaction in the opposite direction was observed, consistent with its role in detecting targets at unattended positions and in relocating attention. As a whole our results indicate that a disgusting sound directs spatial attention away from its location, in contrast to fearful and neutral sounds. |
Eckart Zimmermann Spatiotopic buildup of saccade target representation depends on target size Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 11, 2016. @article{Zimmermann2016, How we maintain spatial stability across saccade eye movements is an open question in visual neuroscience. A phenomenon that has received much attention in the field is our seemingly poor ability to discriminate the direction of transsaccadic target displacements. We have recently shown that discrimination performance increases the longer the saccade target has been previewed before saccade execution (Zimmermann, Morrone, & Burr, 2013). We have argued that the spatial representation of briefly presented stimuli is weak but that a strong representation is needed for transsaccadic, i.e., spatiotopic localization. Another factor that modulates the representation of saccade targets is stimulus size. The representation of spatially extended targets is more noisy than that of point-like targets. Here, I show that the increase in transsaccadic displacement discrimination as a function of saccade target preview duration depends on target size. This effect was found for spatially extended targets—thus replicating the results of Zimmermann et al. (2013)—but not for point-like targets. An analysis of saccade parameters revealed that the constant error for reaching the saccade target was bigger for spatially extended than for point-like targets, consistent with weaker representation of bigger targets. These results show that transsaccadic displacement discrimination becomes accurate when saccade targets are spatially extended and presented longer, thus more closely resembling stimuli in real-world environments. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Adaptation to size affects saccades with long but not short latencies Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 2, 2016. @article{Zimmermann2016a, Maintained exposure to a specific stimulus property—such as size, color, or motion—induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location. |
Eckart Zimmermann; Ralph Weidner; R. O. Abdollahi; Gereon R. Fink Spatiotopic adaptation in visual areas Journal Article In: Journal of Neuroscience, vol. 36, no. 37, pp. 9526–9534, 2016. @article{Zimmermann2016b, The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. The existence of neural mechanisms processing spatiotopic information is indispensable for a successful interaction with the external world. However, how the brain handles spatiotopic information remains a matter of debate. We here combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by a prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by the brief presentation of a probe. This procedure allowed dissociating adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation in both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas. |
Lesya Y. Ganushchak; Yiya Chen Incrementality in planning of speech during speaking and reading aloud: Evidence from eye-tracking Journal Article In: Frontiers in Psychology, vol. 7, pp. 33, 2016. @article{Ganushchak2016, Speaking is an incremental process where planning and articulation interleave. While incrementality has been studied in reading and online speech production separately, it has not been directly compared within one investigation. This study set out to compare the extent of planning incrementality in online sentence formulation versus reading aloud and how discourse context may constrain the planning scope of utterance preparation differently in these two modes of speech planning. Two eye-tracking experiments are reported: participants either described pictures of transitive events (Experiment 1) or read aloud the written descriptions of those events (Experiment 2). In both experiments, the information status of an object character was manipulated in the discourse preceding each picture or sentence. In the Literal condition, participants heard a story where the object character was literally mentioned (e.g., fly). In the No Mention condition, stories did not literally mention or prime the object character depicted in the picture or written in the sentence. The target response was expected to have the same structure and content in all conditions (The frog catches the fly). During naming, the results showed shorter speech onset latencies in the Literal condition than in the No Mention condition. However, no significant differences in gaze durations were found. In contrast, during reading, there were no significant differences in speech onset latencies but there were significantly longer gaze durations to the target picture/word in the Literal than in the No Mention condition. Our results show that planning is more incremental during reading than during naming and that discourse context can be helpful during speaking but may hinder reading aloud. Taken together, our results suggest that on-line planning of a response is affected by both linguistic and non-linguistic factors. |
Ray Garza; Roberto R. Heredia; Anna B. Cieślicka Male and female perception of physical attractiveness: An eye movement study Journal Article In: Evolutionary Psychology, vol. 14, no. 1, pp. 1–16, 2016. @article{Garza2016, Waist-to-hip ratio (WHR) and breast size are morphological traits that are associated with female attractiveness. Previous studies using line drawings of women have shown that men across cultures rate low WHRs (0.6 and 0.7) as most attractive. In this study, we used additional viewing measurements (i.e., first fixation duration and visual regressions) to measure visual attention and record how long participants first focused on the female body and whether they regressed back to an area of interest. Additionally, we manipulated skin tone to determine whether they preferred light- or dark-skinned women. In two eye tracking experiments, participants rated the attractiveness of female nude images varying in WHR (0.5–0.9), breast size, and skin tone. We measured first fixation duration, gaze duration, and total time. The overall results of both studies revealed that visual attention fell mostly on the face, the breasts, and the midriff of the female body, supporting the evolutionary view that reproductively relevant regions of the female body are important to female attractiveness. Because the stimuli varied in skin tone and the participants were mainly Hispanic of Mexican American descent, the findings from these studies also support a preference for low WHRs and reproductively relevant regions of the female body. |
Josselin Gautier; Harold E. Bedell; John Siderov; Sarah J. Waugh Monocular microsaccades are visual-task related Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–16, 2016. @article{Gautier2016, During visual fixation, we constantly move our eyes. These microscopic eye movements are composed of tremor, drift, and microsaccades. Early studies concluded that microsaccades, like larger saccades, are binocular and conjugate, as expected from Hering's law of equal innervation. Here, we document the existence of monocular microsaccades during both fixation and a discrimination task, reporting the location of the gap in a foveal, low-contrast letter C. Monocular microsaccades differ in frequency, amplitude, and peak velocity from binocular microsaccades. Our analyses show that these differences are robust to different velocity and duration criteria that have been used previously to identify microsaccades. Also, the frequency of monocular microsaccades differs systematically according to the task: monocular microsaccades occur more frequently during fixation than discrimination, the opposite of their binocular equivalents. However, during discrimination, monocular microsaccades occur more often around the discrimination threshold, particularly for each subject's dominant eye and in case of successful discrimination. We suggest that monocular microsaccades play a functional role in the production of fine corrections of eye position and vergence during demanding visual tasks. |
Anjith George; Aurobinda Routray A score level fusion method for eye movement biometrics Journal Article In: Pattern Recognition Letters, vol. 82, pp. 207–215, 2016. @article{George2016, This paper proposes a novel framework for the use of eye movement patterns for biometric applications. Eye movements contain abundant information about cognitive brain functions, neural pathways, etc. In the proposed method, eye movement data is classified into fixations and saccades. Features extracted from fixations and saccades are used by a Gaussian Radial Basis Function Network (GRBFN) based method for biometric authentication. A score fusion approach is adopted to classify the data in the output layer. In the evaluation stage, the algorithm has been tested using two types of stimuli: random dot following on a screen and text reading. The results indicate the strength of eye movement pattern as a biometric modality. The algorithm has been evaluated on BioEye 2015 database and found to outperform all the other methods. Eye movements are generated by a complex oculomotor plant which is very hard to spoof by mechanical replicas. Use of eye movement dynamics along with iris recognition technology may lead to a robust counterfeit-resistant person identification system. |
Franziska Geringswald; Eleonora Porracin; Stefan Pollmann Impairment of visual memory for objects in natural scenes by simulated central scotomata Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–12, 2016. @article{Geringswald2016, Because of the close link between foveal vision and the spatial deployment of attention, typically only objects that have been foveated during scene exploration may form detailed and persistent memory representations. In a recent study on patients suffering from age-related macular degeneration, however, we found surprisingly accurate visual long-term memory for objects in scenes. Exploration patterns showed that the patients had learned to rereference saccade targets to an extrafoveal retinal location. This rereferencing may allow use of an extrafoveal location as a focus of attention for efficient object encoding into long-term memory. Here, we tested this hypothesis in normal-sighted observers with gaze-contingent central scotoma simulations. As these observers were inexperienced in scene exploration with central vision loss and had not developed saccadic rereferencing, we expected deficits in long-term memory for objects. We used the same change detection task as in our patient study, probing sensitivity to object changes after a period of free scene exploration. Change detection performance was significantly reduced for two types of scotoma simulation diminishing foveal and parafoveal vision—a visible gray disc and a more subtle image warping—compared with unimpaired controls, confirming our hypothesis. The impact of a smaller scotoma covering specifically foveal vision was less distinct, leading to a marginally significant decrease of long-term memory performance compared with controls. We conclude that attentive encoding of objects is deficient when central vision is lost as long as successful saccadic rereferencing has not yet developed. |
Hanna Gertz; Maximilian Hilger; Mathias Hegele; Katja Fiehler Violating instructed human agency: An fMRI study on ocular tracking of biological and nonbiological motion stimuli Journal Article In: NeuroImage, vol. 138, pp. 109–122, 2016. @article{Gertz2016, Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that the instructed agency biases the selection of predictive models on the movement trajectory of the dot motion. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception–action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial–frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model causes costs as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information. |
Fatema F. Ghasia; George Wilmot; Anwar Ahmed; Aasef G. Shaikh Strabismus and micro-opsoclonus in Machado-Joseph disease Journal Article In: Cerebellum, vol. 15, no. 4, pp. 491–497, 2016. @article{Ghasia2016, We describe novel deficits of gaze holding and ocular alignment in patients with spinocerebellar ataxia type 3, also known as Machado-Joseph disease (MJD). Twelve MJD patients were studied. Clinical assessments and quantitative ocular alignment measures were performed. Eye movements were quantitatively assessed with corneal curvature tracker and video-oculography. Strabismus was seen in ten MJD patients. Four patients had mild to moderate intermittent exotropia, three had esotropia, one had skew deviation, one had hypotropia, and one patient had moderate exophoria. Three strabismic patients had V-pattern. Near point of convergence was normal in two out of three patients with exotropia. Gaze holding deficits were also common. Eight patients had gaze-evoked nystagmus, and five had micro-opsoclonus. Other ocular motor deficits included saccadic dysmetria in eight patients, whereas all had saccadic interruption of smooth pursuit. Strabismus and micro-opsoclonus are common in MJD. Coexisting ophthalmoplegia or vergence abnormalities in our patients with exotropia, who comprised 50% of the cohort, could not explain the type of strabismus in our patients. Therefore, it is possible that involvement of the brainstem, the deep cerebellar nuclei, and the superior cerebellar peduncle is the physiological basis for exotropia in these patients. Micro-opsoclonus was also common in MJD. Brainstem and deep cerebellar nuclei lesions also explain micro-opsoclonus, whereas brainstem deficits can account for the slow saccades seen in our patients with MJD. |
Tobias Heed; Jenny Backhaus; Brigitte Röder; Stephanie Badde Disentangling the external reference frames relevant to tactile localization Journal Article In: PLoS ONE, vol. 11, no. 7, pp. e0158829, 2016. @article{Heed2016, Different reference frames appear to be relevant for tactile spatial coding. When participants give temporal order judgments (TOJ) of two tactile stimuli, one on each hand, performance declines when the hands are crossed. This effect is attributed to a conflict between anatomical and external location codes: hand crossing places the anatomically right hand into the left side of external space. However, hand crossing alone does not specify the anchor of the external reference frame, such as gaze, trunk, or the stimulated limb. Experiments that used explicit localization responses, such as pointing to tactile stimuli rather than crossing manipulations, have consistently implicated gaze-centered coding for touch. To test whether crossing effects can be explained by gaze-centered coding alone, participants made TOJ while the position of the hands was manipulated relative to gaze and trunk. The two hands either lay on different sides of space relative to gaze or trunk, or they both lay on one side of the respective space. In the latter posture, one hand was on its "regular side of space" despite hand crossing, thus reducing overall conflict between anatomical and external codes. TOJ crossing effects were significantly reduced when the hands were both located on the same side of space relative to gaze, indicating gaze-centered coding. Evidence for trunk-centered coding was tentative, with an effect in reaction time but not in accuracy. These results link paradigms that use explicit localization and TOJ, and corroborate the relevance of gaze-related coding for touch. Yet, gaze- and trunk-centered coding did not account for the total size of crossing effects, suggesting that tactile localization relies on additional, possibly limb-centered, reference frames. Thus, tactile location appears to be estimated by integrating multiple anatomical and external reference frames. |
Jessica Heeman; Tanja C. W. Nijboer; Nathan Van der Stoep; Jan Theeuwes; Stefan Van der Stigchel Oculomotor interference of bimodal distractors Journal Article In: Vision Research, vol. 123, pp. 46–55, 2016. @article{Heeman2016, When executing an eye movement to a target location, the presence of an irrelevant distracting stimulus can influence the saccade metrics and latency. The present study investigated the influence of distractors of different sensory modalities (i.e. auditory, visual and audiovisual) which were presented at various distances (i.e. close or remote) from a visual target. The interfering effects of a bimodal distractor were more pronounced in the spatial domain than in the temporal domain. The results indicate that the direction of interference depended on the spatial layout of the visual scene. The close bimodal distractor caused the saccade endpoint and saccade trajectory to deviate towards the distractor whereas the remote bimodal distractor caused a deviation away from the distractor. Furthermore, saccade averaging and trajectory deviation evoked by a bimodal distractor was larger compared to the effects evoked by a unimodal distractor. This indicates that a bimodal distractor evoked stronger spatial oculomotor competition compared to a unimodal distractor and that the direction of the interference depended on the distance between the target and the distractor. Together, these findings suggest that the oculomotor vector to irrelevant bimodal input is enhanced and that the interference by multisensory input is stronger compared to unisensory input. |
Karin Heidlmayr; Karine Dore-Mazars; Xavier Aparico; Frederic Isel In: PLoS ONE, vol. 11, no. 11, pp. e0165029, 2016. @article{Heidlmayr2016, In the present electroencephalographical study, we asked to what extent executive control processes are shared by both the language and motor domain. The rationale was to examine whether executive control processes whose efficiency is reinforced by the frequent use of a second language can lead to a benefit in the control of eye movements, i.e. a non-linguistic activity. For this purpose, we administered to 19 highly proficient late French-German bilingual participants and to a control group of 20 French monolingual participants an antisaccade task, i.e. a specific motor task involving control. In this task, an automatic saccade has to be suppressed while a voluntary eye movement in the opposite direction has to be carried out. Here, our main hypothesis is that an advantage in the antisaccade task should be observed in the bilinguals if some properties of the control processes are shared between linguistic and motor domains. ERP data revealed clear differences between bilinguals and monolinguals. Critically, we showed an increased N2 effect size in bilinguals, thought to reflect better efficiency to monitor conflict, combined with reduced effect sizes on markers reflecting inhibitory control, i.e. the cue-locked positivity, the target-locked P3 and the saccade-locked presaccadic positivity (PSP). Moreover, effective connectivity analyses (dynamic causal modelling; DCM) on the neuronal source level indicated that bilinguals rely more strongly on ACC-driven control while monolinguals rely on PFC-driven control. Taken together, our combined ERP and effective connectivity findings may reflect a dynamic interplay between strengthened conflict monitoring and subsequently more efficient inhibition in bilinguals. Finally, L2 proficiency and immersion experience constitute relevant factors of the language background that predict efficiency of inhibition. To conclude, the present study provided ERP and effective connectivity evidence for domain-general executive control involvement in handling multiple language use, leading to a control advantage in bilingualism. |
Sarah R. Heilbronner; Benjamin Y. Hayden The description-experience gap in risky choice in nonhuman primates Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 593–600, 2016. @article{Heilbronner2016, Risk attitudes in humans depend on the format used to present the gamble: we are more risk-averse for common gambles in the gains domain whose properties are described to us verbally than for those whose properties we learned about solely through experience. This difference, which constitutes part of the description-experience gap, is important, because it highlights the role of knowledge acquisition in decision-making. The reasons for the gap remain obscure, but could depend upon uniquely human cognitive abilities, such as those associated with language. Thus, the gap may or may not extend to nonhuman animals. For this study, rhesus monkeys performed a novel task in which the properties of some gambles were explicitly cued (described), whereas others were learned through previous choices (experienced). Our monkeys displayed a description-experience gap. Overall, monkeys were more risk-seeking for experienced than for described gambles. This difference was observed for a range of gamble probabilities (from 20% to 80% likelihood of payoff), indicating that it is not limited to low probability events. These results suggest that the description-experience gap does not depend on uniquely human cognitive abilities, such as those associated with language, and support the idea that epistemic influences on risk attitudes are evolutionarily ancient. |
Klaartje Heinen; Laura Sagliano; Michela Candini; Masud Husain; Marinella Cappelletti; Nahid Zokaei Cathodal transcranial direct current stimulation over posterior parietal cortex enhances distinct aspects of visual working memory Journal Article In: Neuropsychologia, vol. 87, pp. 35–42, 2016. @article{Heinen2016, In this study, we investigated the effects of tDCS over the posterior parietal cortex (PPC) during a visual working memory (WM) task, which probes different sources of response error underlying the precision of WM recall. In two separate experiments, we demonstrated that tDCS enhanced WM precision when applied bilaterally over the PPC, independent of electrode configuration. In a third experiment, we demonstrated with a unilateral electrode configuration over the right PPC that only cathodal tDCS enhanced WM precision, and only when baseline performance was low. Looking at the effects on underlying sources of error, we found that cathodal stimulation enhanced the probability of correct target response across all participants by reducing feature misbinding. Only for low-baseline performers did cathodal stimulation also reduce variability of recall. We conclude that cathodal but not anodal tDCS can improve WM precision by preventing feature misbinding and thereby enhancing attentional selection. For low-baseline performers, cathodal tDCS also protects the memory trace. Furthermore, stimulation over bilateral PPC is more potent than unilateral cathodal tDCS in enhancing general WM precision. |
Stephen J. Heinen; Elena Potapchuk; Scott N. J. Watamaniuk A foveal target increases catch-up saccade frequency during smooth pursuit Journal Article In: Journal of Neurophysiology, vol. 115, no. 3, pp. 1220–1227, 2016. @article{Heinen2016a, Images that move rapidly across the retina of the human eye blur because the retina has sluggish temporal dynamics. Voluntary smooth pursuit eye movements are modeled as matching object velocity to minimize retinal motion and prevent retinal blurring. However, "catch-up" saccades that are ubiquitous during pursuit interrupt it and disrupt clear vision. But catch-up saccades may not be a common feature of ocular pursuit, because their existence has been documented with a small moving spot, the classic pursuit stimulus, which is a weak motion stimulus that may poorly emulate larger pursuit objects. We found that spot pursuit does not generalize to that of larger objects. Observers pursued a spot or a larger virtual object with or without a superimposed spot target. Single-spot targets produced lower pursuit acceleration than larger objects. Critically, more saccadic intrusions occurred when stimuli had a central dot, even when position and velocity errors were equated, suggesting that catch-up saccades result from pursuing a single, small object or a feature on a large one. To determine what differentiates a large object from a small one, we progressively shrank the featureless virtual object and found that catch-up saccade frequency was highest when it fit in the fovea. The results suggest that pursuit of a small target or an object feature recruits a saccade mechanism that does not compensate for a weak motion signal; rather, the target compels foveation. Furthermore, catch-up saccades are likely generated by neural circuitry typically used to foveate small objects or features. |
Daphna Heller; Christopher Parisien; Suzanne Stevenson Perspective-taking behavior as the probabilistic weighing of multiple domains Journal Article In: Cognition, vol. 149, pp. 104–120, 2016. @article{Heller2016, Our starting point is the apparently contradictory results in the psycholinguistic literature regarding whether, when interpreting a definite referring expression, listeners process relative to the common ground from the earliest moments of processing. We propose that referring expressions are interpreted relative neither solely to the common ground nor solely to one's private (or egocentric) knowledge, but rather reflect the simultaneous integration of the two perspectives. We implement this proposal in a Bayesian model of reference resolution, focusing on the model's predictions for two prior studies: Keysar, Barr, Balin, and Brauner (2000) and Heller, Grodner and Tanenhaus (2008). We test the model's predictions in a visual-world eye-tracking experiment, demonstrating that the original results cannot simply be attributed to different perspective-taking strategies, and showing how they can arise from the same perspective-taking behavior. |
Andrea Helo; Pia Rämä; Sebastian Pannasch; David Meary Eye movement patterns and visual attention during scene viewing in 3- to 12-month-olds Journal Article In: Visual Neuroscience, vol. 33, pp. e014, 2016. @article{Helo2016, Recently, two attentional modes have been associated with specific eye movement patterns during scene processing. The ambient mode, characterized by short fixations and long saccades during early scene inspection, is associated with localization of objects. The focal mode, characterized by longer fixations, is associated with more detailed object feature processing during the later inspection phase. The aim of the present study was to investigate the development of these attentional modes. More specifically, we examined whether indications of ambient and focal attention modes are similar in infants and adults. We therefore measured eye movements in 3- to 12-month-old infants while they explored visual scenes. Our results show that both adults and 12-month-olds had shorter fixation durations within the first 1.5 s of scene viewing compared with later time phases (>2.5 s), indicating that there was a transition from ambient to focal processing during image inspection. In younger infants, fixation durations between the two viewing phases did not differ. Our results suggest that by the end of the first year of life, infants have developed adult-like scene viewing behavior. The evidence for the existence of distinct attentional processing mechanisms during early infancy furthermore underlines the importance of the concept of the two modes. |
John M. Henderson; Wonil Choi; Matthew W. Lowder; Fernanda Ferreira Language structure in the brain: A fixation-related fMRI study of syntactic surprisal in reading Journal Article In: NeuroImage, vol. 132, pp. 293–300, 2016. @article{Henderson2016, How is syntactic analysis implemented by the human brain during language comprehension? The current study combined methods from computational linguistics, eyetracking, and fMRI to address this question. Subjects read passages of text presented as paragraphs while their eye movements were recorded in an MRI scanner. We parsed the text using a probabilistic context-free grammar to isolate syntactic difficulty. Syntactic difficulty was quantified as syntactic surprisal, which is related to the expectedness of a given word's syntactic category given its preceding context. We compared words with high and low syntactic surprisal values that were equated for length, frequency, and lexical surprisal, and used fixation-related (FIRE) fMRI to measure neural activity associated with syntactic surprisal for each fixated word. We observed greater neural activity for high than low syntactic surprisal in two predicted cortical regions previously identified with syntax: left inferior frontal gyrus (IFG) and less robustly, left anterior superior temporal lobe (ATL). These results support the hypothesis that left IFG and ATL play a central role in syntactic analysis during language comprehension. More generally, the results suggest a broader cortical network associated with syntactic prediction that includes increased activity in bilateral IFG and insula, as well as fusiform and right lingual gyri. |
Roberto R. Heredia; Anna B. Cieślicka Metaphoric reference: An eye movement analysis of Spanish-English and English-Spanish bilingual readers Journal Article In: Frontiers in Psychology, vol. 7, pp. 439, 2016. @article{Heredia2016, This study examines the processing of metaphoric reference by bilingual speakers. English dominant, Spanish dominant, and balanced bilinguals read passages in English biasing either a figurative (e.g., describing a weak and soft fighter that always lost and everyone hated) or a literal (e.g., describing a donut and bakery shop that made delicious pastries) meaning of a critical metaphoric referential description (e.g., 'creampuff'). We recorded the eye movements (first fixation, gaze duration, go-past duration, and total reading time) for the critical region, which was a metaphoric referential description in each passage. The results revealed that literal vs. figurative meaning activation was modulated by language dominance, where Spanish dominant bilinguals were more likely to access the literal meaning, and English dominant and balanced bilinguals had access to both the literal and figurative meanings of the metaphoric referential description. Overall, there was a general tendency for the literal interpretation to be more active, as revealed by shorter reading times for the metaphoric reference used literally, in comparison to when it was used figuratively. Results are interpreted in terms of the Graded Salience Hypothesis (Giora, 2002, 2003) and the Literal Salience Model (Cieślicka, 2006, 2015). |
Ehab W. Hermena; Simon P. Liversedge; Denis Drieghe Parafoveal processing of Arabic diacritical marks Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 12, pp. 2021–2038, 2016. @article{Hermena2016, Diacritics are glyph-like marks on letters that convey vowel information in Arabic, thus allowing for accurate pronunciation and disambiguation of homographs. For skilled readers, diacritics are usually omitted except when their omission causes ambiguity. Undiacritized homographs are very common in Arabic and are predominantly heterophones (where each meaning sounds different), with 1 version more common (dominant) than the others (subordinate). In this study the authors investigated parafoveal processing of diacritics during reading. They presented native readers with heterophonic homographs embedded in sentences with diacritization that instantiated either dominant or subordinate pronunciations of the homographs. Using the boundary paradigm, they presented previews of these words carrying either: identical diacritization to the target; inaccurate diacritization, such that if the target had dominant diacritization, the preview contained subordinate diacritization, and vice versa; or no diacritics. The results showed that readers processed the identity of diacritics parafoveally, such that inaccurate previews of the diacritics resulted in inflated fixation durations, particularly for fixations originating at close launch sites. Moreover, our results clearly indicate that readers' expectation for dominant or subordinate diacritization patterns influences their parafoveal and foveal processing of diacritics. Specifically, a perceived absence of diacritics (either in no-diacritics previews, or because the eyes were too far away to process the presence of diacritics) induced an expectation for the dominant pronunciation, whereas the perceived presence of diacritics induced an expectation for the subordinate meaning. |
Frouke Hermens; Robin Walker The influence of social and symbolic cues on observers' gaze behaviour Journal Article In: British Journal of Psychology, vol. 107, no. 3, pp. 484–502, 2016. @article{Hermens2016, Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how cues compare when they are presented away from fixation and embedded in natural scenes. We here compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on eye movements of observers under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and the arrow cue. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found for free viewing compared to the memory task. |
Alyssa S. Hess; Andrew J. Wismer; Corey J. Bohil; Mark B. Neider On the hunt: Searching for poorly defined camouflaged targets Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0152502, 2016. @article{Hess2016, As camouflaged targets share visual characteristics with the environment within which they are embedded, searchers rarely have access to a perfect visual template of such targets. Instead, they must rely on less specific representations to guide search. Although search for camouflaged and non-specified targets have both received attention in the literature, to date they have not been explored in a combined context. Here we introduce a new paradigm for characterizing behavior during search for camouflaged targets in natural scenes, while also exploring how the fidelity of the target template affects search processes. Search scenes were created from forest images, with the target being a distortion (of varied size) of the image at a random location. In Experiment 1 a preview of the target was provided; in Experiment 2 there was no preview. No differences were found between experiments on nearly all measures. Generally, reaction times and accuracy improved with familiarity on the task (more so for small targets). Analysis of eye movements indicated that performance benefits were related to improvements in both Search and Target Verification time. Combined, our data suggest that search for camouflaged targets can be improved over a short time-scale, even when targets are poorly defined. |
Philipp N. Hesse; Katja Fiehler; Frank Bremmer SNARC effect in different effectors Journal Article In: Perception, vol. 45, no. 1-2, pp. 180–195, 2016. @article{Hesse2016, The SNARC (spatial numerical association of response codes) effect, indicating that subjects react faster to the left for small numbers and to the right for large numbers, is used as evidence for the idea that humans use space to organize number representations. Previous studies compared the SNARC effect across sensory modalities within participants and concluded modality independence. So far, it is unknown what sensory-to-motor mappings are involved in generating the SNARC effect and whether these mappings are identical for different effectors within subjects. Hence, we tested whether the SNARC effect is effector specific. Participants performed an auditory parity judgment task and responded with three different effectors: finger (button release), eyes (saccades), and arm (pointing). The SNARC effect occurred in each effector but varied in strength across the effectors. Across subjects, we found a significant correlation of SNARC strength for finger and arm responses suggesting the use of a shared sensory-to-motor mapping. SNARC strength did not correlate, however, between finger and eyes or arm and eyes. An additional statistical analysis based on conditional probabilities provided further evidence for SNARC-effector specificity. Taken together, our results suggest that the sensory-to-motor mapping is not as tight as it would be expected if the SNARC effect was effector independent. |
Béryl Hilberink-Schulpen; Ulrike Nederstigt; Frank Meurs; Emmie Alem Does the use of a foreign language influence attention and genre-specific viewing patterns for job advertisements? An eye-tracking study Journal Article In: Information Processing and Management, vol. 52, no. 6, pp. 1018–1030, 2016. @article{HilberinkSchulpen2016, The aim of this online experiment was to find evidence for both the alleged attention-getting function of the use of L2 English in job advertisements and for a possible genre-specific viewing pattern for job advertisements. A mixed-design eye-tracking experiment among 30 native speakers of Dutch, who saw all-Dutch and mixed Dutch–English job advertisements, tested whether the use of English as a foreign language in Dutch ads changed the viewing pattern compared to all-Dutch job advertisements. That is, it investigated whether the use of a foreign language attracted more attention (in terms of first fixation, number and duration of fixations, and returned views), and altered the genre-specific viewing pattern for job ads. Overall, no evidence for the attention-getting ability of foreign language use in job ads was found. On the contrary, English used in the company information seemed to have a deterring effect. Support was found for a genre-specific viewing pattern for job ads, which, however, was not altered by the use of a foreign language. Our results suggest that the use of English is not necessarily a good option to attract attention. Findings for genre-specific viewing patterns suggest that makers of job ads should make the job description as attractive as possible, since this is the first element viewed. This is the first online study to investigate the effect of language choice on attention in job ads and the viewing patterns specific to this ad genre. |
Matthew D. Hilchey; Deniz Dohmen; Nathan Crowder; Raymond M. Klein When is inhibition of return input- or output-based? It depends on how you look at it Journal Article In: Canadian Journal of Experimental Psychology, vol. 70, no. 4, pp. 325–334, 2016. @article{Hilchey2016, Two important diagnostics have been used to infer whether the effect of inhibition of return, when preceded by a saccade, is primarily upon input (i.e., attentional/perceptual level) or output (i.e., response/decision level) processes. Data from antisaccade paradigms involving luminance targets in peripheral vision suggest input effects whereas data from spatially compatible manual responses to centrally presented arrow targets suggest output effects. Here, we combine these diagnostics to resolve the discrepancy. In separate conditions participants made a pro- or antisaccade to a peripheral stimulus. Upon returning gaze to the original fixation, left and right manual responses were made to left- and right-pointing arrows at fixation, respectively. The primary objective of the prosaccade condition was to determine whether an eye movement toward a visual stimulus that was not associated with a manual localization response would bias spatially compatible manual responses against the prior saccade vector. Manual responses were slowest in the direction of the prior saccade, consistent with an output-based attribution (e.g., Posner, Rafal, Choate, & Vaughan, 1985). The primary objective of the antisaccade condition was to determine whether an eye movement away from a visual stimulus would also bias subsequent manual responses. No apparent response bias was detected, consistent with an input-based attribution (e.g., Fecteau, Au, Armstrong, & Munoz, 2004). Collectively, the findings indicate that there are two dissociable forms of inhibition, depending on saccadic response demands. Converging evidence from other paradigms is discussed. |
Matthew D. Hilchey; Jay Pratt; John Christie Placeholders dissociate two forms of inhibition of return Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 2, pp. 360–371, 2016. @article{Hilchey2016a, Decades of research using Posner's classic spatial cueing paradigm have uncovered at least two forms of inhibition of return (IOR) in the aftermath of an exogenous, peripheral orienting cue. One prominent dissociation concerns the role of covert and overt orienting in generating IOR effects that relate to perception- and action-oriented processes, respectively. Another prominent dissociation concerns the role of covert and overt orienting in generating IOR effects that depend on object- and space-based representation, respectively. Our objective was to evaluate whether these dichotomies are functionally equivalent by manipulating placeholder object presence in the cueing paradigm. By discouraging eye movements throughout, Experiments 1A and 1B validated a perception-oriented form of IOR that depended critically on placeholders. Experiment 2A demonstrated that IOR was robust without placeholders when eye movements went to the cue and back to fixation before the manual response target. In Experiment 2B, we replicated Experiment 2A's procedures except we discouraged eye movements. IOR was observed, albeit only weakly and significantly diminished relative to when eye movements were involved. We conclude that action-oriented IOR is robust against placeholders but that the magnitude of perception-oriented IOR is critically sensitive to placeholder presence when unwanted oculomotor activity can be ruled out. |
Michael Hanke; Nico Adelhöfer; Daniel Kottke; Vittorio Iacovella; Ayan Sengupta; Falko R. Kaule; Roland Nigbur; Alexander Q. Waite; Florian Baumgartner; Jörg Stadler A studyforrest extension, simultaneous fMRI and eye gaze recordings during prolonged natural stimulation Journal Article In: Scientific Data, vol. 3, pp. 160092, 2016. @article{Hanke2016, Here we present an update of the studyforrest (http://studyforrest.org) dataset that complements the previously released functional magnetic resonance imaging (fMRI) data for natural language processing with a new two-hour 3 Tesla fMRI acquisition while 15 of the original participants were shown an audio-visual version of the stimulus motion picture. We demonstrate with two validation analyses that these new data support modeling specific properties of the complex natural stimulus, as well as a substantial within-subject BOLD response congruency in brain areas related to the processing of auditory inputs, speech, and narrative when compared to the existing fMRI data for audio-only stimulation. In addition, we provide participants' eye gaze location as recorded simultaneously with fMRI, and an additional sample of 15 control participants whose eye gaze trajectories for the entire movie were recorded in a lab setting, to enable studies on attentional processes and comparative investigations on the potential impact of the stimulation setting on these processes. |
Nina M. Hanning; Donatas Jonikaitis; Heiner Deubel; Martin Szinte Oculomotor selection underlies feature retention in visual working memory Journal Article In: Journal of Neurophysiology, vol. 115, no. 2, pp. 1071–1076, 2016. @article{Hanning2016, Oculomotor selection, spatial task relevance and visual working memory (WM) are described as three processes highly intertwined and sustained by similar cortical structures. However, as task relevant locations always constitute potential saccade targets, no study so far has been able to distinguish between oculomotor selection and spatial task relevance. Here, we designed an experiment that allowed us to dissociate in humans the contribution of task relevance, oculomotor selection and oculomotor execution to the retention of feature representations in WM. We report that task relevance and oculomotor selection lead to dissociable effects on feature WM maintenance. In a first task, in which an object's location was encoded as a saccade target, its feature representations were successfully maintained in WM, while they declined at non-saccade target locations. Likewise, we observed a similar WM benefit at the target of saccades that were prepared but never executed. In a second task, when an object's location was marked as task relevant but constituted a non-saccade target (a location to avoid), feature representations maintained at that location did not benefit. Combined, our results demonstrate that oculomotor selection is consistently associated with WM, whereas task relevance is not. This provides evidence for an overlapping circuitry serving saccade target selection and feature-based WM, which can be dissociated from processes encoding task relevant locations. |
Jesse A. Harris Processing let alone coordination in silent reading Journal Article In: Lingua, vol. 169, pp. 70–94, 2016. @article{Harris2016, Processing research on coordination indicates that simpler conjuncts are preferred over more complex ones, and that positing ellipsis structure in the second conjunct is taxing to process when a simpler non-ellipsis structure exists. The present study investigates let alone coordination, which is argued to require clausal ellipsis in the second conjunct. It is proposed that the processor always projects a clausal structure for the second conjunct for the ellipsis, obviating a general preference for a less complex conjunct. Experiment 1 consists of several sentence-completion questionnaires testing whether a DP or VP conjunct is preferred in let alone structures as in John doesn't like Mary, let alone (Sue | love her). The results found a bias towards VP remnants that was weakly affected by syntactic placement of the focus particle even, as well as by prior context. Experiment 2 examined the effect of remnant type on eye movements during silent reading, revealing only distinct processing patterns, rather than major processing penalties, for different remnant types, and a general facilitation when even was present to signal upcoming scalar contrast. |
Ben M. Harvey; Serge O. Dumoulin Visual motion transforms visual space representations similarly throughout the human visual hierarchy Journal Article In: NeuroImage, vol. 127, pp. 173–185, 2016. @article{Harvey2016, Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7 T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. |
Alexander Niklas Hausler; Sergio Oroz Artigas; Peter Trautner; Bernd Weber Gain- and loss-related brain activation are associated with information search differences in risky gambles: An fMRI and eye-tracking study Journal Article In: eNeuro, vol. 3, no. 5, pp. 1–13, 2016. @article{Hausler2016, People differ in the way they approach and handle choices with unsure outcomes. In this study, we demonstrate that individual differences in the neural processing of gains and losses relates to attentional differences in the way individuals search for information in gambles. Fifty subjects participated in two independent experiments. Participants first completed an fMRI experiment involving financial gains and losses. Subsequently, they performed an eye-tracking experiment on binary choices between risky gambles, each displaying monetary outcomes and their respective probabilities. We find that individual differences in gain and loss processing relate to attention distribution. Individuals with a stronger reaction to gains in the ventromedial prefrontal cortex paid more attention to monetary amounts, while a stronger reaction in the ventral striatum to losses was correlated with an increased attention to probabilities. Reaction in the posterior cingulate cortex to losses was also found to correlate with an increased attention to probabilities. Our data show that individual differences in brain activity and differences in information search processes are closely linked. |
Jarkko Hautala; Otto Loberg; Piia Astikainen; Lauri Nummenmaa; Jari K. Hietanen Effects of conversation content on viewing dyadic conversations Journal Article In: Journal of Eye Movement Research, vol. 9, no. 7, pp. 1–12, 2016. @article{Hautala2016, People typically follow conversations closely with their gaze. We asked whether this viewing is influenced by what is actually said in the conversation and by the viewer's psychological condition. We recorded the eye movements of healthy (N = 16) and depressed (N = 25) participants while they were viewing video clips. Each video showed two people, each speaking one line of dialogue about socio-emotionally important (i.e., personal) or unimportant topics (matter-of-fact). Between the spoken lines, the viewers made more saccadic shifts between the discussants, and looked more at the second speaker, in personal vs. matter-of-fact conversations. Higher depression scores were correlated with less looking at the currently speaking discussant. We conclude that subtle social attention dynamics can be detected from eye movements and that these dynamics are sensitive to the observer's psychological condition, such as depression. |
R. A. Hayes; Michael Walsh Dickey; Tessa Warren Looking for a location: Dissociated effects of event-related plausibility and verb–argument information on predictive processing in aphasia Journal Article In: American Journal of Speech-Language Pathology, vol. 25, no. 3, pp. S758–S775, 2016. @article{Hayes2016a, PURPOSE: This study examined the influence of verb-argument information and event-related plausibility on prediction of upcoming event locations in people with aphasia, as well as older and younger neurotypical adults. It investigated how these types of information interact during anticipatory processing and how the ability to take advantage of the different types of information is affected by aphasia. METHOD: This study used a modified visual-world task to examine eye movements and offline photo selection. Twelve adults with aphasia (aged 54-82 years) as well as 44 young adults (aged 18-31 years) and 18 older adults (aged 50-71 years) participated. RESULTS: Neurotypical adults used verb argument status and plausibility information to guide both eye gaze (a measure of anticipatory processing) and image selection (a measure of ultimate interpretation). Argument status did not affect the behavior of people with aphasia in either measure. There was only limited evidence of interaction between these two factors in eye gaze data. CONCLUSIONS: Both event-related plausibility and verb-based argument status contributed to anticipatory processing of upcoming event locations among younger and older neurotypical adults. However, event-related likelihood had a much larger role in the performance of people with aphasia than did verb-based knowledge regarding argument structure. |
Taylor R. Hayes; Alexander A. Petrov Mapping and correcting the influence of gaze position on pupil size measurements Journal Article In: Behavior Research Methods, vol. 48, no. 2, pp. 510–527, 2016. @article{Hayes2016, Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cognitive literature, researchers often underestimate the methodological challenges associated with controlling for confounds that can result in misinterpretation of their data. One serious confound that is often not properly controlled is pupil foreshortening error (PFE): the foreshortening of the pupil image as the eye rotates away from the camera. Here we systematically map PFE using an artificial eye model and then apply a geometric model correction. Three artificial eyes with different fixed pupil sizes were used to systematically measure changes in pupil size as a function of gaze position with a desktop EyeLink 1000 tracker. A grid-based map of pupil measurements was recorded with each artificial eye across three experimental layouts of the eye-tracking camera and display. Large, systematic deviations in pupil size were observed across all nine maps. The measured PFE was corrected by a geometric model that expressed the foreshortening of the pupil area as a function of the cosine of the angle between the eye-to-camera axis and the eye-to-stimulus axis. The model reduced the root mean squared error of pupil measurements by 82.5% when the model parameters were pre-set to the physical layout dimensions, and by 97.5% when they were optimized to fit the empirical error surface. |
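The cosine relationship described in the Hayes and Petrov (2016) abstract can be sketched in a few lines. This is a minimal illustration of the geometric idea only, assuming measured pupil area scales with cos(θ) between the eye-to-camera and eye-to-stimulus axes; the function and argument names are illustrative and not taken from the authors' materials, which also include fitted layout parameters not modeled here.

```python
import math

def corrected_pupil_area(measured_area, eye_to_camera, eye_to_stimulus):
    """Invert pupil foreshortening error (PFE): the measured pupil area
    shrinks with the cosine of the angle between the eye-to-camera axis
    and the eye-to-stimulus (gaze) axis, so dividing by that cosine
    recovers an estimate of the true area. Vectors are 3-tuples."""
    dot = sum(a * b for a, b in zip(eye_to_camera, eye_to_stimulus))
    norm = (math.sqrt(sum(a * a for a in eye_to_camera))
            * math.sqrt(sum(b * b for b in eye_to_stimulus)))
    cos_theta = dot / norm
    # measured_area ~ true_area * cos(theta)
    return measured_area / cos_theta

# Example: a 60-degree gaze offset halves the apparent area
# (cos 60 = 0.5), so a measured area of 10 corrects back to 20.
```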
Peter C. Gordon; Renske S. Hoedemaker Effective scheduling of looking and talking during rapid automatized naming Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 5, pp. 742–760, 2016. @article{Gordon2016, Rapid automatized naming (RAN) is strongly related to literacy gains in developing readers, reading disabilities, and reading ability in children and adults. Because successful RAN performance depends on the close coordination of a number of abilities, it is unclear what specific skills drive this RAN-reading relationship. The current study used concurrent recordings of young adult participants' vocalizations and eye movements during the RAN task to assess how individual variation in RAN performance depends on the coordination of visual and vocal processes. Results showed that fast RAN times are facilitated by having the eyes 1 or more items ahead of the current vocalization, as long as the eyes do not get so far ahead of the voice as to require a regressive eye movement to an earlier item. These data suggest that optimizing RAN performance is a problem of scheduling eye movements and vocalization given memory constraints and the efficiency of encoding and articulatory control. Both RAN completion time (conventionally used to indicate RAN performance) and eye-voice relations predicted some aspects of participants' eye movements on a separate sentence reading task. However, eye-voice relations predicted additional features of first-pass reading that were not predicted by RAN completion time. This shows that measurement of eye-voice patterns can identify important aspects of individual variation in reading that are not identified by the standard measure of RAN performance. We argue that RAN performance predicts reading ability because both tasks entail challenges of scheduling cognitive and linguistic processes that operate simultaneously on multiple linguistic inputs. |
Martin Gorges; Hans Peter Müller; Dorothée Lulé; LANDSCAPE Consortium; Elmar H. Pinkhardt; Albert C. Ludolph; Jan Kassubek The association between alterations of eye movement control and cerebral intrinsic functional connectivity in Parkinson's disease Journal Article In: Brain Imaging and Behavior, vol. 10, no. 1, pp. 79–91, 2016. @article{Gorges2016, Patients with Parkinson's disease (PD) present with eye movement disturbances that accompany the cardinal motor symptoms. Previous studies have consistently found evidence that large-scale functional networks are critically involved in eye movement control. We challenged the hypothesis that altered eye movement control in patients with PD is closely related to alterations of whole-brain functional connectivity in association with the neurodegenerative process. Saccadic and pursuit eye movements by video-oculography and 'resting-state' functional MRI (3 Tesla) were recorded from 53 subjects, i.e. 31 patients with PD and 22 matched healthy controls. Video-oculographically, a broad spectrum of eye movement impairments was demonstrated in PD patients vs. controls, including interrupted smooth pursuit, hypometric saccades, and a high distractibility in anti-saccades. Significant correlations between altered oculomotor parameters and functional connectivity measures were observed, i.e. the worse the oculomotor performance was, the more the regional functional connectivity in cortical, limbic, thalamic, cerebellar, and brainstem areas was decreased. Remarkably, decreased connectivity between major nodes of the default mode network was tightly correlated with the prevalence of saccadic intrusions as a measure for distractibility. In conclusion, dysfunctional eye movement control in PD seems to be primarily associated with (cortical) executive deficits, rather than being related to the ponto-cerebellar circuits or the oculomotor brainstem nuclei. Worsened eye movement performance together with the potential pathophysiological substrate of decreased intrinsic functional connectivity in predominantly oculomotor-associated cerebral functional networks may constitute a behavioral marker in PD. |
Dan J. Graham; Christina A. Roberto In: Health Education and Behavior, vol. 43, no. 4, pp. 389–398, 2016. @article{Graham2016, Background. The U.S. Food and Drug Administration (FDA) has proposed modifying the Nutrition Facts Label (NFL) on food packages to increase consumer attention to this resource and to promote healthier dietary choices. Aims. The present study sought to determine whether the proposed NFL changes will affect consumer attention to the NFL or purchase intentions. Method. This study compared purchase intentions (yes/no responses to “would you purchase this food?” for 64 products) and attention to NFLs (measured via high-speed eye-tracking camera) among 155 young adults randomly assigned to view products with existing versus modified NFLs. Attention to all individual components of the NFL (e.g., calories, fats, sugars) was analyzed separately to assess the impact of each proposed NFL modification on attention to that region. Data were collected in 2014; analysis was conducted in 2015. Results. Modified NFLs did not elicit significantly more visual attention or lead to more healthful purchase intentions than did existing NFLs. Relocating the percent daily value component from the right side of the NFL to the left side, as proposed by the FDA, actually reduced participants' attention to this information. The proposed “added sugars” component was viewed on at least one label by a majority (58%) of participants. Discussion. Results suggest that the proposed NFL changes may not achieve FDA's goals. Changes to nutrition labeling may need to take a different form to meaningfully influence dietary behavior. Conclusion. Young adults' visual attention and purchase intentions do not appear to be meaningfully affected by the proposed NFL modifications. |
Julie Gregg; Albrecht W. Inhoff Misperception of orthographic neighbors during silent and oral reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 6, pp. 799–820, 2016. @article{Gregg2016, The study examined whether words are misperceived during natural fluent reading and the extent to which contextual and lexical properties bias perception. Target words were pairs of orthographic neighbors that differed in frequency. Pretarget context was neutral (Experiment 1) or biased toward the higher frequency member of the pair (Experiments 2 and 3), and posttarget context was neutral, congruent, or incongruent. Critically, incongruent context was constructed so that it was congruent with the target's neighbor. First-pass viewing showed only effects of target frequency. During silent reading (Experiments 1 and 2), rereading measures showed that the target frequency effect was smaller in the incongruent posttarget context condition than in the neutral and congruent conditions, and this occurred irrespective of prior context. Presumably, lower frequency words were less impeded by incongruent context because they were often misperceived as a congruent higher frequency neighbor. An oral reading task (Experiment 3) showed that the lower frequency target was more often misread than the higher frequency neighbor, and this proneness to error was influenced by posttarget context. Although target frequency influenced proneness to error, biased prior sentence context appeared to influence the construal of sentence meaning to accommodate incongruent targets and posttarget context. |
Nicola J. Gregory; Frouke Hermens; Rebecca Facey; Timothy L. Hodgson The developmental trajectory of attentional orienting to socio-biological cues Journal Article In: Experimental Brain Research, vol. 234, no. 6, pp. 1351–1362, 2016. @article{Gregory2016, It has been proposed that the orienting of attention in the same direction as another's point of gaze relies on innate brain mechanisms which are present from birth, but direct evidence relating to the influence of eye gaze cues on attentional orienting in young children is limited. In two experiments, 137 children aged 3–10 years old performed an adapted pro-saccade task with centrally presented uninformative eye gaze, finger pointing and arrow pre-cues which were either congruent or incongruent with the direction of target presentations. When the central cue overlapped with presentation of the peripheral target (Experiment 1), children up to 5 years old had difficulty disengaging fixation from central fixation in order to saccade to the target. This effect was found to be particularly marked for eye gaze cues. When central cues were extinguished simultaneously with peripheral target onset (Experiment 2), this effect was greatly reduced. In both experiments finger pointing cues (image of pointing index finger presented at fixation) exerted a strong influence on saccade reaction time to the peripheral stimulus for the youngest group of children (<5 years). Overall the results suggest that although young children are strongly engaged by centrally presented eye gaze cues, the directional influence of such cues on overt attentional orienting is only present in older children, meaning that the effect is unlikely to be dependent upon an innate brain module. Instead, the results are consistent with the existence of stimulus–response associations which develop with age and environmental experience. |
Sarah Gregory; Marco Fusca; Geraint Rees; D. Samuel Schwarzkopf; Gareth Barnes Gamma frequency and the spatial tuning of primary visual cortex Journal Article In: PLoS ONE, vol. 11, no. 6, pp. e0157374, 2016. @article{Gregory2016a, Visual stimulation produces oscillatory gamma responses in human primary visual cortex (V1) that also relate to visual perception. We have shown previously that peak gamma frequency positively correlates with central V1 cortical surface area. We hypothesized that people with larger V1 would have smaller receptive fields and that receptive field size, not V1 area, might explain this relationship. Here we set out to test this hypothesis directly by investigating the relationship between fMRI estimated population receptive field (pRF) size and gamma frequency in V1. We stimulated both the near-centre and periphery of the visual field using both large and small stimuli in each location and replicated our previous finding of a positive correlation between V1 surface area and peak gamma frequency. Counter to our expectation, we found that between participants, V1 size (and not pRF size) accounted for most of the variability in gamma frequency. Within participants, we found that gamma frequency increased, rather than decreased, with stimulus eccentricity, directly contradicting our initial hypothesis. |
Svenja Gremmler; Markus Lappe Saccadic adaptation is associated with starting eye position Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 322, 2016. @article{Gremmler2016, Saccadic adaptation is the motor learning process that keeps saccade amplitudes on target. This process is eye position specific: amplitude adaptation that is induced for a saccade at one particular location in the visual field transfers incompletely to saccades at other locations. In our current study, we investigated whether this eye position signal corresponds to the initial or to the final eye position of the saccade. Each case would have different implications for the mechanisms of adaptation. The initial eye position is not directly available when the adaptation-driving post-saccadic error signal is received. On the other hand, the final eye position signal is not available when the motor command for the saccade is calculated. In six human subjects we adapted a saccade of 15 degree amplitude that started at a constant position. We then measured the transfer of adaptation to test saccades of 10 and 20 degree amplitude. In each case we compared test saccades that matched the start position of the adapted saccade to those that matched the target of the adapted saccade. We found significantly more transfer of adaptation to test saccades with the same start position than to test saccades with the same target position. The results indicate that saccadic adaptation is specific to the initial eye position. This is consistent with a previously proposed effect of gain field modulated input from areas like the frontal eye field, the lateral intraparietal area and the superior colliculus into the cerebellar adaptation circuitry. |
Ann Kathrin Grohe; Andrea Weber The penefit of salience: Salient accented, but not unaccented words reveal accent adaptation effects Journal Article In: Frontiers in Psychology, vol. 7, pp. 864, 2016. @article{Grohe2016, In two eye-tracking experiments, the effects of salience in accent training and speech accentedness on spoken-word recognition were investigated. Salience was expected to increase a stimulus' prominence and therefore promote learning. A training-test paradigm was used on native German participants utilizing an artificial German accent. Salience was elicited by two different criteria: Production and listening training as a subjective criterion and accented (Experiment 1) and canonical test words (Experiment 2) as an objective criterion. During training in Experiment 1, participants either read single German words out loud and deliberately devoiced initial voiced stop consonants (e.g., Balken-"beam" pronounced as *Palken), or they listened to pre-recorded words with the same accent. In a subsequent eye-tracking experiment, looks to auditorily presented target words with the accent were analyzed. Participants from both training conditions fixated accented target words more often than a control group without training. Training was identical in Experiment 2, but during test, canonical German words that overlapped in onset with the accented words from training were presented as target words (e.g., Palme-"palm tree" overlapped in onset with the training word *Palken) rather than accented words. This time, no training effect was observed; recognition of canonical word forms was not affected by having learned the accent. Therefore, accent learning was only visible when the accented test tokens in Experiment 1, which were not included in the test of Experiment 2, possessed sufficient salience based on the objective criterion "accent." These effects were not modified by the subjective criterion of salience from the training modality. |
Klaske A. Glashouwer; Nienke C. Jonker; Karen Thomassen; Peter J. de Jong Take a look at the bright side: Effects of positive body exposure on selective visual attention in women with high body dissatisfaction Journal Article In: Behaviour Research and Therapy, vol. 83, pp. 19–25, 2016. @article{Glashouwer2016, Women with high body dissatisfaction look less at their 'beautiful' body parts than their 'ugly' body parts. This study tested the robustness of this selective viewing pattern and examined the influence of positive body exposure on body-dissatisfied women's attention for 'ugly' and 'beautiful' body parts. In women with high body dissatisfaction (N = 28) and women with low body dissatisfaction (N = 14) eye-tracking was used to assess visual attention towards pictures of their own and other women's bodies. Participants with high body dissatisfaction were randomly assigned to 5 weeks positive body exposure (n = 15) or a no-treatment condition (n = 13). Attention bias was assessed again after 5 weeks. Body-dissatisfied women looked longer at 'ugly' than 'beautiful' body parts of themselves and others, while participants with low body dissatisfaction attended equally long to own/others' 'beautiful' and 'ugly' body parts. Although positive body exposure was very effective in improving participants' body satisfaction, it did not systematically change participants' viewing pattern. The tendency to preferentially allocate attention towards one's 'ugly' body parts seems a robust phenomenon in women with body dissatisfaction. Yet, modifying this selective viewing pattern seems not a prerequisite for successfully improving body satisfaction via positive body exposure. |
Lauren R. Godier; Jessica C. Scaife; Sven Braeutigam; Rebecca J. Park Enhanced early neuronal processing of food pictures in Anorexia Nervosa: A magnetoencephalography study Journal Article In: Psychiatry Journal, vol. 2016, pp. 1–13, 2016. @article{Godier2016, Neuroimaging studies in Anorexia Nervosa (AN) have shown increased activation in reward and cognitive control regions in response to food, and a behavioral attentional bias (AB) towards food stimuli is reported. This study aimed to further investigate the neural processing of food using magnetoencephalography (MEG). Participants were 13 females with restricting-type AN, 14 females recovered from restricting-type AN, and 15 female healthy controls. MEG data was acquired whilst participants viewed high- and low-calorie food pictures. Attention was assessed with a reaction time task and eye tracking. Time-series analysis suggested increased neural activity in response to both calorie conditions in the AN groups, consistent with an early AB. Increased activity was observed at 150 ms in the current AN group. Neuronal activity at this latency was at normal level in the recovered group; however, this group exhibited enhanced activity at 320 ms after stimulus. Consistent with previous studies, analysis in source space and behavioral data suggested enhanced attention and cognitive control processes in response to food stimuli in AN. This may enable avoidance of salient food stimuli and maintenance of dietary restraint in AN. A later latency of increased activity in the recovered group may reflect a reversal of this avoidance, with source space and behavioral data indicating increased visual and cognitive processing of food stimuli. |
David C. Godlove; Jeffrey D. Schall Microsaccade production during saccade cancelation in a stop-signal task Journal Article In: Vision Research, vol. 118, pp. 5–16, 2016. @article{Godlove2016, We obtained behavioral data to evaluate two alternative hypotheses about the neural mechanisms of gaze control. The "fixation" hypothesis states that neurons in rostral superior colliculus (SC) enforce fixation of gaze. The "microsaccade" hypothesis states that neurons in rostral SC encode microsaccades rather than fixation per se. Previously reported neuronal activity in monkey SC during the saccade stop-signal task leads to specific, dissociable behavioral predictions of these two hypotheses. When subjects are required to cancel partially-prepared saccades, imbalanced activity spreads across rostral and caudal SC with a reliable temporal profile. The microsaccade hypothesis predicts that this imbalance will lead to elevated microsaccade production biased toward the target location, while the fixation hypothesis predicts reduced microsaccade production. We tested these predictions by analyzing the microsaccades produced by 4 monkeys while they voluntarily canceled partially prepared eye movements in response to explicit stop signals. Consistent with the fixation hypothesis and contradicting the microsaccade hypothesis, we found that each subject produced significantly fewer microsaccades when normal saccades were successfully canceled. The few microsaccades escaping this inhibition tended to be directed toward the target location. We additionally investigated interactions between initiating microsaccades and inhibiting normal saccades. Reaction times were longer when microsaccades immediately preceded target presentation. However, pre-target microsaccade production did not affect stop-signal reaction time or alter the probability of canceling saccades following stop signals. These findings demonstrate that imbalanced activity within SC does not necessarily produce microsaccades and add to evidence that saccade preparation and cancelation are separate processes. |
Hayward J. Godwin; Tamaryn Menneer; Charlotte A. Riggs; Dominic Taunton; Kyle R. Cave; Nick Donnelly Understanding the contribution of target repetition and target expectation to the emergence of the prevalence effect in visual search Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 3, pp. 809–816, 2016. @article{Godwin2016, Behavior in visual search tasks is influenced by the proportion of trials on which a target is presented (the target prevalence). Previous research has shown that when target prevalence is low (2 % prevalence), participants tend to miss targets, as compared with higher prevalence levels (e.g., 50 % prevalence). There is an ongoing debate regarding the relative contributions of target repetition and the expectation that a target will occur in the emergence of prevalence effects. In order to disentangle these two factors, we went beyond previous studies by directly manipulating participants' expectations regarding how likely a target was to appear on a given trial. This we achieved without using cues or feedback. Our results indicated that both target repetition and target expectation contribute to the emergence of the prevalence effect. |
Tal Golan; Ido Davidesco; Meir Meshulam; David M. Groppe; Pierre Mégevand; Erin M. Yeagle; Matthew S. Goldfinger; Michal Harel; Lucia Melloni; Charles E. Schroeder; D. L. Deouell; Ashesh D. Mehta; Rafael Malach Human intracranial recordings link suppressed transients rather than 'filling-in' to perceptual continuity across blinks Journal Article In: eLife, vol. 5, pp. 1–28, 2016. @article{Golan2016, We hardly notice our eye blinks, yet an externally generated retinal interruption of a similar duration is perceptually salient. We examined the neural correlates of this perceptual distinction using intracranially measured ECoG signals from human visual cortex in 14 patients. In early visual areas (V1 and V2), the disappearance of the stimulus due to either invisible blinks or salient blank video frames ('gaps') led to a similar drop in activity level, followed by a positive overshoot beyond baseline, triggered by stimulus reappearance. Ascending the visual hierarchy, the reappearance-related overshoot gradually subsided for blinks but not for gaps. By contrast, the disappearance-related drop did not follow the perceptual distinction - it was actually slightly more pronounced for blinks than for gaps. These findings suggest that blinks' limited visibility compared with gaps is correlated with suppression of blink-related visual activity transients, rather than with 'filling-in' of the occluded content during blinks. |
Gil Gonen-Yaacovi; Ayelet Arazi; Nitzan Shahar; Anat Karmon; Shlomi Haar; Nachshon Meiran; Ilan Dinstein Increased ongoing neural variability in ADHD Journal Article In: Cortex, vol. 81, pp. 50–63, 2016. @article{GonenYaacovi2016, Attention Deficit Hyperactivity Disorder (ADHD) has been described as a disorder where frequent lapses of attention impair the ability of an individual to focus/attend in a sustained manner, thereby generating abnormally large intra-individual behavioral variability across trials. Indeed, increased reaction time (RT) variability is a fundamental behavioral characteristic of individuals with ADHD found across a large number of cognitive tasks. But what is the underlying neurophysiology that might generate such behavioral instability? Here, we examined trial-by-trial EEG response variability to visual and auditory stimuli while subjects' attention was diverted to an unrelated task at the fixation cross. Comparisons between adult ADHD and control participants revealed that neural response variability was significantly larger in the ADHD group as compared with the control group in both sensory modalities. Importantly, larger trial-by-trial variability in ADHD was apparent before and after stimulus presentation as well as in trials where the stimulus was omitted, suggesting that ongoing (rather than stimulus-evoked) neural activity is continuously more variable (noisier) in ADHD. While the patho-physiological mechanisms causing this increased neural variability remain unknown, they appear to act continuously rather than being tied to a specific sensory or cognitive process. |
Claudia C. Gonzalez; Jac Billington; Melanie R. Burke The involvement of the fronto-parietal brain network in oculomotor sequence learning using fMRI Journal Article In: Neuropsychologia, vol. 87, pp. 1–11, 2016. @article{Gonzalez2016a, The basis of motor learning involves decomposing complete actions into a series of predictive individual components that form the whole. The present fMRI study investigated the areas of the human brain important for oculomotor short-term learning, by using a novel sequence learning paradigm that is equivalent in visual and temporal properties for both saccades and pursuit, enabling more direct comparisons between the oculomotor subsystems. In contrast with previous studies that have implemented a series of discrete ramps to observe predictive behaviour as evidence for learning, we presented a continuous sequence of interlinked components that better represents sequences of actions. We implemented both a classic univariate fMRI analysis, followed by a further multivariate pattern analysis (MVPA) within a priori regions of interest, to investigate oculomotor sequence learning in the brain and to determine whether these mechanisms overlap in pursuit and saccades as part of a higher order learning network. This study has uniquely identified an equivalent frontal-parietal network (dorsolateral prefrontal cortex, frontal eye fields and posterior parietal cortex) in both saccades and pursuit sequence learning. In addition, this is the first study to investigate oculomotor sequence learning during fMRI brain imaging, and makes significant contributions to understanding the role of the dorsal networks in motor learning. |
Claudia C. Gonzalez; Mark Mon-Williams; Siobhan Burke; Melanie R. Burke Cognitive control of saccadic eye movements in children with developmental coordination disorder Journal Article In: PLoS ONE, vol. 11, no. 11, pp. e0165380, 2016. @article{Gonzalez2016b, The ability to use advance information to prepare and execute a movement requires cognitive control of behaviour (e.g., anticipation and inhibition). Our aim was to explore the integrity of saccadic eye movement control in developmental coordination disorder (DCD) and typically developing (TD) children (8–12 years) and assess how these children plan and inhibit saccadic responses, the principal mechanisms within visual attention control. Eye movements and touch responses were measured (separately and concurrently) in Cued and Non-Cued conditions. We found that children with DCD had similar saccade kinematics to the TD group during saccade initiation. Advance information decreased hand movement duration in both groups during Cued trials, but decrements in accuracy were significantly worse in the DCD group. In addition, children with DCD exhibited greater inhibitory errors and inaccurate fixation during the Cued trials. Thus, children with DCD were reasonably proficient in executing saccades during reflexive (Non-Cued) conditions, but showed deficits in more complex control processes involving prediction and inhibition. These findings have implications for our understanding of motor control in children with DCD. |
David A. Gonzalez; Ewa Niechwiej-Szwedo The effects of monocular viewing on hand-eye coordination during sequential grasping and placing movements Journal Article In: Vision Research, vol. 128, pp. 30–38, 2016. @article{Gonzalez2016, The contribution of binocular vision to the performance of reaching and grasping movements has been examined previously using single reach-to-grasp movements. However, most of our daily activities consist of more complex action sequences, which require precise temporal linking between the gaze behaviour and manual action phases. Many previous studies found a stereotypical hand-eye coordination pattern, such that the eyes move prior to the reach initiation. Moving the eyes to the target object provides information about its features and location, which can facilitate the predictive control of reaching and grasping. This temporal coordination pattern has been established for the performance of sequential movements performed during binocular viewing. Here we manipulated viewing condition and examined the temporal hand-eye coordination pattern during the performance of a sequential reaching, grasping, and placement task. Fifteen participants were tested on a sequencing task while eye and hand movements were recorded binocularly using a video-based eyetracker and a motion capture system. Our results showed that monocular viewing disrupted the temporal coordination between the eyes and the hand during the place-to-reach transition phase. Specifically, the gaze shift was delayed during monocular compared to binocular viewing. The shift in gaze behaviour may be due to increased uncertainty associated with the performance of the placement task because of increased vergence error during monocular viewing, which was evident in all participants. These findings provide insight into the role of binocular vision in predictive control of sequential reaching and grasping movements. |
Qian Guo; Young-Suk Grace Kim; Li Yang; Lihui Liu Does previewing answer choice options improve performance on reading tests? Journal Article In: Reading and Writing, vol. 29, no. 4, pp. 745–760, 2016. @article{Guo2016, Previewing answer-choice options before finishing reading the text is a widely employed test-taking behavior. In the present study we examined whether previewing is related to item response accuracy and response time, using data from Chinese learners of varying English proficiency levels and English native speakers. We examined eye movement patterns of participants who completed online multiple-choice sentence completion tasks, and how previewing was related to reading performance and whether the relation varied as a function of English proficiency level. The results showed that, relative to no previewing, previewing was associated with a significantly lower probability of answering an item correctly but not with significantly longer response time. Importantly, these relations varied across English proficiency levels such that participants with higher proficiency performed better without previewing, but there was no difference for lower-intermediate learners of English. These findings suggest that previewing does not facilitate performance on a sentence comprehension task, but instead interferes with the comprehension process, particularly for individuals with relatively high language proficiency. |
Tjerk P. Gutteling; W. Pieter Medendorp Role of alpha-band oscillations in spatial updating across whole body motion Journal Article In: Frontiers in Psychology, vol. 7, pp. 671, 2016. @article{Gutteling2016, When moving around in the world, we have to keep track of important locations in our surroundings. In this process, called spatial updating, we must estimate our body motion and correct representations of memorized spatial locations in accordance with this motion. While the behavioral characteristics of spatial updating across whole body motion have been studied in detail, its neural implementation lacks detailed study. Here we use electro-encephalography (EEG) to distinguish various spectral components of this process. Subjects gazed at a central body-fixed point in otherwise complete darkness, while a target was briefly flashed, either left or right from this point. Subjects had to remember the location of this target as either moving along with the body or remaining fixed in the world while being translated sideways on a passive motion platform. After the motion, subjects had to indicate the remembered target location in the instructed reference frame using a mouse response. While the body motion, as detected by the vestibular system, should not affect the representation of body-fixed targets, it should interact with the representation of a world-centered target to update its location relative to the body. We show that the initial presentation of the visual target induced a reduction of alpha band power in contralateral parieto-occipital areas, which evolved to a sustained increase during the subsequent memory period. Motion of the body led to a reduction of alpha band power in central parietal areas extending to lateral parieto-temporal areas, irrespective of whether the targets had to be memorized relative to world or body. When updating a world-fixed target, its internal representation shifts hemispheres, only when subjects' behavioral responses suggested an update across the body midline. Our results suggest that parietal cortex is involved in both self-motion estimation and the selective application of this motion information to maintaining target locations as fixed in the world or fixed to the body. |