All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2017 |
Elio M. Santos; Eileen Kowler Anticipatory smooth pursuit eye movements evoked by probabilistic cues Journal Article In: Journal of Vision, vol. 17, no. 13, pp. 1–16, 2017. @article{Santos2017, Anticipatory smooth eye movements (ASEM) (smooth eye movements in the direction of anticipated target motion) are elicited by cues that signal the direction of future target motion with high levels of certainty. Natural cues, however, rarely convey information with perfect certainty, and responses to uncertainty provide insights about how predictive behaviors are generated. Subjects smoothly pursued targets that moved to the right or left with varying cued probabilities. ASEM strength in a given direction increased with the probability level. The type of cue also played a role. ASEM elicited by symbolic visual cues tended to underweight low probabilities and overweight high probabilities. Cues based on memory (varying the proportion of trials with left or right motion) produced the opposite pattern, overweighting low probabilities and underweighting high probabilities. Finally, cues whose perceptual structure depicted the motion path produced a bias in ASEM in the depicted direction that was maintained across levels of cue congruency. The results show that the smooth pursuit system relies on a combination of signals, including memory for recent target motions, interpretation of cues, and prior beliefs about the relationship between the perceptual configuration and the motion path to determine the anticipatory response in the presence of uncertainty. |
Ritch C. Savin-Williams; Brian M. Cash; Mark McCormack; Gerulf Rieger Gay, mostly gay, or bisexual leaning gay? An exploratory study distinguishing gay sexual orientations among young men Journal Article In: Archives of Sexual Behavior, vol. 46, no. 1, pp. 265–272, 2017. @article{SavinWilliams2017, This exploratory study assessed physiological, behavioral, and self-report measures of sexual and romantic indicators of sexual orientation identities among young men (mean age = 21.9 years) with predominant same-sex sexual and romantic interests: those who described themselves as bisexual leaning gay (n = 11), mostly gay (n = 17), and gay (n = 47). Although they were not significantly distinguishable based on physiological (pupil dilation) responses to nude stimuli, on behavioral and self-report measures a descending linear trend toward the less preferred sex (female) was significant regarding sexual attraction, fantasy, genital contact, infatuation, romantic relationship, sex appeal, and gazing time to the porn stimuli. Results supported a continuum of sexuality with distinct subgroups only for the self-report measure of sexual attraction. The other behavioral and self-report measures followed the same trend but did not significantly differ between the bisexual leaning gay and mostly gay groups, likely the result of small sample size. Results suggest that romantic indicators are as good as sexual measures in assessing sexual orientation and that there is a succession of logically following groups from bisexual leaning gay, through mostly gay, to gay. Whether these three groups are discrete or overlapping needs further research. |
Bilge Sayim; Johan Wagemans Appearance changes and error characteristics in crowding revealed by drawings Journal Article In: Journal of Vision, vol. 17, no. 11, pp. 1–16, 2017. @article{Sayim2017, Peripheral vision is strongly limited by crowding: Targets that are easily recognized in isolation are unrecognizable when flanked by close-by objects. Crowding does not only impair target recognition but also changes appearance. Here we investigated appearance changes and errors in crowding by letting observers draw crowded stimuli. Observers drew stimuli presented at 6° and 12° eccentricity. Stimuli consisted of characters and letter-like symbols. Targets were presented with either a flanker on each side or in isolation. To characterize appearance changes and errors in crowding, we developed a scoring system that captured differences between the drawings and the stimuli. The resulting drawings revealed strong appearance changes under crowding. Importantly, our results reveal crowding errors that are usually not shown in standard crowding paradigms. We found high rates of element Omissions and element Truncations, indicating a central role of target "diminishment" in crowding. Furthermore, we show that a subset of the observed element Omissions and Additions was possibly caused by feature migration. Relatively high rates of position errors, in particular element Translations, reflected the often reported location uncertainty in crowding. Virtually no complete target-flanker substitutions were observed. We suggest a new classification system for errors in crowding, and propose drawing as a useful appearance-based method to investigate crowding. |
Robbie Schepers; C. Rob Markus Gene by cognition interaction on stress-induced attention bias for food: Effects of 5-HTTLPR and ruminative thinking Journal Article In: Biological Psychology, vol. 128, pp. 21–28, 2017. @article{Schepers2017, Introduction: Stress is often found to increase the preference for and intake of high-caloric foods. This effect is known as emotional eating and is influenced by cognitive as well as biological stress vulnerabilities. An S-allele of the 5-HTTLPR gene has been linked to decreased (brain) serotonin efficiency, leading to decreased stress resilience and increased risks for negative affect and eating-related disturbances. Recently it has been proposed that a cognitive ruminative thinking style can further exacerbate the effect of this gene by prolonging the already increased stress response, thereby potentially increasing the risk of compensating by overeating highly palatable foods. Objective: This study was aimed at investigating whether there is an increased risk for emotional eating in high ruminative S/S-allele carriers reflected by an increased attention bias for high-caloric foods during stress. Methods: From a large (N = 827) DNA database, participants (N = 100) were selected based on genotype (S/S or L/L) and ruminative thinking style and performed an eye-tracking visual food-picture probe task before and after acute stress exposure. A significant Genotype x Rumination x Stress interaction was found on attention bias for savory food, indicating that a stress-induced attention bias for specifically high-caloric foods is moderated by a gene x cognitive risk factor. Conclusion: Both a genetic (5-HTTLPR) and cognitive (ruminative thinking) stress vulnerability may mutually increase the risk for stress-related abnormal eating patterns. |
Bernhard Schlagbauer; Maurice Mink; Hermann J. Müller; Thomas Geyer Independence of long-term contextual memory and short-term perceptual hypotheses: Evidence from contextual cueing of interrupted search Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 2, pp. 508–521, 2017. @article{Schlagbauer2017, Observers are able to resume an interrupted search trial faster relative to responding to a new, unseen display. This finding of rapid resumption is attributed to short-term perceptual hypotheses generated on the current look and confirmed upon subsequent looks at the same display. It has been suggested that the contents of perceptual hypotheses are similar to those of other forms of memory acquired long-term through repeated exposure to the same search displays over the course of several trials, that is, the memory supporting “contextual cueing.” In three experiments, we investigated the relationship between short-term perceptual hypotheses and long-term contextual memory. The results indicated that long-term, contextual memory of repeated displays neither affected the generation nor the confirmation of short-term perceptual hypotheses for these displays. Furthermore, the analysis of eye movements suggests that long-term memory provides an initial benefit in guiding attention to the target, whereas in subsequent looks guidance is entirely based on short-term perceptual hypotheses. Overall, the results reveal a picture of both long- and short-term memory contributing to reliable performance gains in interrupted search, while exerting their effects in an independent manner. |
Joseph Schmidt; Gregory J. Zelinsky Adding details to the attentional template offsets search difficulty: Evidence from contralateral delay activity Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 3, pp. 429–437, 2017. @article{Schmidt2017a, We investigated how expected search difficulty affects the attentional template by having participants search for a teddy bear target among either other teddy bears (difficult search, high target-distractor similarity) or random nonbear objects (easy search, low target-distractor similarity). Target previews were identical in these 2 blocked conditions, and target-related visual working memory (VWM) load was measured using contralateral delay activity (CDA), an event-related potential indicating VWM load. CDA was assessed after target designation but before search display onset. Shortly after preview offset, the expectation of a difficult search produced a target-related CDA, suggesting the encoding and maintenance of target details in VWM. However, no differences in CDA were found immediately before search onset, suggesting a flexible and efficient weighting of the template's features to reflect the expected demands of the search task. Moreover, CDA amplitude correlated with eye movement measures of search guidance in difficult search trials but not easy trials, suggesting that the utility of the attentional template is greater for more difficult searches. These findings are evidence that attentional templates depend on expected task difficulty, and that people may compensate for a more difficult search by adding details to their target representation in VWM, as measured by CDA. |
Lisette J. Schmidt; Artem V. Belopolsky; Jan Theeuwes The time course of attentional bias to cues of threat and safety Journal Article In: Cognition and Emotion, vol. 31, no. 5, pp. 845–857, 2017. @article{Schmidt2017, It is well known that relative to neutral stimuli, attention is biased towards processing stimuli that convey threat. In a previous study in which a particular stimulus (e.g. a blue diamond) was associated with the delivery of an electrical shock, the presence of the fear-conditioned stimulus interfered with the execution of voluntary eye movements to other locations. Here, we show that this effect not only occurs early in time, but remains present long after the fear-conditioned stimulus was removed from the screen. In a subsequent experiment, we associated the presence of a particular stimulus with safety, that is, when this stimulus was present it was certain that no electrical shock would be delivered. The presence of the safety signalling stimulus also interfered with the execution of voluntary saccades, but only when the time between stimulus and cue presentation was relatively long. The results indicate that both signals of threat and signals of safety interfere with execution of a saccade long after the source of threat or safety has been removed. However, only threatening stimuli affect saccade execution early in time, suggesting that threatening stimuli drive selection exogenously. |
Daniel Schmidtke; Kazunaga Matsuki; Victor Kuperman Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 11, pp. 1793–1820, 2017. @article{Schmidtke2017, The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1–4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1–2; 5–7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. 
These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. |
Arthur Portron; Jean Lorenceau Sustained smooth pursuit eye movements with eye-induced reverse-phi motion Journal Article In: Journal of Vision, vol. 17, no. 1, pp. 1–19, 2017. @article{Portron2017, The gain and speed of smooth pursuit eye movements quickly drop whenever a moving tracked target disappears behind an occluder. The present study tests to what extent pursuit maintenance after target disappearance depends on the occluder's characteristics. In all experiments, a target moving for 2500 ms (or 1250 ms) at 13.3°/s (or 26.6°/s) disappears behind an occluder for 700 ms (or 350 ms). Participants are asked to maintain their pursuit eye movements as long as possible after target disappearance. Experiment 1 compares smooth pursuit with four types of occluders and shows that a texture of flickering disks allows maintaining pursuit for long durations. Experiment 2 investigates the capability to maintain pursuit with occluders of varying flickering frequencies (3, 5, 10, 20, and 30 Hz). It is found that after target disappearance, smooth pursuit is maintained for longer durations with flicker at 10 and 20 Hz, relative to other flickering frequencies (3, 5, and 30 Hz). Experiment 3 tests whether disk size and disk density of a flickering occluding texture influence smooth pursuit maintenance. Finally, Experiment 4 tests the influence of the contrast distribution of the flickering disks on pursuit maintenance. Altogether, the results show that individuals can maintain smooth pursuit for long durations after target disappearance behind an occluding texture of disks flickering at temporal frequencies above 5 Hz with balanced contrast. It is suggested that eye-induced reverse-phi motion responses in MT/MST neurons provide a positive visual feedback to the pursuit system, allowing smooth pursuit to be generated in the absence of explicit stimulus motion. |
Joshua D. Pratt; Scott B. Stevenson; Harold E. Bedell Scotoma visibility and reading rate with bilateral central scotomas Journal Article In: Optometry and Vision Science, vol. 94, no. 3, pp. 279–284, 2017. @article{Pratt2017, PURPOSE: In this experiment, we tested whether perceptually delineating the scotoma location and border with a gaze contingent polygon overlay improves reading speed and reading eye movements in patients with bilateral central scotomas. METHODS: Eight patients with age-related macular degeneration and bilateral central scotomas read aloud MNRead style sentences with their preferred eye. Eye movement signals from an EyeLink II eyetracker were used to create a gaze contingent display in which a polygon overlay delineating the area of the patient's scotoma was superimposed on the text during 18 of the 42 trials. Blocks of six trials with the superimposed polygon were alternated with blocks of six trials without the polygon. Reading speed and reading eye movements were assessed before and after the subjects practiced reading with the polygon overlay. RESULTS: All of the subjects but one showed an increase in reading speed. A paired-samples t-test for the group as a whole revealed a statistically significant increase in reading speed of 0.075 ± 0.060 (SD) log wpm after reading with the superimposed polygon. Individual subjects demonstrated significant changes in reading eye movements, with the greatest number of subjects demonstrating a shift in the average vertical fixation locus. Across subjects, there was no significant difference between the initial and final reading eye movements in terms of saccades per second, average fixation duration, average amplitude of saccades, or proportion of non-horizontal saccades. 
CONCLUSIONS: The improvement in reading speed (0.075 log wpm or 19%) over the short experimental session for the majority of subjects indicates that making the scotoma location more visible is potentially beneficial for improving reading speed in patients with bilateral central scotomas. Additional research to examine the efficacy of more extended training with this paradigm is warranted. |
Gavin R. Price; Eric D. Wilkey; Darren J. Yeo Eye-movement patterns during nonsymbolic and symbolic numerical magnitude comparison and their relation to math calculation skills Journal Article In: Acta Psychologica, vol. 176, pp. 47–57, 2017. @article{Price2017, A growing body of research suggests that the processing of nonsymbolic (e.g. sets of dots) and symbolic (e.g. Arabic digits) numerical magnitudes serves as a foundation for the development of math competence. Performance on magnitude comparison tasks is thought to reflect the precision of a shared cognitive representation, as evidenced by the presence of a numerical ratio effect for both formats. However, little is known regarding how visuo-perceptual processes are related to the numerical ratio effect, whether they are shared across numerical formats, and whether they relate to math competence independently of performance outcomes. The present study investigates these questions in a sample of typically developing adults. Our results reveal a pattern of associations between eye-movement measures, but not their ratio effects, across formats. This suggests that ratio-specific visuo-perceptual processing during magnitude processing is different across nonsymbolic and symbolic formats. Furthermore, eye movements are related to math performance only during symbolic comparison, supporting a growing body of literature suggesting symbolic number processing is more strongly related to math outcomes than nonsymbolic magnitude processing. Finally, eye-movement patterns, specifically fixation dwell time, continue to be negatively related to math performance after controlling for task performance (i.e. error rate and reaction time) and domain general cognitive abilities (IQ), suggesting that fluent visual processing of Arabic digits plays a unique and important role in linking symbolic number processing to formal math abilities. |
Iya Khelm Price; Jeffrey Witzel Sources of relative clause processing difficulty: Evidence from Russian Journal Article In: Journal of Memory and Language, vol. 97, pp. 208–244, 2017. @article{Price2017a, This study investigates the sources of processing difficulty in complex sentences involving relative clauses (RCs). Self-paced reading and eye tracking were used to test the comprehension of Russian subject- and object-extracted RCs (SRCs and ORCs) that had the same word-order configuration, but different noun phrase (NP) types (full NPs vs. pronouns) in the embedded clause. In both SRCs and ORCs, this NP intervened between the modified noun and the RC verb. A corpus analysis and acceptability rating experiment indicated different frequency/preference profiles for this word order depending on RC type and embedded NP type. In line with these profiles, processing difficulty was revealed early in the embedded clause for less frequent/dispreferred constructions. Later in the embedded clause, the processing of the RC verb was comparable for both SRCs and ORCs when the same number of NP arguments was available for integration. While there were no indications of an ORC penalty at or after this verb, late-stage comprehension difficulty was found for full-NP ORCs, but not for their pronominal counterparts, suggesting that similarity-based interference in combination with ORC structure influences the overall comprehension of these sentences. Taken together, these findings support a hybrid model under which independent sources of processing difficulty affect different stages of RC comprehension. |
Silvia Primativo; Camilla Clark; Keir X. X. Yong; Nicholas C. Firth; Jennifer M. Nicholas; Daniel C. Alexander; Jason D. Warren; Jonathan D. Rohrer; Sebastian J. Crutch Eyetracking metrics reveal impaired spatial anticipation in behavioural variant frontotemporal dementia Journal Article In: Neuropsychologia, vol. 106, pp. 328–340, 2017. @article{Primativo2017, Eyetracking technology has had limited application in the dementia field to date, with most studies attempting to discriminate syndrome subgroups on the basis of basic oculomotor functions rather than higher-order cognitive abilities. Eyetracking-based tasks may also offer opportunities to reduce or ameliorate problems associated with standard paper-and-pencil cognitive tests such as the complexity and linguistic demands of verbal test instructions, and the problems of tiredness and attention associated with lengthy tasks that generate few data points at a slow rate. In the present paper we adapted the Brixton spatial anticipation test to a computerized instruction-less version where oculomotor metrics, rather than overt verbal responses, were taken into account as indicators of high-level cognitive functions. Twelve bvFTD patients (in whom spatial anticipation deficits were expected), six SD patients (in whom deficits were predicted to be less frequent) and 38 healthy controls were presented with a 10 × 7 matrix of white circles. During each trial (N = 24) a black dot moved across seven positions on the screen, following 12 different patterns. Participants' eye movements were recorded. Frequentist statistical analyses of standard eye movement metrics were complemented by a Bayesian machine learning (ML) approach in which raw eyetracking time series datasets were examined to explore the ability to discriminate diagnostic groups not only on overall performance but also on individual trials. 
The original pen and paper Brixton test identified a spatial anticipation deficit in 7/12 (58%) of bvFTD and in 2/6 (33%) of SD patients. The eyetracking frequentist approach reported the deficit in 11/12 (92%) of bvFTD and in none (0%) of the SD patients. The machine learning approach had the main advantage of identifying significant differences from controls in 24/24 individual trials for bvFTD patients and in only 12/24 for SD patients. Results indicate that the fine grained rich datasets obtained from eyetracking metrics can inform us about high level cognitive functions in dementia, such as spatial anticipation. The ML approach can help identify conditions where subtle deficits are present and, potentially, contribute to test optimisation and the reduction of testing times. The absence of instructions also favoured a better distinction between different clinical groups of patients and can help provide valuable disease-specific markers. |
Silvia Primativo; Jamie Reilly; Sebastian J. Crutch Abstract conceptual feature ratings predict gaze within written word arrays: Evidence from a visual wor(l)d paradigm Journal Article In: Cognitive Science, vol. 41, no. 3, pp. 659–685, 2017. @article{Primativo2017a, The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eyetracking via an adaptation of the classical "visual word paradigm" (VWP). Healthy adults (n = 20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, the fixation rates were nonetheless significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings which are based on individual words predicted eye fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. 
The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts. While the adapted "VWP" is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension. |
Malcolm Proudfoot; Gustavo Rohenkohl; Andrew Quinn; Giles L. Colclough; Joanne Wuu; Kevin Talbot; Mark W. Woolrich; Michael Benatar; Anna C. Nobre; Martin R. Turner Altered cortical beta-band oscillations reflect motor system degeneration in amyotrophic lateral sclerosis Journal Article In: Human Brain Mapping, vol. 38, pp. 237–254, 2017. @article{Proudfoot2017, Continuous rhythmic neuronal oscillations underpin local and regional cortical communication. The impact of the motor system neurodegenerative syndrome amyotrophic lateral sclerosis (ALS) on the neuronal oscillations subserving movement might therefore serve as a sensitive marker of disease activity. Movement preparation and execution are consistently associated with modulations to neuronal oscillation beta (15–30 Hz) power. Cortical beta-band oscillations were measured using magnetoencephalography (MEG) during preparation for, execution, and completion of a visually cued, lateralized motor task that included movement inhibition trials. Eleven “classical” ALS patients, 9 with the primary lateral sclerosis (PLS) phenotype, and 12 asymptomatic carriers of ALS-associated gene mutations were compared with age-similar healthy control groups. Augmented beta desynchronization was observed in both contra- and ipsilateral motor cortices of ALS patients during motor preparation. Movement execution coincided with excess beta desynchronization in asymptomatic mutation carriers. Movement completion was followed by a slowed rebound of beta power in all symptomatic patients, further reflected in delayed hemispheric lateralization for beta rebound in the PLS group. This may correspond to the particular involvement of interhemispheric fibers of the corpus callosum previously demonstrated in diffusion tensor imaging studies. 
We conclude that the ALS spectrum is characterized by intensified cortical beta desynchronization followed by delayed rebound, concordant with a broader concept of cortical hyperexcitability, possibly through loss of inhibitory interneuronal influences. MEG may potentially detect cortical dysfunction prior to the development of overt symptoms, and thus be able to contribute to the assessment of future neuroprotective strategies. |
Maria Solé Puig; August Romeo; Jose Cañete Crespillo; Hans Supèr Eye vergence responses during a visual memory task Journal Article In: NeuroReport, vol. 28, no. 3, pp. 123–127, 2017. @article{Puig2017, In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence, depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s, and thereafter were presented with a series of single images. One half were repeat images - that is, they belonged to the initial set - and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge during scanning of the set of images and during the presentation of the single images. The convergence was stronger for remembered images compared with the vergence for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and coding/retrieval processes of memory strengthens the idea of a role for vergence in attention processing of visual information. |
Erdem Pulcu; Michael Browning Affective bias as a rational response to the statistics of rewards and punishments Journal Article In: eLife, vol. 6, pp. 1–15, 2017. @article{Pulcu2017, Affective bias, the tendency to differentially prioritise the processing of negative relative to positive events, is commonly observed in clinical and non-clinical populations. However, why such biases develop is not known. Using a computational framework, we investigated whether affective biases may reflect individuals' estimates of the information content of negative relative to positive events. During a reinforcement learning task, the information content of positive and negative outcomes was manipulated independently by varying the volatility of their occurrence. Human participants altered the learning rates used for the outcomes selectively, preferentially learning from the most informative. This behaviour was associated with activity of the central norepinephrine system, estimated using pupillometry, for loss outcomes. Humans maintain independent estimates of the information content of distinct positive and negative outcomes which may bias their processing of affective events. Normalising affective biases using computationally inspired interventions may represent a novel approach to treatment development. |
Michael Puntiroli; C. Tandonnet; D. Kerzel; S. Born Race to accumulate evidence for few and many saccade alternatives: An exception to speed–accuracy trade-off Journal Article In: Experimental Brain Research, vol. 235, no. 2, pp. 507–515, 2017. @article{Puntiroli2017, Hick's law states that increasing the number of response alternatives increases reaction time. Lawrence and colleagues report an exception to the law, whereby more alternatives lead to shorter saccadic reaction times (SRTs). Usher and McClelland (Psychol Rev 108(3):550–592. doi:10.1037/0033-295X.108.3.550, 2001) predict such an anti-Hick's effect when accuracy is not prioritized in a task, which should result in higher error rates with more response alternatives, and in turn to a shorter right tail of the SRT distribution. In the current study, we aim to replicate the original controversial findings and we compare them to these predictions by examining error rates and SRT distributions. Two experiments were conducted where participants made rapid eye movements to one of few or many alternatives. In Experiment 1, the saccade target was an onset and participants started either with few or many possible target locations and then alternated between conditions. An anti-Hick's effect emerged only when participants had started with a small set-size block. In Experiment 2, placeholders were displayed at the possible target locations and independent groups were used. A reliable anti-Hick's effect in SRTs was observed. However, results did not meet the stated predictions: anticipations and false direction errors were never more frequent when the set size was larger and SRT differences between the two set-size conditions were not more pronounced at the slower end of the distributions. In line with Lawrence and colleagues, we speculate that initial motor preparation, and the subsequent inhibition to counteract a premature response, may induce the anti-Hick's effect. |
Cheng S. Qian; Jan W. Brascamp How to build a dichoptic presentation system that includes an eye tracker Journal Article In: Journal of Visualized Experiments, no. 127, pp. 1–9, 2017. @article{Qian2017, The presentation of different stimuli to the two eyes, dichoptic presentation, is essential for studies involving 3D vision and interocular suppression. There is a growing literature on the unique experimental value of pupillary and oculomotor measures, especially for research on interocular suppression. Although obtaining eye-tracking measures would thus benefit studies that use dichoptic presentation, the hardware essential for dichoptic presentation (e.g. mirrors) often interferes with high-quality eye tracking, especially when using a video-based eye tracker. We recently described an experimental setup that combines a standard dichoptic presentation system with an infrared eye tracker by using infrared-transparent mirrors. The setup is compatible with standard monitors and eye trackers, easy to implement, and affordable (on the order of US$1,000). Relative to existing methods it has the benefits of not requiring special equipment and posing few limits on the nature and quality of the visual stimulus. Here we provide a visual guide to the construction and use of our setup. |
Carolyn Quam; Sarah C. Creel Tone attrition in Mandarin speakers of varying English proficiency Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 60, pp. 293–305, 2017. @article{Quam2017, Purpose: The purpose of this study was to determine whether the degree of dominance of Mandarin–English bilinguals' languages affects phonetic processing of tone content in their native language, Mandarin. Method: We tested 72 Mandarin–English bilingual college students with a range of language-dominance profiles in the 2 languages and ages of acquisition of English. Participants viewed 2 photographs at a time while hearing a familiar Mandarin word referring to 1 photograph. The names of the 2 photographs diverged in tone, vowels, or both. Word recognition was evaluated using clicking accuracy, reaction times, and an online recognition measure (gaze) and was compared in the 3 conditions. Results: Relative proficiency in English was correlated with reduced word recognition success in tone-disambiguated trials, but not in vowel-disambiguated trials, across all 3 dependent measures. This selective attrition for tone content emerged even though all bilinguals had learned Mandarin from birth. Lengthy experience with English thus weakened tone use. Conclusions: This finding has implications for the question of the extent to which bilinguals' 2 phonetic systems interact. It suggests that bilinguals may not process pitch information language-specifically and that processing strategies from the dominant language may affect phonetic processing in the nondominant language—even when the latter was learned natively. |
Carolyn Quam; Sarah C. Creel Mandarin-English bilinguals process lexical tones in newly learned words in accordance with the language context Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0169001, 2017. @article{Quam2017a, Previous research has mainly considered the impact of tone-language experience on ability to discriminate linguistic pitch, but proficient bilingual listening requires differential processing of sound variation in each language context. Here, we ask whether Mandarin-English bilinguals, for whom pitch indicates word distinctions in one language but not the other, can process pitch differently in a Mandarin context vs. an English context. Across three eye-tracked word-learning experiments, results indicated that tone-intonation bilinguals process tone in accordance with the language context. In Experiment 1, 51 Mandarin-English bilinguals and 26 English speakers without tone experience were taught Mandarin-compatible novel words with tones. Mandarin-English bilinguals out-performed English speakers, and, for bilinguals, overall accuracy was correlated with Mandarin dominance. Experiment 2 taught 24 Mandarin-English bilinguals and 25 English speakers novel words with Mandarin-like tones, but English-like phonemes and phonotactics. The Mandarin-dominance advantages observed in Experiment 1 disappeared when words were English-like. Experiment 3 contrasted Mandarin-like vs. English-like words in a within-subjects design, providing even stronger evidence that bilinguals can process tone language-specifically. Bilinguals (N = 58), regardless of language dominance, attended more to tone than English speakers without Mandarin experience (N = 28), but only when words were Mandarin-like—not when they were English-like. Mandarin-English bilinguals thus tailor tone processing to the within-word language context. |
Daniel J. Olson Bilingual language switching costs in auditory comprehension Journal Article In: Language, Cognition and Neuroscience, vol. 32, no. 4, pp. 494–513, 2017. @article{Olson2017, Previous research on bilingual language switching and lexical access has demonstrated a consistent reaction time cost associated with producing a switched token. While some studies have shown these costs to be asymmetrical, with bilinguals evidencing a greater delay when producing switches into their dominant language relative to the non-dominant language, others have shown symmetrical costs, depending on individual (e.g. proficiency) and contextual (e.g. language mode) factors. The current study, employing an eye-tracking paradigm, extends this line of research by examining the potential for switch costs during auditory comprehension. Paralleling previous production-oriented research, results of the current study demonstrate flexible switch costs during auditory comprehension. Switch costs were asymmetrical in monolingual mode, with greater costs incurred when switching into the dominant language, and uniformly absent in bilingual mode. Results are discussed with respect to bilingual language selection mechanisms in both production and comprehension. |
Seiji Ono; Tomohiro Kizuka Effects of visual error timing on smooth pursuit gain adaptation in humans Journal Article In: Journal of Motor Behavior, vol. 49, no. 2, pp. 229–234, 2017. @article{Ono2017, Smooth pursuit (SP) is one of the precise oculomotor behaviors used when tracking a moving object. Adaptation of SP is based on a visual-error-driven motor learning process associated with predictable changes in the visual environment. Proper timing of a sensory signal is an important factor for adaptation of fine motor control. In this study, we investigated whether visual error timing affects SP gain adaptation. An adaptive change in SP gain is produced experimentally by repeated trials of step-ramp tracking with 2 different velocities (double-velocity paradigm). The authors used the double-velocity paradigm where target speed changes 400 or 800 ms after the target onset. The results show that SP gain changed in a certain time window following adaptation. The authors suggest that SP adaptation shown in this study is associated with timing control mechanisms. |
Eduard Ort; Johannes J. Fahrenfort; Christian N. L. Olivers Lack of free choice reveals the cost of having to search for more than one object Journal Article In: Psychological Science, vol. 28, no. 8, pp. 1137–1147, 2017. @article{Ort2017, It is debated whether people can concurrently search for more than one object or whether this results in switch costs. Using a gaze-contingent eye-tracking paradigm we reveal a crucial role for cognitive control. We instructed participants to simultaneously look for two color-defined objects presented among distractors. In one condition, both targets were available, giving the observer free choice on what to look for, allowing for proactive control. In other conditions, only one of the two targets was made available, so that the choice was imposed and reactive control would be required. No switch costs emerged when target choice was free, but reliable switch costs emerged when targets were imposed. Bridging contradictory findings, the results are consistent with a model of visual selection in which only one attentional template is active, and in which the efficiency of switching targets depends on the type of cognitive control allowed for by the environment. |
Marte Otten; Yaïr Pinto; Chris L. E. Paffen; Anil K. Seth; Ryota Kanai The uniformity illusion: Central stimuli can determine peripheral perception Journal Article In: Psychological Science, vol. 28, no. 1, pp. 56–68, 2017. @article{Otten2017, Vision in the fovea, the center of the visual field, is much more accurate and detailed than vision in the periphery. This is not in line with the rich phenomenology of peripheral vision. Here, we investigated a visual illusion that shows that detailed peripheral visual experience is partially based on a reconstruction of reality. Participants fixated on the center of a visual display in which central stimuli differed from peripheral stimuli. Over time, participants perceived that the peripheral stimuli changed to match the central stimuli, so that the display seemed uniform. We showed that a wide range of visual features, including shape, orientation, motion, luminance, pattern, and identity, are susceptible to this uniformity illusion. We argue that the uniformity illusion is the result of a reconstruction of sparse visual information (from the periphery) based on more readily available detailed visual information (from the fovea), which gives rise to a rich, but illusory, experience of peripheral vision. |
Tom Nissens; Katja Fiehler Saccades and reaches curve away from the other effector's target in simultaneous eye and hand movements Journal Article In: Journal of Neurophysiology, vol. 119, pp. 118–123, 2017. @article{Nissens2017a, Simultaneous eye and hand movements are highly coordinated and tightly coupled. This raises the question whether the selection of eye and hand targets relies on a shared attentional mechanism or separate attentional systems. Previous studies have revealed conflicting results by reporting evidence for both a shared as well as separate systems. Movement properties such as movement curvature can provide novel insights into this question as they provide a sensitive measure for attentional allocation during target selection. In the current study, participants performed simultaneous eye and hand movements to the same or different visual target locations. We show that both saccade and reaching movements curve away from the other effector's target location when they are simultaneously performed to spatially distinct locations. We argue that there is a shared attentional mechanism involved in selecting eye and hand targets which may be found on the level of effector independent priority maps. |
Anna Nowakowska; Alasdair D. F. Clarke; Amelia R. Hunt Human visual search behaviour is far from ideal Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 284, no. 1849, pp. 1–6, 2017. @article{Nowakowska2017, Evolutionary pressures have made foraging behaviours highly efficient in many species. Eye movements during search present a useful instance of foraging behaviour in humans. We tested the efficiency of eye movements during search using homogeneous and heterogeneous arrays of line segments. The search target is visible in the periphery on the homogeneous array, but requires central vision to be detected on the heterogeneous array. For a compound search array that is heterogeneous on one side and homogeneous on the other, eye movements should be directed only to the heterogeneous side. Instead, participants made many fixations on the homogeneous side. By comparing search of compound arrays to an estimate of search performance based on uniform arrays, we isolate two contributions to search inefficiency. First, participants make superfluous fixations, sacrificing speed for a perceived (but not actual) gain in response certainty. Second, participants fixate the homogeneous side even more frequently than predicted by inefficient search of uniform arrays, suggesting they also fail to direct fixations to locations that yield the most new information. |
Abigail L. Noyce; Nishmar Cestero; Samantha W. Michalka; Barbara G. Shinn-Cunningham; David C. Somers Sensory-biased and multiple-demand processing in human lateral frontal cortex Journal Article In: Journal of Neuroscience, vol. 37, no. 36, pp. 8755–8766, 2017. @article{Noyce2017, The functionality of much of human lateral frontal cortex (LFC) has been characterized as 'multiple demand' as these regions appear to support a broad range of cognitive tasks. In contrast to this domain-general account, recent evidence indicates that portions of LFC are consistently selective for sensory modality. Michalka et al. (2015) reported two bilateral regions that are biased for visual attention, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), interleaved with two bilateral regions that are biased for auditory attention, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). In the present study, we employ functional MRI to examine both the multiple-demand and sensory-bias hypotheses within caudal portions of human LFC (both men and women participated). Using visual and auditory 2-back tasks, we replicate the finding of two bilateral visual-biased and two bilateral auditory-biased LFC regions, corresponding to sPCS & iPCS and to tgPCS & cIFS, and demonstrate high within-subject reliability of these regions over time and across tasks. In addition, we assess multiple demand responsiveness using BOLD signal recruitment and vector space analysis. In both, we find that the two visual-biased regions, sPCS & iPCS, exhibit stronger multiple demand responsiveness than do the auditory-biased LFC regions, tgPCS & cIFS; however, neither reaches the degree of multiple demand responsiveness exhibited by dorsal anterior cingulate/pre-supplementary motor area or by anterior insula. 
These results reconcile two competing views of LFC by demonstrating the coexistence of sensory specialization and multiple demand functionality, especially in visual-biased LFC structures. |
Lauri Nummenmaa; Lauri Oksama; Enrico Glerean; Jukka Hyönä Cortical circuit for binding object identity and location during multiple-object tracking Journal Article In: Cerebral Cortex, vol. 27, no. 1, pp. 162–172, 2017. @article{Nummenmaa2017, Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants' hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. |
Antje Nuthmann Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 2, pp. 370–392, 2017. @article{Nuthmann2017a, Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation's duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. 
The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general. |
Antje Nuthmann; Wolfgang Einhäuser; Immo Schütz How well can saliency models predict fixation selection in scenes beyond central bias? A new approach to model evaluation using generalized linear mixed models Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 491, 2017. @article{Nuthmann2017, Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available. |
Marcus Nyström; Richard Andersson; Diederick C. Niehorster; Ignace T. C. Hooge Searching for monocular microsaccades – A red Hering of modern eye trackers? Journal Article In: Vision Research, vol. 140, pp. 44–54, 2017. @article{Nystroem2017, Despite early reports and the contemporary consensus on microsaccades as purely binocular phenomena, recent work has proposed not only the existence of monocular microsaccades, but also that they serve functional purposes. We take a critical look at the detection of monocular microsaccades from a signal perspective, using raw data and a state-of-the-art, video-based eye tracker. In agreement with previous work, monocular detections were present in all participants using a standard microsaccade detection algorithm. However, a closer look at the raw data invalidates the vast majority of monocular detections. These results again raise the question of the existence of monocular microsaccades, as well as the need for improved methods to study small eye movements recorded with video-based eye trackers. |
Verena A. Oberlader; Ulrich Ettinger; Rainer Banse; Alexander F. Schmidt Development of a cued pro- and antisaccade paradigm: An indirect measure to explore automatic components of sexual interest Journal Article In: Archives of Sexual Behavior, vol. 46, no. 8, pp. 2377–2388, 2017. @article{Oberlader2017, We developed a cued pro- and antisaccade paradigm (CPAP) to explore automatic components of sexual interest. Heterosexual participants (n = 32 women |
E. Oberwelland; Leonhard Schilbach; I. Barisic; Sarah C. Krall; K. Vogeley; Gereon R. Fink; B. Herpertz-Dahlmann; Kerstin Konrad; Martin Schulte-Rüther Young adolescents with autism show abnormal joint attention network: A gaze contingent fMRI study Journal Article In: NeuroImage: Clinical, vol. 14, pp. 112–121, 2017. @article{Oberwelland2017, Behavioral research has revealed deficits in the development of joint attention (JA) as one of the earliest signs of autism. While the neural basis of JA has been studied predominantly in adults, we recently demonstrated a protracted development of the brain networks supporting JA in typically developing children and adolescents. The present eye-tracking/fMRI study now extends these findings to adolescents with autism. Our results show that in adolescents with autism JA is subserved by abnormal activation patterns in brain areas related to social cognition abnormalities which are at the core of ASD, including the STS and TPJ, despite behavioral maturation with no behavioral differences. Furthermore, in the autism group we observed increased neural activity in a network of social and emotional processing areas during interactions with their mother. Moreover, data indicated that less severely affected individuals with autism showed higher frontal activation associated with self-initiated interactions. Taken together, this study provides first-time data of JA in children/adolescents with autism incorporating the interactive character of JA, its reciprocity and motivational aspects. The observed functional differences in adolescents with ASD suggest that persistent developmental differences in the neural processes underlying JA contribute to social interaction difficulties in ASD. |
Andrew D. Ogle; Dan J. Graham; Rachel G. Lucas-Thompson; Christina A. Roberto Influence of cartoon media characters on children's attention to and preference for food and beverage products Journal Article In: Journal of the Academy of Nutrition and Dietetics, vol. 117, no. 2, pp. 265–270, 2017. @article{Ogle2017, Background: Over-consuming unhealthful foods and beverages contributes to pediatric obesity and associated diseases. Food marketing influences children's food preferences, choices, and intake. Objective: To examine whether adding licensed media characters to healthful food/beverage packages increases children's attention to and preference for these products. We hypothesized that children prefer less- (vs more-) healthful foods, and pay greater attention to and preferentially select products with (vs without) media characters regardless of nutritional quality. We also hypothesized that children prefer more-healthful products when characters are present over less-healthful products without characters. Design: On a computer, participants viewed food/beverage pairs of more-healthful and less-healthful versions of similar products. The same products were shown with and without licensed characters on the packaging. An eye-tracking camera monitored participant gaze, and participants chose which product they preferred from each of 60 pairs. Participants/setting: Six- to 9-year-old children (n=149; mean age=7.36, standard deviation=1.12) recruited from the Twin Cities, MN, area in 2012-2013. Main outcome measures: Visual attention and product choice. Statistical analyses performed: Attention to products was compared using paired-samples t tests, and product choice was analyzed with single-sample t tests. Analyses of variance were conducted to test for interaction effects of specific characters and child sex and age. Results: Children paid more attention to products with characters and preferred less-healthful products. 
Contrary to our prediction, children chose products without characters approximately 62% of the time. Children's choices significantly differed based on age, sex, and the specific cartoon character displayed, with characters in this study being preferred by younger boys. Conclusions: Results suggest that putting licensed media characters on more-healthful food/beverage products might not encourage all children to make healthier food choices, but could increase selection of healthy foods among some, particularly younger children, boys, and those who like the featured character(s). Effective use likely requires careful demographic targeting. |
Sven Ohl; Clara Kuper; Martin Rolfs Selective enhancement of orientation tuning before saccades Journal Article In: Journal of Vision, vol. 17, no. 13, pp. 1–11, 2017. @article{Ohl2017a, Saccadic eye movements cause a rapid sweep of the visual image across the retina and bring the saccade's target into high-acuity foveal vision. Even before saccade onset, visual processing is selectively prioritized at the saccade target. To determine how this presaccadic attention shift exerts its influence on visual selection, we compare the dynamics of perceptual tuning curves before movement onset at the saccade target and in the opposite hemifield. Participants monitored a 30-Hz sequence of randomly oriented gratings for a target orientation. Combining a reverse correlation technique previously used to study orientation tuning in neurons and general additive mixed modeling, we found that perceptual reports were tuned to the target orientation. The gain of orientation tuning increased markedly within the last 100 ms before saccade onset. In addition, we observed finer orientation tuning right before saccade onset. This increase in gain and tuning occurred at the saccade target location and was not observed at the incongruent location in the opposite hemifield. The present findings suggest, therefore, that presaccadic attention exerts its influence on vision in a spatially and feature-selective manner, enhancing performance and sharpening feature tuning at the future gaze location before the eyes start moving. |
Sven Ohl; Martin Rolfs Saccadic eye movements impose a natural bottleneck on visual short-term memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 5, pp. 736–748, 2017. @article{Ohl2017, Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory to VSTM. In 4 experiments, we show that saccades, planned and executed after the disappearance of a memory array, markedly bias visual memory performance. First, items that had appeared at the saccade target were more readily remembered than items that had appeared elsewhere, even though the saccade was irrelevant to the memory task (Experiment 1). Second, this influence was strongest for saccades elicited right after the disappearance of the memory array and gradually declined over the course of a second (Experiment 2). Third, the saccade stabilized memory representations: The imposed bias persisted even several seconds after saccade execution (Experiment 3). Finally, the advantage for stimuli congruent with the saccade target occurred even when that stimulus was far less likely to be probed in the memory test than any other stimulus in the array, ruling out a strategic effort of observers to memorize information presented at the saccade target (Experiment 4). Together, these results make a strong case that saccades inadvertently determine the content of VSTM, and highlight the key role of actions for the fundamental building blocks of cognition. |
Sabine Öhlschläger; Melissa L. -H. Võ SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes Journal Article In: Behavior Research Methods, vol. 49, no. 5, pp. 1780–1791, 2017. @article{Oehlschlaeger2017, Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules – a scene grammar – enable effective attentional guidance and object perception, no common image database containing highly-controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which is being discussed as a possible source of controversial study results. To generate the first database of this kind – SCEGRAM – we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, always two scenes were paired, so that an object was semantically consistent in one scene (e.g., ketchup in kitchen) and inconsistent in the other (e.g., ketchup in bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye-movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.) including paradigms addressing developmental aspects of scene grammar. 
SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/. |
Dekel Abeles; Shlomit Yuval-Greenberg Just look away: Gaze aversions as an overt attentional disengagement mechanism Journal Article In: Cognition, vol. 168, pp. 99–109, 2017. @article{Abeles2017, During visual exploration of a scene, the eye-gaze tends to be directed toward more salient image-locations, containing more information. However, while performing non-visual tasks, such information-seeking behavior could be detrimental to performance, as the perception of irrelevant but salient visual input may unnecessarily increase the cognitive-load. It would be therefore beneficial if during non-visual tasks, eye-gaze would be governed by a drive to reduce saliency rather than maximize it. The current study examined the phenomenon of gaze-aversion during non-visual tasks, which is hypothesized to act as an active avoidance mechanism. In two experiments, gaze-position was monitored by an eye-tracker while participants performed an auditory mental arithmetic task, and in a third experiment they performed an undemanding naming task. Task-irrelevant simple motion stimuli (drifting grating and random dot kinematogram) were centrally presented, moving at varying speeds. Participants averted their gaze away from the moving stimuli more frequently and for longer proportions of the time when the motion was faster than when it was slower. Additionally, a positive correlation was found between the task's difficulty and this aversion behavior. When the task was highly undemanding, no gaze aversion behavior was observed. We conclude that gaze aversion is an active avoidance strategy, sensitive to both the physical features of the visual distractions and the cognitive load imposed by the non-visual task. |
Daniel J. Acheson; Peter Hagoort Stimulating the brainʼs language network: Syntactic ambiguity resolution after TMS to the inferior frontal gyrus and middle temporal gyrus Journal Article In: Journal of Cognitive Neuroscience, vol. 25, no. 10, pp. 1664–1677, 2017. @article{Acheson2017, The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brainʼs language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to the control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension. |
Hossein Adeli; Françoise Vitu; Gregory J. Zelinsky A model of the superior colliculus predicts fixation locations during scene viewing and visual search Journal Article In: Journal of Neuroscience, vol. 37, no. 6, pp. 1453–1467, 2017. @article{Adeli2017, Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. 
With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts. |
Mehmet N. Ağaoğlu; Susana T. L. Chung Interaction between stimulus contrast and pre-saccadic crowding Journal Article In: Royal Society Open Science, vol. 4, no. 2, pp. 1–17, 2017. @article{Agaoglu2017, Objects that are briefly flashed around the time of saccades are mislocalized. Previously, robust interactions between saccadic perceptual distortions and stimulus contrast have been reported. It is also known that crowding depends on the contrast of the target and flankers. Here, we investigated how stimulus contrast and crowding interact with pre-saccadic perception. We asked observers to report the orientation of a tilted Gabor presented in the periphery, with or without four flanking vertically oriented Gabors. Observers performed the task either following a saccade or while maintaining fixation. Contrasts of the target and flankers were independently set to either high or low, with equal probability. In both the fixation and saccade conditions, the flanked conditions resulted in worse discrimination performance—the crowding effect. In the unflanked saccade trials, performance significantly decreased with target-to-saccade onset for low-contrast targets but not for high-contrast targets. In the presence of flankers, impending saccades reduced performance only for low-contrast, but not for high-contrast flankers. Interestingly, average performance in the fixation and saccade conditions was mostly similar in all contrast conditions. Moreover, the magnitude of crowding was influenced by saccades only when the target had high contrast and the flankers had low contrasts. Overall, our results are consistent with modulation of perisaccadic spatial localization by contrast and saccadic suppression, but at odds with a recent report of pre-saccadic release of crowding. |
Jordi Aguila; F. Javier Cudeiro; Casto Rivadulla In: Cerebral Cortex, vol. 27, no. 6, pp. 3331–3345, 2017. @article{Aguila2017, In awake monkeys, we used repetitive transcranial magnetic stimulation (rTMS) to focally inactivate visual cortex while measuring the responsiveness of parvocellular lateral geniculate nucleus (LGN) neurons. Effects were noted in 64/75 neurons, and could be divided into 2 main groups: (1) for 39 neurons, visual responsiveness decreased and visual latency increased without apparent shift in receptive field (RF) position and (2) a second group (n = 25, 33% of the recorded cells) whose excitability was not compromised, but whose RF position shifted an average of 4.5°. This change is related to the retinotopic correspondence observed between the recorded thalamic area and the affected cortical zone. The effect of inactivation for this group of neurons was compatible with silencing the original retinal drive and unmasking a second latent retinal drive onto the studied neuron. These results indicate novel and remarkable dynamics in thalamocortical circuitry that force us to reassess constraints on retinogeniculate transmission. |
Carlos Aguilar; Eric Castet Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people Journal Article In: PLoS ONE, vol. 12, no. 4, pp. e0174910, 2017. @article{Aguilar2017, People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification conflicts with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems): conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. 
This proof of concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading. |
C. J. Aine; H. J. Bockholt; J. R. Bustillo; J. M. Cañive; A. Caprihan; C. Gasparovic; F. M. Hanlon; J. M. Houck; R. E. Jung; J. Lauriello; J. Liu; A. R. Mayer; N. I. Perrone-Bizzozero; S. Posse; Julia M. Stephen; J. A. Turner; V. P. Clark; Vince D. Calhoun Multimodal neuroimaging in schizophrenia: Description and dissemination Journal Article In: Neuroinformatics, vol. 15, no. 4, pp. 343–364, 2017. @article{Aine2017, In this paper we describe an open-access collection of multimodal neuroimaging data in schizophrenia for release to the community. Data were acquired from approximately 100 patients with schizophrenia and 100 age-matched controls during rest as well as several task activation paradigms targeting a hierarchy of cognitive constructs. Neuroimaging data include structural MRI, functional MRI, diffusion MRI, MR spectroscopic imaging, and magnetoencephalography. For three of the hypothesis-driven projects, task activation paradigms were acquired on subsets of ~200 volunteers which examined a range of sensory and cognitive processes (e.g., auditory sensory gating, auditory/visual multisensory integration, visual transverse patterning). Neuropsychological data were also acquired, and genetic material was collected via saliva samples from most of the participants and typed for both genome-wide polymorphism data as well as genome-wide methylation data. Some results are also presented from the individual studies as well as from our data-driven multimodal analyses (e.g., multimodal examinations of network structure and network dynamics and multitask fMRI data analysis across projects). All data will be released through the Mind Research Network's collaborative informatics and neuroimaging suite (COINS). |
Avigael M. Aizenman; Trafton Drew; Krista A. Ehinger; Dianne Georgian-Smith; Jeremy M. Wolfe Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: An eye tracking study Journal Article In: Journal of Medical Imaging, vol. 4, no. 4, pp. 1–22, 2017. @article{Aizenman2017, As a promising imaging modality, digital breast tomosynthesis (DBT) leads to better diagnostic performance than traditional full-field digital mammograms (FFDM) alone. DBT allows different planes of the breast to be visualized, reducing occlusion from overlapping tissue. Although DBT is gaining popularity, best practices for search strategies in this medium are unclear. Eye tracking allowed us to describe search patterns adopted by radiologists searching DBT and FFDM images. Eleven radiologists examined eight DBT and FFDM cases. Observers marked suspicious masses with mouse clicks. Eye position was recorded at 1000 Hz and was coregistered with slice/depth plane as the radiologist scrolled through the DBT images, allowing a 3-D representation of eye position. Hit rate for masses was higher for tomography cases than 2-D cases and DBT led to lower false positive rates. However, search duration was much longer for DBT cases than FFDM. DBT was associated with longer fixations but similar saccadic amplitude compared with FFDM. When comparing radiologists' eye movements to a previous study, which tracked eye movements as radiologists read chest CT, we found DBT viewers did not align with previously identified “driller” or “scanner” strategies, although their search strategy most closely aligns with a type of vigorous drilling strategy. |
Umair Akram; Jason G. Ellis; Andriy Myachykov; Nicola L. Barclay Preferential attention towards the eye-region amongst individuals with insomnia Journal Article In: Journal of Sleep Research, vol. 26, no. 1, pp. 84–91, 2017. @article{Akram2017, People with insomnia often perceive their own facial appearance as more tired compared with the appearance of others. Evidence also highlights the eye-region in projecting tiredness cues to perceivers, and tiredness judgements often rely on preferential attention towards this region. Using a novel eye-tracking paradigm, this study examined: (i) whether individuals with insomnia display preferential attention towards the eye-region, relative to nose and mouth regions, whilst observing faces compared with normal-sleepers; and (ii) whether an attentional bias towards the eye-region amongst individuals with insomnia is self-specific or general in nature. Twenty individuals with DSM-5 Insomnia Disorder and 20 normal-sleepers viewed 48 neutral facial photographs (24 of themselves, 24 of other people) for periods of 4000 ms. Eye movements were recorded using eye-tracking, and first fixation onset, first fixation duration and total gaze duration were examined for three interest-regions (eyes, nose, mouth). Significant group × interest-region interactions indicated that, regardless of the face presented, participants with insomnia were quicker to attend to, and spent more time observing, the eye-region relative to the nose and mouth regions compared with normal-sleepers. However, no group × face × interest-region interactions were established. Thus, whilst individuals with insomnia displayed preferential attention towards the eye-region in general, this effect was not accentuated during self-perception. Insomnia appears to be characterized by a general, rather than self-specific, attentional bias towards the eye-region. 
These findings contribute to our understanding of face perception in insomnia, and provide tentative support for cognitive models of insomnia demonstrating that individuals with insomnia monitor faces in general, with a specific focus around the eye-region, for cues associated with tiredness. |
Noor Z. Al Dahhan; John R. Kirby; Donald C. Brien; Douglas P. Munoz Eye movements and articulations during a letter naming speed task: Children with and without Dyslexia Journal Article In: Journal of Learning Disabilities, vol. 50, no. 3, pp. 275–285, 2017. @article{AlDahhan2017, Naming speed (NS) refers to how quickly and accurately participants name a set of familiar stimuli (e.g., letters). NS is an established predictor of reading ability, but controversy remains over why it is related to reading. We used three techniques (stimulus manipulations to emphasize phonological and/or visual aspects, decomposition of NS times into pause and articulation components, and analysis of eye movements during task performance) with three groups of participants (children with dyslexia, ages 9–10; chronological-age [CA] controls, ages 9–10; reading-level [RL] controls, ages 6–7) to examine NS and the NS–reading relationship. Results indicated (a) for all groups, increasing visual similarity of the letters decreased letter naming efficiency and increased naming errors, saccades, regressions (rapid eye movements back to letters already fixated), pause times, and fixation durations; (b) children with dyslexia performed like RL controls and were less efficient, had longer articulation times, pause times, fixation durations, and made more errors and regressions than CA controls; and (c) pause time and fixation duration were the most powerful predictors of reading. We conclude that NS is related to reading via fixation durations and pause times: Longer fixation durations and pause times reflect the greater amount of time needed to acquire visual/orthographic information from stimuli and prepare the correct response. |
Ada D. Mishler; Mark B. Neider Absence of distracting information explains the redundant signals effect for a centrally presented categorization task Journal Article In: Acta Psychologica, vol. 181, pp. 18–26, 2017. @article{Mishler2017, The redundant signals effect, a speed-up in response times with multiple targets compared to a single target in one display, is well-documented, with some evidence suggesting that it can occur even in conceptual processing when targets are presented bilaterally. The current study was designed to determine whether or not category-based redundant signals can speed up processing even without bilateral presentation. Toward that end, participants performed a go/no-go visual task in which they responded only to members of the target category (i.e., they responded only to numbers and did not respond to letters). Numbers and letters were presented along an imaginary vertical line in the center of the visual field. When the single signal trials contained a nontarget letter (Experiment 1), there was a significant redundant signals effect. The effect was not significant when the single-signal trials did not contain a nontarget letter (Experiments 2 and 3). The results indicate that, when targets are defined categorically and not presented bilaterally, the redundant signals effect may be an effect of reducing the presence of information that draws attention away from the target. This suggests that redundant signals may not speed up conceptual processing when interhemispheric presentation is not available. |
Sanako Mitsugi Incremental comprehension of Japanese passives: Evidence from the visual-world paradigm Journal Article In: Applied Psycholinguistics, vol. 38, no. 4, pp. 953–983, 2017. @article{Mitsugi2017, Psycholinguistic research has shown that sentence processing is incremental (e.g., Altmann & Kamide, 1999). In Japanese, a verb-final language, native speakers use case markers to incrementally assign thematic roles and predictively activate a structural representation of upcoming linguistic items. This study examined whether second-language learners of Japanese, guided by case markers, generate predictions as to whether the upcoming verb involves the active or passive voice. The results show that the native speakers made predictive eye movements before the verb, but the learners did not; the learners were less efficient in using case-marker cues than the native speakers and relied more on verb morphology information. These results suggest that case markers guide thematic role assignments, expediting the processing for Japanese native speakers. Learners may depend more on information from the verb to compensate for the inefficiency in case-marker-driven predictive processing. |
Holger Mitterer; Eva Reinisch Visual speech influences speech perception immediately but not automatically Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 2, pp. 660–678, 2017. @article{Mitterer2017, Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately. |
Koji Miwa; Ton Dijkstra Lexical processes in the recognition of Japanese horizontal and vertical compounds Journal Article In: Reading and Writing, vol. 30, no. 4, pp. 791–812, 2017. @article{Miwa2017a, This lexical decision eye-tracking study investigated whether horizontal and vertical readings elicit comparable behavioral patterns and whether reading directions modulate lexical processes. Response times and eye movements were recorded during a lexical decision task with Japanese bimorphemic compound words presented vertically. The data were then analyzed together with those obtained in a horizontal lexical decision experiment of Miwa, Libben, Dijkstra, and Baayen (2014). Linear mixed-effects analyses of response times and eye movements revealed that, although response times and first fixation durations were notably shorter in horizontal reading than vertical reading, the vertical reading elicited fewer fixations. Furthermore, while compounds were recognized largely in comparable ways regardless of reading direction, several lexical processes were found to be reading-direction-dependent. Particularly, processing of the first morpheme was modulated by reading direction in a late time frame, such that a horizontal reading advantage was observed for words with a high frequency first morpheme. All in all, the two reading directions differ not only quantitatively in processing speed, but also qualitatively in terms of underlying processing mechanisms. |
Koji Miwa; Gary Libben; Yu Ikemoto Visual trimorphemic compound recognition in a morphographic script Journal Article In: Language, Cognition and Neuroscience, vol. 32, no. 1, pp. 1–20, 2017. @article{Miwa2017, This lexical decision with eye-tracking study investigated how Japanese trimorphemic compounds (e.g. 体温計 “clinical thermometer”) are recognised. The questions answered were, in the course of decomposing and composing Japanese trimorphemic compounds, (1) whether recognition processes are tuned for a specific branching direction, (2) whether the morphological processing proceeds in a bottom-up combinatorial manner, and (3) whether the three constituents of trimorphemic compounds are equally important and processed serially. Mixed-effects regression analyses of response times and fixation durations revealed that a left-branching advantage appears in a late time frame and that, although there was early processing of the whole compound from the first fixation, a character frequency effect was also observed. Furthermore, the first and the third, but not the second, constituent frequencies contributed to compound recognition. This bathtub-like effect was further supported by corpus-based evidence: the conditional probability for the second constituent is incomparably high. |
Tobias Moehler; Katja Fiehler In: Experimental Brain Research, vol. 235, no. 11, pp. 3251–3260, 2017. @article{Moehler2017, The current study investigated the role of automatic encoding and maintenance of remembered, past, and present visual distractors for reach movement planning. Previous research on eye movements showed that saccades curve away from locations actively kept in working memory and also from task-irrelevant perceptually present visual distractors, but not from task-irrelevant past distractors. Curvature away has been associated with an inhibitory mechanism resolving the competition between multiple active movement plans. Here, we examined whether reach movements underlie a similar inhibitory mechanism and thus show systematic modulation of reach trajectories when the location of a previously presented distractor has to be (a) maintained in working memory or (b) ignored, or (c) when the distractor is perceptually present. Participants performed vertical reach movements on a computer monitor from a home to a target location. Distractors appeared laterally and near or far from the target (equidistant from central fixation). We found that reaches curved away from the distractors located close to the target when the distractor location had to be memorized and when it was perceptually present, but not when the past distractor had to be ignored. Our findings suggest that automatically encoding present distractors and actively maintaining the location of past distractors in working memory evoke a similar response competition resolved by inhibition, as has been previously shown for saccadic eye movements. |
Rebecca L. Monk; J. Westwood; Derek Heim; Adam W. Qureshi The effect of pictorial content on attention levels and alcohol-related beliefs: An eye-tracking study Journal Article In: Journal of Applied Social Psychology, vol. 47, no. 3, pp. 158–164, 2017. @article{Monk2017, To examine attention levels to different types of alcohol warning labels. Twenty-two participants viewed neutral or graphic warning messages while dwell times for text and image components of messages were assessed. Pre- and post-exposure outcome expectancies were assessed in order to compute change scores. Dwell times were significantly higher for the image, as opposed to the text, components of warnings, irrespective of image type. Participants whose expectancies increased after exposure to the warnings spent longer looking at the image than did those whose positive expectancies remained static or decreased. Images in alcohol warnings appear beneficial for drawing attention, although findings may suggest that this is also associated with heightened positive alcohol-related beliefs. Implications for health intervention are discussed and future research in this area is recommended. |
Ilya E. Monosov Anterior cingulate is a source of valence-specific information about value and uncertainty Journal Article In: Nature Communications, vol. 8, pp. 134, 2017. @article{Monosov2017, Anterior cingulate cortex (ACC) is thought to control a wide range of reward, punishment, and uncertainty-related behaviors. However, how it does so is unclear. Here, in a Pavlovian procedure in which monkeys displayed a diverse repertoire of reward-related, punishment-related, and uncertainty-related behaviors, we show that many ACC-neurons represent expected value and uncertainty in a valence-specific manner, signaling value or uncertainty predictions about either rewards or punishments. Other ACC-neurons signal prediction information about rewards and punishments by displaying excitation to both (rather than excitation to one and inhibition to the other). This diversity in valence representations may support the role of ACC in many behavioral states that are either enhanced by reward and punishment (e.g., vigilance) or specific to either reward or punishment (e.g., approach and avoidance). Also, this first demonstration of punishment-uncertainty signals in the brain suggests that ACC could be a target for the treatment of uncertainty-related disorders of mood. |
Masahiro Morii; Takashi Ideno; Kazuhisa Takemura; Mitsuhiro Okada Qualitatively coherent representation makes decision-making easier with binary-colored multi-attribute tables: An eye-tracking study Journal Article In: Frontiers in Psychology, vol. 8, pp. 1388, 2017. @article{Morii2017, We aimed to identify the ways in which coloring cells affected decision-making in the context of binary-colored multi-attribute tables, using eye movement data. In our black-white attribute tables, the value of attributes was limited to two (with a certain threshold for each attribute) and each cell of the table was colored either black or white on the white background. We compared the two natural ways of systematic color assignment: “quantitatively coherent” ways and “qualitatively coherent” ways (namely, the ways in which the black-white distinction represented the quantitative amount distinction, and the ways in which the black-white distinction represented the quality distinction). The former consists of the following two types: (Type 1) “larger is black,” where the larger value-level was represented by black, and “smaller is white,” and (Type 2) “smaller is black.” The latter consisted of the following two types: (Type 3) “better is black,” and (Type 4) “worse is black.” We obtained the following two findings. [Result 1] The qualitatively coherent black-white tables (Types 3 and 4) made decision-making easier than the quantitatively coherent ones (Types 1 and 2). [Result 2] Among the two qualitatively coherent types, the “black is better” tables (Type 3) made decision-making easier; in fact, the participants focused on the more important (black) cells in the case of “black is better” tables (Type 3) while they did not focus enough on the more important (white) ones in the case of the “white is better” tables (Type 4). We also examined some measures of eye movement patterns and showed that these measures supported our hypotheses. 
The data showed differences in the eye movement patterns between the first and second halves of each trial, which indicated the phased or combined decision strategies taken by the participants. |
Kentaro Morita; Kenichiro Miura; Michiko Fujimoto; Hidenaga Yamamori; Yuka Yasuda; Masao Iwase; Kiyoto Kasai; Ryota Hashimoto Eye movement as a biomarker of schizophrenia: Using an integrated eye movement score Journal Article In: Psychiatry and Clinical Neurosciences, vol. 71, no. 2, pp. 104–114, 2017. @article{Morita2017, Aim: Studies have shown that eye movement abnormalities are possible neurophysiological biomarkers for schizophrenia. The aim of this study was to investigate the utility of eye movement abnormalities in identifying patients with schizophrenia from healthy controls. Methods: Eighty-five patients with schizophrenia and 252 healthy controls participated in this study. Eye movement measures were collected from free viewing, fixation stability, and smooth pursuit tests. In an objective and stepwise method, eye movement measures were extracted to create an integrated eye movement score. Results: The discriminant analysis resulted in three eye movement measures: the scanpath length during the free viewing test, the horizontal position gain during the fast Lissajous paradigm of the smooth pursuit test, and the duration of fixations during the far distractor paradigm of the fixation stability test. An integrated score using these variables can distinguish patients with schizophrenia from healthy controls with 82% accuracy. The integrated score was correlated with Wechsler Adult Intelligence Scale-Third Edition full scale IQ, Positive and Negative Syndrome Scale scores, and chlorpromazine equivalents, with different correlation patterns in the three eye movement measures used. The discriminant analysis in subgroups matched for age, sex, years of education, and premorbid IQ revealed a sustained classification rate. Conclusion: We established an integrated eye movement score with high classification accuracy between patients with schizophrenia and healthy controls, although there was a significant effect of medication. 
This study provides further evidence of the utility of eye movement abnormalities in schizophrenia pathology and treatment. |
Vincenzo Moscati; Likan Zhan; Peng Zhou Children's on-line processing of epistemic modals Journal Article In: Journal of Child Language, vol. 44, pp. 1025–1040, 2017. @article{Moscati2017, In this paper we investigated the real-time processing of epistemic modals in five-year-olds. In a simple reasoning scenario, we monitored children's eye-movements while processing a sentence with modal expressions of different force (might/must). Children were also asked to judge the truth-value of the target sentences at the end of the reasoning task. Consistent with previous findings (Noveck, 2001), we found that children's behavioural responses were much less accurate compared to adults. Their eye-movements, however, revealed that children did not treat the two modal expressions alike. As soon as a modal expression was presented, children and adults showed a similar fixation pattern that varied as a function of the modal expression they heard. It is only at the very end of the sentence that children's fixations diverged from the adult ones. We discuss these findings in relation to the proposal that children narrow down the set of possible outcomes in undetermined reasoning scenarios and endorse only one possibility among several (Acredolo & Horobin, 1987; Ozturk & Papafragou, 2015). |
Yuki Motomura; Ruri Katsunuma; Michitaka Yoshimura; Kazuo Mishima Two days' sleep debt causes mood decline during resting state via diminished amygdala-prefrontal connectivity Journal Article In: Sleep, vol. 40, no. 10, pp. zsx133, 2017. @article{Motomura2017, Study objectives: Sleep debt (SD) has been suggested to evoke emotional instability by diminishing the suppression of the amygdala by the medial prefrontal cortex (MPFC). Here, we investigated how short-term SD affects resting-state functional connectivity between the amygdala and MPFC, self-reported mood, and sleep parameters. Methods: Eighteen healthy adult men aged 29 ± 8.24 years participated in a 2-day sleep control session (SC; time in bed [TIB], 9 hours) and 2-day SD session (TIB, 3 hours). On day 2 of each session, resting-state functional magnetic resonance imaging was performed, followed immediately by measuring self-reported mood on the State-Trait Anxiety Inventory-State subscale (STAI-S). Results: STAI-S score was significantly increased, and functional connectivity between the amygdala and MPFC was significantly decreased in SD compared with SC. Significant correlations were observed between reduced rapid eye movement (REM) sleep and reduced left amygdala-MPFC functional connectivity (FC_L-amg-MPFC) and between reduced FC_L-amg-MPFC and increased STAI-S score in SD compared with SC. Conclusions: These findings suggest that reduced MPFC functional connectivity of amygdala activity is involved in mood deterioration under SD, and that REM sleep reduction is involved in functional changes in the corresponding brain regions. Having adequate REM sleep may be important for mental health maintenance. |
Alexandra S. Mueller; Esther G. González; Chris McNorgan; Martin J. Steinbach; Brian Timney Aperture extent and stimulus speed affect the perception of visual acceleration Journal Article In: Experimental Brain Research, vol. 235, no. 3, pp. 743–752, 2017. @article{Mueller2017b, Humans are generally poor at detecting the presence of visual acceleration, but it is unclear whether the extent of a field of moving objects through an aperture affects this ability. Hypothetically, the farther a stimulus can accelerate uninterrupted by an aperture's physical constraints, the easier it should be to discern its motion profile. We varied the horizontal extent of the aperture through which continuously accelerating or decelerating random dot arrays were presented at different average speeds, and measured acceleration and deceleration detection thresholds. We also hypothesized that manipulating aperture extent at different speeds would change how observers visually pursue acceleration, which we tested in a control experiment. Results showed that, while there was no difference between the acceleration and deceleration conditions, detection was better in the larger than small aperture conditions. Regardless of aperture size, smaller acceleration and deceleration rates (relative to average speed) were needed to detect changing speed in faster than slower speed ranges. Similarly, observers tracked the stimuli to a greater extent in the larger than small apertures, and smooth pursuit was overall poorer at faster than slower speeds. Notably, the effect of speed on pursuit was greater for the larger than small aperture conditions, suggesting that the small aperture restricted pursuit. Furthermore, there was little difference in psychophysical and eye movement data between the medium and large aperture conditions within each speed range, indicating that it is easier to detect an accelerating profile when the aperture is large enough to encourage a minimum level of pursuit. |
Hermann J. Mueller; Thomas Geyer; Franziska Günther; Jim Kacian; Stella Pierides Reading English-Language haiku: Processes of meaning construction revealed by eye movements Journal Article In: Journal of Eye Movement Research, vol. 10, no. 1, pp. 1–33, 2017. @article{Mueller2017, In the present study, poets and cognitive scientists came together to investigate the construction of meaning in the process of reading normative, 3-line English-language haiku (ELH), as found in leading ELH journals. The particular haiku which we presented to our readers consisted of two semantically separable parts, or images, that were set in a ‘tense' relationship by the poet. In our sample of poems, the division, or cut, between the two parts was positioned either after line 1 or after line 2; and the images related to each other in terms of either a context–action association (context–action haiku) or a conceptually more abstract association (juxtaposition haiku). From a constructivist perspective, understanding such haiku would require the reader to integrate these parts into a coherent ‘meaning Gestalt', mentally (re-)creating the pattern intended by the poet (or one from within the poem's meaning potential). To examine this process, we recorded readers' eye movements, and we obtained measures of memory for the read poems as well as subjective ratings of comprehension difficulty and understanding achieved. The results indicate that processes of meaning construction are reflected in patterns of eye movements during reading (1st-pass) and re-reading (2nd- and 3rd-pass). From those, the position of the cut (after line 1 vs. after line 2) and, to some extent, the type of haiku (context–action vs. juxtaposition) can be ‘recovered'. 
Moreover, post-reading, readers tended to explicitly recognize a particular haiku they had read if they had been able to understand the poem, pointing to a role of actually resolving the haiku's meaning (rather than just attempting to resolve it) for memory consolidation and subsequent retrieval. Taken together, these first findings are promising, suggesting that haiku can be a paradigmatic material for studying meaning construction during poetry reading. |
Stefanie Mueller; Katja Fiehler In: PLoS ONE, vol. 12, no. 7, pp. e0180782, 2017. @article{Mueller2017a, In previous research, we demonstrated that spatial coding of proprioceptive reach targets depends on the presence of an effector movement (Mueller & Fiehler, Neuropsychologia, 2014, 2016). In these studies, participants were asked to reach in darkness with their right hand to a proprioceptive target (tactile stimulation on the finger tip) while their gaze was varied. They either moved their left, stimulated hand towards a target location or kept it stationary at this location where they received a touch on the fingertip to which they reached with their right hand. When the stimulated hand was moved, reach errors varied as a function of gaze relative to target whereas reach errors were independent of gaze when the hand was kept stationary. The present study further examines whether (a) the availability of proprioceptive online information, i.e. reaching to an online versus a remembered target, (b) the time of the effector movement, i.e. before or after target presentation, or (c) the target distance from the body influences gaze-centered coding of proprioceptive reach targets. We found gaze-dependent reach errors in the conditions which included a movement of the stimulated hand irrespective of whether proprioceptive information was available online or remembered. This suggests that an effector movement leads to gaze-centered coding for both online and remembered proprioceptive reach targets. Moreover, moving the stimulated hand before or after target presentation did not affect gaze-dependent reach errors, thus, indicating a continuous spatial update of positional signals of the stimulated hand rather than the target location per se. However, reaching to a location close to the body rather than farther away (but still within reachable space) generally decreased the influence of a gaze-centered reference frame. |
Parashkev Nachev; Geoff E. Rose; David H. Verity; Sanjay G. Manohar; Kelly MacKenzie; Gill Adams; Maria Theodorou; Quentin A. Pankhurst; Christopher Kennard Magnetic oculomotor prosthetics for acquired nystagmus Journal Article In: Ophthalmology, vol. 124, no. 10, pp. 1556–1564, 2017. @article{Nachev2017, Purpose: Acquired nystagmus, a highly symptomatic consequence of damage to the substrates of oculomotor control, often is resistant to pharmacotherapy. Although heterogeneous in its neural cause, its expression is unified at the effector—the eye muscles themselves—where physical damping of the oscillation offers an alternative approach. Because direct surgical fixation would immobilize the globe, action at a distance is required to damp the oscillation at the point of fixation, allowing unhindered gaze shifts at other times. Implementing this idea magnetically, herein we describe the successful implantation of a novel magnetic oculomotor prosthesis in a patient. Design: Case report of a pilot, experimental intervention. Participant: A 49-year-old man with longstanding, medication-resistant, upbeat nystagmus resulting from a paraneoplastic syndrome caused by stage 2A, grade I, nodular sclerosing Hodgkin's lymphoma. Methods: We designed a 2-part, titanium-encased, rare-earth magnet oculomotor prosthesis, powered to damp nystagmus without interfering with the larger forces involved in saccades. Its damping effects were confirmed when applied externally. We proceeded to implant the device in the patient, comparing visual functions and high-resolution oculography before and after implantation and monitoring the patient for more than 4 years after surgery. Main Outcome Measures: We recorded Snellen visual acuity before and after intervention, as well as the amplitude, drift velocity, frequency, and intensity of the nystagmus in each eye. 
Results: The patient reported a clinically significant improvement of 1 line of Snellen acuity (from 6/9 bilaterally to 6/6 on the left and 6/5–2 on the right), reflecting an objectively measured reduction in the amplitude, drift velocity, frequency, and intensity of the nystagmus. These improvements were maintained throughout a follow-up of 4 years and enabled him to return to paid employment. Conclusions: This work opens a new field of implantable therapeutic devices—oculomotor prosthetics—designed to modify eye movements dynamically by physical means in cases where a purely neural approach is ineffective. Applied to acquired nystagmus refractory to all other interventions, it is shown successfully to damp pathologic eye oscillations while allowing normal saccadic shifts of gaze. |
Claire K. Naughtin; Kristina Horne; Dana Schneider; Dustin Venini; Ashley York; Paul E. Dux Do implicit and explicit belief processing share neural substrates? Journal Article In: Human Brain Mapping, vol. 38, no. 9, pp. 4760–4772, 2017. @article{Naughtin2017, Humans rely on their ability to infer another person's mental state to understand and predict others' behavior (“theory of mind,” ToM). Multiple lines of research suggest that not only are humans able to consciously process another person's belief state, but also are able to do so implicitly. Here we explored how general implicit belief states are represented in the brain, compared to those substrates involved in explicit ToM processes. Previous work on this topic has yielded conflicting results, and thus, the extent to which the implicit and explicit ToM systems draw on common neural bases is unclear. Participants were presented with “Sally-Anne” type movies in which a protagonist was falsely led to believe a ball was in one location, only for a puppet to later move it to another location in their absence (false-belief condition). In other movies, the protagonist had their back turned the entire time the puppet moved the ball between the two locations, meaning that they had no opportunity to develop any pre-existing beliefs about the scenario (no-belief condition). Using a group of independently localized explicit ToM brain regions, we found greater activity for false-belief trials, relative to no-belief trials, in the right temporoparietal junction, right superior temporal sulcus, precuneus, and left middle prefrontal gyrus. These findings extend upon previous work on the neural bases of implicit ToM by showing substantial overlap between this system and the explicit ToM system, suggesting that both abilities might recruit a common set of mentalizing processes/functional brain regions. |
Maital Neta; Tien T. Tong; Monica L. Rosen; Alex Enersen; M. Justin Kim; Michael D. Dodd All in the first glance: First fixation predicts individual differences in valence bias Journal Article In: Cognition and Emotion, vol. 31, no. 4, pp. 772–780, 2017. @article{Neta2017, Surprised expressions are interpreted as negative by some people, and as positive by others. When compared to fearful expressions, which are consistently rated as negative, surprise and fear share similar morphological structures (e.g. widened eyes), but these similarities are primarily in the upper part of the face (eyes). We hypothesised, then, that individuals would be more likely to interpret surprise positively when fixating faster to the lower part of the face (mouth). Participants rated surprised and fearful faces as either positive or negative while eye movements were recorded. Positive ratings of surprise were associated with longer fixation on the mouth than negative ratings. There were also individual differences in fixation patterns, with individuals who fixated the mouth earlier exhibiting increased positive ratings. These findings suggest that there are meaningful individual differences in how people process faces. |
Sujaya Neupane; Daniel Guitton; Christopher C. Pack Coherent alpha oscillations link current and future receptive fields during saccades Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 29, pp. E5979–E5985, 2017. @article{Neupane2017, Oscillations are ubiquitous in the brain, and they can powerfully influence neural coding. In particular, when oscillations at distinct sites are coherent, they provide a means of gating the flow of neural signals between different cortical regions. Coherent oscillations also occur within individual brain regions, although the purpose of this coherence is not well understood. Here, we report that within a single brain region, coherent alpha oscillations link stimulus representations as they change in space and time. Specifically, in primate cortical area V4, alpha coherence links sites that encode the retinal location of a visual stimulus before and after a saccade. These coherence changes exhibit properties similar to those of receptive field remapping, a phenomenon in which individual neurons change their receptive fields according to the metrics of each saccade. In particular, alpha coherence, like remapping, is highly dependent on the saccade vector and the spatial arrangement of current and future receptive fields. Moreover, although visual stimulation plays a modulatory role, it is neither necessary nor sufficient to elicit alpha coherence. Indeed, a similar pattern of coherence is observed even when saccades are made in darkness. Together, these results show that the pattern of alpha coherence across the retinotopic map in V4 matches many of the properties of receptive field remapping. Thus, oscillatory coherence might play a role in constructing the stable representation of visual space that is an essential aspect of conscious perception. |
Daniel P. Newman; Gerard M. Loughnane; Simon P. Kelly; Redmond G. O'Connell; Mark A. Bellgrove Visuospatial asymmetries arise from differences in the onset time of perceptual evidence accumulation Journal Article In: Journal of Neuroscience, vol. 37, no. 12, pp. 3378–3385, 2017. @article{Newman2017, Healthy subjects tend to exhibit a bias of visual attention whereby left hemifield stimuli are processed more quickly and accurately than stimuli appearing in the right hemifield. It has long been held that this phenomenon arises from the dominant role of the right cerebral hemisphere in regulating attention. However, methods that would enable more precise understanding of the mechanisms underpinning visuospatial bias have remained elusive. We sought to finely trace the temporal evolution of spatial biases by leveraging a novel bilateral dot motion detection paradigm. In combination with electroencephalography, this paradigm enables researchers to isolate discrete neural signals reflecting the key neural processes needed for making these detection decisions. These include signals for spatial attention, early target selection, evidence accumulation, and motor preparation. Using this method, we established that three key neural markers accounted for unique between-subject variation in visuospatial bias: hemispheric asymmetry in posterior α power measured before target onset, which is related to the distribution of preparatory attention across the visual field; asymmetry in the peak latency of the early N2c target-selection signal; and, finally, asymmetry in the onset time of the subsequent neural evidence-accumulation process with earlier onsets for left hemifield targets. 
Our development of a single paradigm to dissociate distinct processing components that track the temporal evolution of spatial biases not only advances our understanding of the neural mechanisms underpinning normal visuospatial attention bias, but may also in the future aid differential diagnoses in disorders of spatial attention. |
Veerle Neyens; Rose Bruffaerts; Antonietta G. Liuzzi; Ioannis Kalfas; Ronald Peeters; Emmanuel Keuleers; Rufin Vogels; Simon De Deyne; Gert Storms; Patrick Dupont; Rik Vandenberghe Representation of semantic similarity in the left intraparietal sulcus: Functional magnetic resonance imaging evidence Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 402, 2017. @article{Neyens2017, According to a recent study, semantic similarity between concrete entities correlates with the similarity of activity patterns in left middle IPS during category naming. We examined the replicability of this effect under passive viewing conditions, the potential role of visuoperceptual similarity, where the effect is situated compared to regions that have been previously implicated in visuospatial attention, and how it compares to effects of object identity and location. Forty-six subjects participated. Subjects passively viewed pictures from two categories, musical instruments and vehicles. Semantic similarity between entities was estimated based on a concept-feature matrix obtained in more than 1,000 subjects. Visuoperceptual similarity was modeled based on the HMAX model, the AlexNet deep convolutional learning model, and thirdly, based on subjective visuoperceptual similarity ratings. Among the IPS regions examined, only left middle IPS showed a semantic similarity effect. The effect was significant in hIP1, hIP2, and hIP3. Visuoperceptual similarity did not correlate with similarity of activity patterns in left middle IPS. The semantic similarity effect in left middle IPS was significantly stronger than in the right middle IPS and also stronger than in the left or right posterior IPS. The semantic similarity effect was similar to that seen in the angular gyrus. Object identity effects were much more widespread across nearly all parietal areas examined. Location effects were relatively specific for posterior IPS and area 7 bilaterally. 
To conclude, the current findings replicate the semantic similarity effect in left middle IPS under passive viewing conditions, and demonstrate its anatomical specificity within a cytoarchitectonic reference frame. We propose that the semantic similarity effect in left middle IPS reflects the transient uploading of semantic representations in working memory. |
James E. Niemeyer; Michael A. Paradiso Contrast sensitivity, V1 neural activity, and natural vision Journal Article In: Journal of Neurophysiology, vol. 117, no. 2, pp. 492–508, 2017. @article{Niemeyer2017, Contrast sensitivity is fundamental to natural visual processing and an important tool for characterizing both visual function and clinical disorders. We simultaneously measured contrast sensitivity and neural contrast response functions and compared measurements in common laboratory conditions with naturalistic conditions. In typical experiments, a subject holds fixation and a stimulus is flashed on, whereas in natural vision, saccades bring stimuli into view. Motivated by our previous V1 findings, we tested the hypothesis that perceptual contrast sensitivity is lower in natural vision and that this effect is associated with corresponding changes in V1 activity. We found that contrast sensitivity and V1 activity are correlated and that the relationship is similar in laboratory and naturalistic paradigms. However, in the more natural situation, contrast sensitivity is reduced up to 25% compared with that in a standard fixation paradigm, particularly at lower spatial frequencies, and this effect correlates with significant reductions in V1 responses. Our data suggest that these reductions in natural vision result from fast adaptation on one fixation that lowers the response on a subsequent fixation. This is the first demonstration of rapid, natural-image adaptation that carries across saccades, a process that appears to constantly influence visual sensitivity in natural vision. NEW & NOTEWORTHY Visual sensitivity and activity in brain area V1 were studied in a paradigm that included saccadic eye movements and natural visual input. V1 responses and contrast sensitivity were significantly reduced compared with results in common laboratory paradigms.
The parallel neural and perceptual effects of eye movements and stimulus complexity appear to be due to a form of rapid adaptation that carries across saccades. |
Jenny A. Nij Bijvank; L. J. Balk; H. S. Tan; Bernard M. J. Uitdehaag; L. J. Rijn; A. Petzold A rare cause for visual symptoms in multiple sclerosis: Posterior internuclear ophthalmoplegia of Lutz, a historical misnomer Journal Article In: Journal of Neurology, vol. 264, no. 3, pp. 600–602, 2017. @article{NijBijvank2017, A 22-year-old female patient with a 1 year history of relapsing remitting multiple sclerosis (MS) complained of difficulties focusing and brief episodes of horizontal gaze-evoked diplopia. Symptoms occurred intermittently at rest, and increased whilst walking or cycling in busy environments. Her past medical and family history was unremarkable and she was not taking any medication. On examination extraocular eye movements were full and convergence was normal. There was no abducting or adducting nystagmus, no convincingly reproducible slowing of saccades on repeated testing, and no oscillopsia. The remainder of her cranial nerve examination was normal. Her vestibulo-ocular reflex was normal. The optokinetic nystagmus was not tested. We thought we had not sufficiently excluded the possibility of an internuclear ophthalmoplegia (INO) and recorded the eye movements with high-frequency infrared oculography (EyeLink 1000 Plus, SR Research Ltd., Canada). |
Yaser Merrikhi; Kelsey Clark; Eddy Albarran; Mohammadbagher Parsa; Marc Zirnsak; Tirin Moore; Behrad Noudoost Spatial working memory alters the efficacy of input to visual cortex Journal Article In: Nature Communications, vol. 8, pp. 15041, 2017. @article{Merrikhi2017, Prefrontal cortex modulates sensory signals in extrastriate visual cortex, in part via its direct projections from the frontal eye field (FEF), an area involved in selective attention. We find that working memory-related activity is a dominant signal within FEF input to visual cortex. Although this signal alone does not evoke spiking responses in areas V4 and MT during memory, the gain of visual responses in these areas increases, and neuronal receptive fields expand and shift towards the remembered location, improving the stimulus representation by neuronal populations. These results provide a basis for enhancing the representation of working memory targets and implicate persistent FEF activity as a basis for the interdependence of working memory and selective attention. |
Kate E. Merritt; Ken N. Seergobin; Daniel A. Mendonça; Mary E. Jenkins; Melvyn A. Goodale; Penny A. MacDonald Automatic online motor control is intact in Parkinson's Disease with and without perceptual awareness Journal Article In: eNeuro, vol. 4, no. 5, pp. 1–12, 2017. @article{Merritt2017, In the double-step paradigm, healthy human participants automatically correct reaching movements when targets are displaced. Motor deficits are prominent in Parkinson's disease (PD) patients. In the lone investigation of online motor correction in PD using the double-step task, a recent study found that PD patients performed unconscious adjustments appropriately but seemed impaired for consciously-perceived modifications. Conscious perception of target movement was achieved by linking displacement to movement onset. PD-related bradykinesia disproportionately prolonged preparatory phases for movements to original target locations for patients, potentially accounting for deficits. Eliminating this confound in a double-step task, we evaluated the effect of conscious awareness of trajectory change on online motor corrections in PD. On and off dopaminergic therapy, PD patients (n = 14) and healthy controls (n = 14) reached to peripheral visual targets that remained stationary or unexpectedly moved during an initial saccade. Saccade latencies in PD are comparable to controls'. Hence, target displacements occurred at equal times across groups. Target jump size affected conscious awareness, confirmed in an independent target displacement judgment task. Small jumps were subliminal, but large target displacements were consciously perceived. Contrary to the previous result, PD patients performed online motor corrections normally and automatically, irrespective of conscious perception. Patients evidenced equivalent movement durations for jump and stay trials, and trajectories for patients and controls were identical, irrespective of conscious perception. 
Dopaminergic therapy had no effect on performance. In summary, online motor control is intact in PD, unaffected by conscious perceptual awareness. The basal ganglia are not implicated in online corrective responses. |
Natalie Mestry; Tamaryn Menneer; Kyle R. Cave; Hayward J. Godwin; Nick Donnelly Dual-target cost in visual search for multiple unfamiliar faces Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 8, pp. 1504–1519, 2017. @article{Mestry2017, The efficiency of visual search for one (single-target) and either of two (dual-target) unfamiliar faces was explored to understand the manifestations of capacity and guidance limitations in face search. The visual similarity of distractor faces to target faces was manipulated using morphing (Experiments 1 and 2) and multidimensional scaling (Experiment 3). A dual-target cost was found in all experiments, evidenced by slower and less accurate search in dual- than single-target conditions. The dual-target cost was unequal across the targets, with performance being maintained on one target and reduced on the other, which we label “preferred” and “non-preferred” respectively. We calculated the capacity for each target face and show reduced capacity for representing the non-preferred target face. However, results show that the capacity for the non-preferred target can be increased when the dual-target condition is conducted after participants complete the single-target conditions. Analyses of eye movements revealed evidence for weak guidance of fixations in single-target search, and when searching for the preferred target in dual-target search. Overall, the experiments show dual-target search for faces is capacity- and guidance-limited, leading to superior search for 1 face over the other in dual-target search. However, learning faces individually may improve capacity with the second face. |
Paul Metzner; Titus Malsburg; Shravan Vasishth; Frank Rösler The importance of reading naturally: Evidence from combined recordings of eye movements and electric brain potentials Journal Article In: Cognitive Science, vol. 41, pp. 1232–1263, 2017. @article{Metzner2017, How important is the ability to freely control eye movements for reading comprehension? And how does the parser make use of this freedom? We investigated these questions using coregistration of eye movements and event-related brain potentials (ERPs) while participants read either freely or in a computer-controlled word-by-word format (also known as RSVP). Word-by-word presentation and natural reading both elicited qualitatively similar ERP effects in response to syntactic and semantic violations (N400 and P600 effects). Comprehension was better in free reading but only in trials in which the eyes regressed to previous material upon encountering the anomaly. A more fine-grained ERP analysis revealed that these regressions were strongly associated with the well-known P600 effect. In trials without regressions, we instead found sustained centro-parietal negativities starting at around 320 ms post-onset; however, these negativities were only found when the violation occurred in sentence-final position. Taken together, these results suggest that the sentence processing system engages in strategic choices: In response to words that don't match built-up expectations, it can either explore alternative interpretations (reflected by regressions, P600 effects, and good comprehension) or pursue a "good-enough" processing strategy that tolerates a deficient interpretation (reflected by progressive saccades, sustained negativities, and relatively poor comprehension). |
Inga Meyhöfer; Veena Kumari; Antje Hill; Nadine Petrovsky; Ulrich Ettinger Sleep deprivation as an experimental model system for psychosis: Effects on smooth pursuit, prosaccades, and antisaccades Journal Article In: Journal of Psychopharmacology, vol. 31, no. 4, pp. 418–433, 2017. @article{Meyhoefer2017, Current antipsychotic medications fail to satisfactorily reduce negative and cognitive symptoms and produce many unwanted side effects, necessitating the development of new compounds. Cross-species, experimental behavioural model systems can be valuable to inform the development of such drugs. The aim of the current study was to further test the hypothesis that controlled sleep deprivation is a safe and effective model system for psychosis when combined with oculomotor biomarkers of schizophrenia. Using a randomized counterbalanced within-subjects design, we investigated the effects of 1 night of total sleep deprivation in 32 healthy participants on smooth pursuit eye movements (SPEM), prosaccades (PS), antisaccades (AS), and self-ratings of psychosis-like states. Compared with a normal sleep control night, sleep deprivation was associated with reduced SPEM velocity gain, higher saccadic frequency at 0.2 Hz, elevated PS spatial error, and an increase in AS direction errors. Sleep deprivation also increased intra-individual variability of SPEM, PS, and AS measures. In addition, sleep deprivation induced psychosis-like experiences mimicking hallucinations, cognitive disorganization, and negative symptoms, which in turn had moderate associations with AS direction errors. Taken together, sleep deprivation resulted in psychosis-like impairments in SPEM and AS performance. However, diverging somewhat from the schizophrenia literature, sleep deprivation additionally disrupted PS control. 
Sleep deprivation thus represents a promising but possibly unspecific experimental model that may be helpful to further improve our understanding of the underlying mechanisms in the pathophysiology of psychosis and aid the development of antipsychotic and pro-cognitive drugs. |
Inga Meyhöfer; Maria Steffens; Eliana Faiola; Anna-Maria Kasparbauer; Veena Kumari; Ulrich Ettinger Combining two model systems of psychosis: The effects of schizotypy and sleep deprivation on oculomotor control and psychotomimetic states Journal Article In: Psychophysiology, vol. 54, no. 11, pp. 1755–1769, 2017. @article{Meyhoefer2017a, Model systems of psychosis, such as schizotypy or sleep deprivation, are valuable in informing our understanding of the etiology of the disorder and aiding the development of new treatments. Schizophrenia patients, high schizotypes, and sleep-deprived subjects are known to share deficits in oculomotor biomarkers. Here, we aimed to further validate the schizotypy and sleep deprivation models and investigated, for the first time, their interactive effects on smooth pursuit eye movements (SPEM), prosaccades, antisaccades, predictive saccades, and measures of psychotomimetic states, anxiety, depression, and stress. To do so |
Martina Micai; Holly S. S. L. Joseph; Mila Vulchanova; David Saldaña Strategies of readers with autism when responding to inferential questions: An eye-movement study Journal Article In: Autism Research, vol. 10, no. 5, pp. 888–900, 2017. @article{Micai2017, Previous research suggests that individuals with autism spectrum disorder (ASD) have difficulties with inference generation in reading tasks. However, most previous studies have examined how well children understand a text after reading or have measured on-line reading behavior without requiring responses to questions. The aim of this study was to investigate, by monitoring eye movements, the online strategies of children and adolescents with autism while reading and simultaneously responding to questions. The reading behavior of participants with ASD was compared with that of age-, language-, nonverbal intelligence-, reading-, and receptive language skills-matched participants without ASD (control group). The results showed that the ASD group was as accurate as the control group in generating inferences when answering questions about the short texts, and no differences were found between the two groups in global paragraph reading and responding times. However, the ASD group displayed longer gaze latencies on a target word necessary to produce an inference. They also showed more regressions into the word that supported the inference compared to the control group after reading the question, irrespective of whether an inference was required or not. In conclusion, the ASD group achieved an equivalent level of inferential comprehension, but showed subtle differences in reading comprehension strategies compared to the control group. |
Audrey L. Michal; Steven L. Franconeri Visual routines are associated with specific graph interpretations Journal Article In: Cognitive Research: Principles and Implications, vol. 2, pp. 1–10, 2017. @article{Michal2017, We argue that people compare values in graphs with a visual routine – attending to data values in an ordered pattern over time. Do these visual routines exist to manage capacity limitations in how many values can be encoded at once, or do they actually affect the relations that are extracted? We measured eye movements while people judged configurations of a two-bar graph based on size only (“[short tall] or [tall short]?”) and contrast only (“[light dark] or [dark light]?”). Participants exhibited visual routines in which they systematically attended to a specific feature (or “anchor point”) in the graph; in the size task, most participants inspected the taller bar first, and in the contrast task, most participants attended to the darker bar first. Participants then judged configurations that varied in both size and contrast (e.g., [short-light tall-dark]); however, only one dimension was task-relevant (varied between subjects). During this orthogonal task, participants overwhelmingly relied on the same anchor point used in the single-dimension version, but only for the task-relevant dimension (e.g., taller bar for the size-relevant task). These results suggest that visual routines are associated with specific graph interpretations. Responses were also faster when task-relevant and task-irrelevant anchor points appeared on the same object (congruent) than on different objects (incongruent). This interference from the task-irrelevant dimension suggests that top-down control may be necessary to extract relevant relations from graphs. The effect of visual routines on graph comprehension has implications for both science, technology, engineering, and mathematics pedagogy and graph design. |
Andra Mihali; Bas Opheusden; Wei Ji Ma Bayesian microsaccade detection Journal Article In: Journal of Vision, vol. 17, no. 1, pp. 1–23, 2017. @article{Mihali2017, Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. |
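The "default" detection method that BMD is benchmarked against — flagging samples where smoothed eye velocity exceeds a threshold — can be sketched in a few lines. The version below is an illustrative pure-Python sketch, not the authors' BMD code or any specific tracker's parser; the sample interval, threshold multiplier, and robust median-based spread estimate are all assumptions for demonstration.

```python
# Illustrative sketch of a velocity-threshold microsaccade detector (the
# baseline approach BMD is compared against). NOT the authors' algorithm:
# dt, lam, and the median-based spread estimate are assumed defaults.

def detect_microsaccades(x, dt=0.002, lam=6.0):
    """Return (start, end) position-index pairs of supra-threshold runs.

    x   : 1-D eye position trace (deg)
    dt  : sample interval in seconds (2 ms ~ a 500 Hz tracker; assumption)
    lam : multiplier on the robust velocity spread (assumption)
    """
    # central-difference velocity (deg/s); speed[j] belongs to position j + 1
    v = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
    speed = [abs(u) for u in v]
    # robust threshold: median speed plus lam * median absolute deviation,
    # which is largely insensitive to the saccades being detected
    med = sorted(speed)[len(speed) // 2]
    spread = sorted(abs(s - med) for s in speed)[len(speed) // 2] or 1e-9
    thresh = med + lam * spread
    # group consecutive supra-threshold samples into candidate events
    events, start = [], None
    for i, s in enumerate(speed):
        if s > thresh and start is None:
            start = i
        elif s <= thresh and start is not None:
            events.append((start + 1, i))  # inclusive position indices
            start = None
    if start is not None:
        events.append((start + 1, len(speed)))
    return events
```

A Bayesian detector in the spirit of BMD would replace the hard threshold with posterior inference over a hidden drift/microsaccade state sequence, which is what lets it separate small saccades from measurement noise.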
Kelly Miles; Catherine M. McMahon; Isabelle Boisvert; Ronny Ibrahim; Peter Lissa; Petra L. Graham; Björn Lyxell Objective assessment of listening effort: Coregistration of pupillometry and EEG Journal Article In: Trends in Hearing, vol. 21, 2017. @article{Miles2017, Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while only pupil dilation was also significantly related to true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort. |
Ailsa E. Millen; Lorraine Hope; Anne P. Hillstrom; Aldert Vrij Tracking the truth: The effect of face familiarity on eye fixations during deception Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 5, pp. 930–943, 2017. @article{Millen2017, In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure. |
Lisa M. Soederberg Miller; Elizabeth Applegate; Laurel A. Beckett; MacHelle D. Wilson; Tanja N. Gibson Age differences in the use of serving size information on food labels: Numeracy or attention? Journal Article In: Public Health Nutrition, vol. 20, no. 5, pp. 786–796, 2017. @article{Miller2017, Objective: The ability to use serving size information on food labels is important for managing age-related chronic conditions such as diabetes, obesity and cancer. Past research suggests that older adults are at risk for failing to accurately use this portion of the food label due to numeracy skills. However, the extent to which older adults pay attention to serving size information on packages is unclear. We compared the effects of numeracy and attention on age differences in accurate use of serving size information while individuals evaluated product healthfulness. Design: Accuracy and attention were assessed across two tasks in which participants compared nutrition labels of two products to determine which was more healthful if they were to consume the entire package. Participants' eye movements were monitored as a measure of attention while they compared two products presented side-by-side on a computer screen. Numeracy as well as food label habits and nutrition knowledge were assessed using questionnaires. Setting: Sacramento area, California, USA, 2013–2014. Subjects: Stratified sample of 358 adults, aged 20–78 years. Results: Accuracy declined with age among those older adults who paid less attention to serving size information. Although numeracy, nutrition knowledge and self-reported food label use supported accuracy, these factors did not influence age differences in accuracy. Conclusions: The data suggest that older adults are less accurate than younger adults in their use of serving size information. Age differences appear to be more related to lack of attention to serving size information than to numeracy skills. |
Mark Mills; Mohammed Alwatban; Benjamin Hage; Erin Barney; Edward J. Truemper; Gregory R. Bashford; Michael D. Dodd In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 7, pp. 1291–1302, 2017. @article{Mills2017, Systematic patterns of eye movements during scene perception suggest a functional distinction between 2 viewing modes: an ambient mode (characterized by short fixations and large saccades) thought to reflect dorsal activity involved with spatial analysis, and a focal mode (characterized by long fixations and small saccades) thought to reflect ventral activity involved with object analysis. Little neuroscientific evidence exists supporting this claim. Here, functional transcranial Doppler ultrasound (fTCD) was used to investigate whether these modes show hemispheric specialization. Participants viewed scenes for 20 s under instructions to search or memorize. Overall, early viewing was right lateralized, whereas later viewing was left lateralized. This right-to-left shift interacted with viewing task (more pronounced in the memory task). Importantly, changes in lateralization correlated with changes in eye movements. This is the first demonstration of right hemisphere bias for eye movements servicing spatial analysis and left hemisphere bias for eye movements servicing object analysis. |
Sorato Minami; Kaoru Amano Illusory jitter perceived at the frequency of alpha oscillations Journal Article In: Current Biology, vol. 27, no. 15, pp. 1–13, 2017. @article{Minami2017, Neural oscillations, such as alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–100 Hz), are widespread across cortical areas, and their possible functional roles include feature binding [1], neuronal communication [2, 3], and memory [1, 4]. The most prominent signal among these neural oscillations is the alpha oscillation. Although accumulating evidence suggests that alpha oscillations correlate with various aspects of visual processing [5–18], the number of studies proving their causal contribution in visual perception is limited [11, 16–18]. Here we report that illusory visual vibrations are consciously experienced at the frequency of intrinsic alpha oscillations. We employed an illusory jitter perception termed the motion-induced spatial conflict [19] that originates from the cyclic interaction between motion and shape processing. Comparison between the perceived frequency of illusory jitter and the peak alpha frequency (PAF) measured using magnetoencephalography (MEG) revealed that the inter- and intra-participant variations of the PAF are mirrored by an illusory jitter perception. More crucially, psychophysical and MEG measurements during amplitude-modulated current stimulation [20] showed that the PAF can be artificially manipulated, which results in a corresponding change in the perceived jitter frequency. These results suggest the causal contribution of neural oscillations at the alpha frequency in creating temporal characteristics of visual perception. Our results suggest that cortical areas, dorsal and ventral visual areas in this case, are interacting at the frequency of alpha oscillations [2, 3, 21–27]. |
Juri Minxha; Clayton Mosher; Jeremiah K. Morrow; Adam N. Mamelak; Ralph Adolphs; Katalin M. Gothard; Ueli Rutishauser Fixations gate species-specific responses to free viewing of faces in the human and macaque amygdala Journal Article In: Cell Reports, vol. 18, no. 4, pp. 878–891, 2017. @article{Minxha2017, Neurons in the primate amygdala respond prominently to faces. This implicates the amygdala in the processing of socially significant stimuli, yet its contribution to social perception remains poorly understood. We evaluated the representation of faces in the primate amygdala during naturalistic conditions by recording from both human and macaque amygdala neurons during free viewing of identical arrays of images with concurrent eye tracking. Neurons responded to faces only when they were fixated, suggesting that neuronal activity was gated by visual attention. Further experiments in humans utilizing covert attention confirmed this hypothesis. In both species, the majority of face-selective neurons preferred faces of conspecifics, a bias also seen behaviorally in first fixation preferences. Response latencies, relative to fixation onset, were shortest for conspecific-selective neurons and were ∼100 ms shorter in monkeys compared to humans. This argues that attention to faces gates amygdala responses, which in turn prioritize species-typical information for further processing. |
Michael G. Cutter; Denis Drieghe; Simon P. Liversedge Reading sentences of uniform word length: Evidence for the adaptation of the preferred saccade length during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 11, pp. 1895–1911, 2017. @article{Cutter2017a, In the current study, the effect of removing word length variability within sentences on spatial aspects of eye movements during reading was investigated. Participants read sentences that were uniform in terms of word length, with each sentence consisting entirely of three-, four-, or five-letter words, or a combination of these word lengths. Several interesting findings emerged. Adaptation of the preferred saccade length occurred for sentences with different uniform word length; participants would be more accurate at making short saccades while reading uniform sentences of three-letter words, while they would be more accurate at making long saccades while reading uniform sentences of five-letter words. Furthermore, word skipping was affected such that three- and four-letter words were more likely, and five-letter words less likely, to be directly fixated in uniform compared to non-uniform sentences. It is argued that saccadic targeting during reading is highly adaptable and flexible toward the characteristics of the text currently being read, as opposed to the idea implemented in most current models of eye movement control during reading that readers develop a preference for making saccades of a certain length across a lifetime of experience with a given language. |
Michael G. Cutter; Denis Drieghe; Simon P. Liversedge Is orthographic information from multiple parafoveal words processed in parallel: An eye-tracking study Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 8, pp. 1550–1567, 2017. @article{Cutter2017, In the current study we investigated whether orthographic information available from 1 upcoming parafoveal word influences the processing of another parafoveal word. Across 2 experiments we used the boundary paradigm (Rayner, 1975) to present participants with an identity preview of the 2 words after the boundary (e.g., hot pan), a preview in which 2 letters were transposed between these words (e.g., hop tan), or a preview in which the same 2 letters were substituted (e.g., hob fan). We hypothesized that if these 2 words were processed in parallel in the parafovea then we may observe significant preview benefits for the condition in which the letters were transposed between words relative to the condition in which the letters were substituted. However, no such effect was observed, with participants fixating the words for the same amount of time in both conditions. This was the case both when the transposition was made between the final and first letter of the 2 words (e.g., hop tan as a preview of hot pan; Experiment 1) and when the transposition maintained within-word letter position (e.g., pit hop as a preview of hit pop; Experiment 2). The implications of these findings are considered in relation to serial and parallel lexical processing during reading. |
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken Identifying the machine translation error types with the greatest impact on post-editing effort Journal Article In: Frontiers in Psychology, vol. 8, pp. 1282, 2017. @article{Daems2017a, Translation Environment Tools make translators' work easier by providing them with term lists, translation memories and machine translation output. Ideally, such tools automatically predict whether it is more effortful to post-edit than to translate from scratch, and determine whether or not to provide translators with machine translation output. Current machine translation quality estimation systems heavily rely on automatic metrics, even though they do not accurately capture actual post-editing effort. In addition, these systems do not take translator experience into account, even though novices' translation processes are different from those of professional translators. In this paper, we report on the impact of machine translation errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with that on various process effort indicators. The translation and post-editing process of student translators and professional translators was logged with a combination of keystroke logging and eye-tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by machine translation quality, but that different error types affect different post-editing effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues are shown to be good indicators of post-editing effort. 
The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected. |
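The product-based effort indicator in the study above, HTER, is commonly computed as the minimum number of word-level edits needed to turn the MT output into its post-edited version, divided by the post-edited length. The following is a hedged, simplified sketch — not the authors' tooling — that counts only insertions, deletions, and substitutions (full TER/HTER implementations such as tercom also model block shifts and tokenization details):

```python
# Simplified word-level HTER sketch: edit distance between MT output and its
# post-edited version, normalized by post-edited length. Illustrative only;
# full TER additionally handles phrase shifts and normalization.

def hter(mt_output, post_edited):
    hyp, ref = mt_output.split(), post_edited.split()
    # classic dynamic-programming word-level edit distance (Levenshtein)
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i                      # delete all remaining hyp words
    for j in range(len(ref) + 1):
        d[0][j] = j                      # insert all remaining ref words
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(hyp)][len(ref)] / max(len(ref), 1)
```

For example, `hter("the cat sit on mat", "the cat sat on the mat")` requires one substitution and one insertion against a six-word post-edit, giving 2/6. A low HTER means little was changed during post-editing, which is why it is used as a product-level proxy for effort.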
Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken Translation methods and experience: A comparative analysis of human translation and post-editing with students and professional translators Journal Article In: Meta, vol. 62, no. 2, pp. 245–270, 2017. @article{Daems2017, While the benefits of using post-editing for technical texts have been more or less acknowledged, it remains unclear whether post-editing is a viable alternative to human translation for more general text types. In addition, we need a better understanding of both translation methods and how they are performed by students as well as professionals, so that pitfalls can be determined and translator training can be adapted accordingly. In this article, we aim to get a better understanding of the differences between human translation and post-editing for newspaper articles. Processes are registered by means of eye tracking and keystroke logging, which allows us to study translation speed, cognitive load, and the use of external resources. We also look at the final quality of the product as well as translators' attitude towards both methods of translation. Studying these different aspects shows that both methods and groups are more similar than anticipated. |
Weiwei Dai; Ivan Selesnick; John-Ross Rizzo; Alexa Ruel; Todd E. Hudson A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades Journal Article In: Journal of Vision, vol. 17, no. 9, pp. 1–15, 2017. @article{Dai2017, The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter. |
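The conventional Savitzky-Golay smoother that the paper generalizes fits a low-order polynomial over a sliding window, which for a symmetric window and quadratic fit reduces to fixed closed-form coefficients. The sketch below is an illustrative pure-Python version (not the authors' generalized filter); it also shows the behavior the paper addresses — a sharp peak, like saccadic peak velocity, is attenuated by the smoothing:

```python
# Quadratic Savitzky-Golay smoothing from the closed-form coefficients for a
# symmetric window of 2m+1 samples. Illustrative sketch of the conventional
# filter only; the paper's generalization adds a sparse component for abrupt
# deviations, which is not implemented here.

def savgol_smooth(y, m=5):
    """Smooth y with a quadratic SG filter of window 2m+1 (edges left as-is)."""
    denom = (2 * m + 3) * (2 * m + 1) * (2 * m - 1)
    # closed-form quadratic-fit weights; for m=2 these are (-3,12,17,12,-3)/35
    c = [(3 * (3 * m * m + 3 * m - 1) - 15 * i * i) / denom
         for i in range(-m, m + 1)]
    out = []
    for t in range(len(y)):
        if t < m or t >= len(y) - m:
            out.append(y[t])             # edge samples left unfiltered
        else:
            out.append(sum(ci * y[t + i]
                           for ci, i in zip(c, range(-m, m + 1))))
    return out
```

Because the weights reproduce any quadratic exactly, slowly varying signals pass through unchanged, while a sharp triangular pulse comes out with a lower peak — exactly the underestimation of peak saccadic velocity that motivates the generalized filter.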
Olga Dal Monte; Matthew Piva; Kevin M. Anderson; Marios Tringides; Avram J. Holmes; Steve W. C. Chang Oxytocin under opioid antagonism leads to supralinear enhancement of social attention Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 20, pp. 5247–5252, 2017. @article{DalMonte2017, To provide new preclinical evidence toward improving the efficacy of oxytocin (OT) in treating social dysfunction, we tested the benefit of administering OT under simultaneously induced opioid antagonism during dyadic gaze interactions in monkeys. OT coadministered with a μ-opioid receptor antagonist, naloxone, invoked a supralinear enhancement of prolonged and selective social attention, producing a stronger effect than the summed effects of each administered separately. These effects were consistently observed when averaging over entire sessions, as well as specifically following events of particular social importance, including mutual eye contact and mutual reward receipt. Furthermore, attention to various facial regions was differentially modulated depending on social context. Using the Allen Institute's transcriptional atlas, we further established the colocalization of μ-opioid and κ-opioid receptor genes and OT genes at the OT-releasing sites in the human brain. These data across monkeys and humans support a regulatory relationship between the OT and opioid systems and suggest that administering OT under opioid antagonism may boost the therapeutic efficacy of OT for enhancing social cognition. |
Mario Dalmaso; Luigi Castelli; Giovanni Galfano Attention holding elicited by direct-gaze faces is reflected in saccadic peak velocity Journal Article In: Experimental Brain Research, vol. 235, no. 11, pp. 3319–3332, 2017. @article{Dalmaso2017, Manual response times to peripherally presented targets have been reported to be greater in the presence of task-irrelevant pictorial faces at fixation which establish eye contact with the observer. This effect is interpreted as evidence that direct-gaze faces hold attention. In three experiments, we investigated whether this attention-holding effect is also reflected in saccadic response times. Participants were asked to make a saccade towards a symbolic target that could appear rightwards or leftwards, in the presence of a task-irrelevant centrally placed face with either direct gaze or closed eyes. Unexpectedly, saccadic response times did not show any consistent response pattern as a function of whether the faces were presented with direct gaze vs. closed eyes. Interestingly, saccadic peak velocities were found to be lower in the presence of faces with direct gaze rather than closed eyes (Experiment 1). This effect emerged even in the presence of non-human primate faces (Experiment 2), and no differences between direct gaze and closed eyes emerged when the faces were presented inverted rather than upright (Experiment 3). Overall, these findings suggest that eye contact can have an impact on the saccadic generation system. |
Mario Dalmaso; Luigi Castelli; Pietro Scatturin; Giovanni Galfano Trajectories of social vision: Eye contact increases saccadic curvature Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 358–365, 2017. @article{Dalmaso2017b, Saccades are known to deviate away from distractors, and the amplitude of this deviation seems to reflect the salience of these stimuli, as in the case of human faces. Here, we investigated whether eye contact can modulate attention allocation by examining saccadic curvature when faces with closed vs. open eyes act as distractors. In two experiments, participants were asked to perform a vertical saccade towards a symbolic target. At the same time, task-irrelevant faces with open or closed eyes (Experiments 1 and 2) and scrambled faces (Experiment 2) could appear leftwards or rightwards with respect to the ideal trajectory towards the target. Overall, a greater saccadic curvature was observed in response to faces with open eyes, as compared to the other two conditions. These results confirm that eye contact plays an important role in shaping attentional mechanisms and provide further evidence concerning the link between social vision and eye movements. |
Mario Dalmaso; Luigi Castelli; Pietro Scatturin; Giovanni Galfano Working memory load modulates microsaccadic rate Journal Article In: Journal of Vision, vol. 17, no. 3, pp. 1–12, 2017. @article{Dalmaso2017a, Microsaccades are tiny eye movements that individuals perform unconsciously during fixation. Although the nature and the functions of microsaccades are still lively debated, recent evidence has shown an association between these micro eye movements and higher-order cognitive processes. Here, in two experiments, we specifically focused on working memory and addressed whether differential memory load could be reflected in a modulation of microsaccade dynamics. In Experiment 1, participants memorized a numerical sequence composed of either two (low-load condition) or five digits (high-load condition), appearing at fixation. The results showed a reduction in the microsaccadic rate in the high-load compared to the low-load condition. In Experiment 2, five red or green digits were always presented at fixation. Participants either memorized the color (low-load condition) or the five digits (high-load condition). Hence, visual stimuli were exactly the same in both conditions. Consistent with Experiment 1, microsaccadic rate was lower in the high-load than in the low-load condition. Overall, these findings reveal that an engagement of working memory can have an impact on microsaccadic rate, consistent with the view that microsaccade generation is pervious to top-down processes. |
Atser Damsma; Hedderik van Rijn Pupillary response indexes the metrical hierarchy of unattended rhythmic violations Journal Article In: Brain and Cognition, vol. 111, pp. 95–103, 2017. @article{Damsma2017, The perception of music is a complex interaction between what we hear and our interpretation. This is reflected in beat perception, in which a listener infers a regular pulse from a musical rhythm. Although beat perception is a fundamental human ability, it is still unknown whether attention to the music is necessary to establish the perception of stronger and weaker beats, or meter. In addition, to what extent beat perception is dependent on musical expertise is still a matter of debate. Here, we address these questions by measuring the pupillary response to omissions at different metrical positions in drum rhythms, while participants attended to another task. We found that the omission of the salient first beat elicited a larger pupil dilation than the omission of the less-salient second beat. This result shows that participants not only detected the beat without explicit attention to the music, but also perceived a metrical hierarchy of stronger and weaker beats. This suggests that hierarchical beat perception is an automatic process that requires no or minimal attentional resources. In addition, we found no evidence for the hypothesis that hierarchical beat perception is affected by musical expertise, suggesting that elementary beat perception might be independent of musical expertise. Finally, our results show that pupil dilation reflects surprise without explicit attention, demonstrating that the pupil is an accessible index to signatures of unattended processing. |
Christopher L. Dancy; Frank E. Ritter IGT-Open: An open-source, computerized version of the Iowa Gambling Task Journal Article In: Behavior Research Methods, vol. 49, no. 3, pp. 972–978, 2017. @article{Dancy2017, The Iowa Gambling Task (IGT) is commonly used to understand the processes involved in decision-making. Though the task was originally run without a computer, using a computerized version of the task has become typical. These computerized versions of the IGT are useful, because they can make the task more standardized across studies and allow for the task to be used in environments where a physical version of the task may be difficult or impossible to use (e.g., while collecting brain imaging data). Though these computerized versions of the IGT have been useful for experimentation, having multiple software implementations of the task could present reliability issues. We present an open-source software version of the Iowa Gambling Task (called IGT-Open) that allows for millisecond visual presentation accuracy and is freely available to be used and modified. This software has been used to collect data from human subjects and also has been used to run model-based simulations with computational process models developed to run in the ACT-R architecture. |
Gina M. D'Andrea-Penna; Sebastian M. Frank; Todd F. Heatherton; Peter U. Tse Distracting tracking: Interactions between negative emotion and attentional load in multiple-object tracking Journal Article In: Emotion, vol. 17, no. 6, pp. 900–904, 2017. @article{DAndreaPenna2017, Stimuli that attract exogenous attention have been shown to interfere with behavioral performance on various tasks. In the present study, participants performed multiple-object tracking (MOT) in conditions where either neutral or negatively valenced images were flashed at fixation. Results reveal a significant impairment of tracking accuracy in the emotional MOT conditions compared to the neutral conditions specifically at the highest level of task difficulty. These findings suggest that emotional distraction is most detrimental when maximal endogenous attentional engagement is required. This interaction between emotional distraction and attentional load is inconsistent with existing models of emotional distraction. |
Yarden Dankner; Lilach Shalev; Marisa Carrasco; Shlomit Yuval-Greenberg Prestimulus inhibition of saccades in adults with and without attention-deficit/hyperactivity disorder as an index of temporal expectations Journal Article In: Psychological Science, vol. 28, no. 7, pp. 835–850, 2017. @article{Dankner2017, [A published erratum corrects errors in Figures 4, 7, and 9 of the original article.] Knowing when to expect important events to occur is critical for preparing context-appropriate behavior. However, anticipation is inherently complicated to assess because conventional measurements of behavior, such as accuracy and reaction time, are available only after the predicted event has occurred. Anticipatory processes, which occur prior to target onset, are typically measured only retrospectively by these methods. In this study, we utilized a novel approach for assessing temporal expectations through the dynamics of prestimulus saccades. Results showed that saccades of neurotypical participants were inhibited prior to the onset of stimuli that appeared at predictable compared with less predictable times. No such inhibition was found in most participants with attention-deficit/hyperactivity disorder (ADHD), and particularly not in those who experienced difficulties in sustaining attention over time. These findings suggest that individuals with ADHD, especially those with sustained-attention deficits, have diminished ability to benefit from temporal predictability, and this could account for some of their context-inappropriate behaviors. |
Joshua Davis; Elinor McKone; Marc Zirnsak; Tirin Moore; Richard O'Kearney; Deborah Apthorp; Romina Palermo Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability Journal Article In: British Journal of Psychology, vol. 108, no. 1, pp. 191–219, 2017. @article{Davis2017, This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory. |