All EyeLink Eye Tracker Publications
All 14,000+ peer-reviewed EyeLink research publications up until 2025 (with some early 2026s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2016
Mark D. Lescroart; Nancy Kanwisher; Julie D. Golomb No evidence for automatic remapping of stimulus features or location found with fMRI Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 53, 2016. The input to our visual system shifts every time we move our eyes. To maintain a stable percept of the world, visual representations must be updated with each saccade. Near the time of a saccade, neurons in several visual areas become sensitive to the regions of visual space that their receptive fields occupy after the saccade. This process, known as remapping, transfers information from one set of neurons to another, and may provide a mechanism for visual stability. However, it is not clear whether remapping transfers information about stimulus features in addition to information about stimulus location. To investigate this issue, we recorded BOLD fMRI responses while human subjects viewed images of faces and houses (two visual categories with many feature differences). Immediately after some image presentations, subjects made a saccade that moved the previously stimulated location to the opposite side of the visual field. We then used a combination of univariate analyses and multivariate pattern analyses to test whether information about stimulus location and stimulus features were remapped to the ipsilateral hemisphere after the saccades. We found no reliable indication of stimulus feature remapping in any region. However, we also found no reliable indication of stimulus location remapping, despite the fact that our paradigm was highly similar to previous fMRI studies of remapping. The absence of location remapping in our study precludes strong conclusions regarding feature remapping. However, these results also suggest that measurement of location remapping with fMRI depends strongly on the details of the experimental paradigm used. We highlight differences in our approach from the original fMRI studies of remapping, discuss potential reasons for the failure to generalize prior location remapping results, and suggest directions for future research.
Rosanna K. Olsen; Vinoja Sebanayagam; Yunjo Lee; Morris Moscovitch; Cheryl L. Grady; R. Shayna Rosenbaum; Jennifer D. Ryan The relationship between eye movements and subsequent recognition: Evidence from individual differences and amnesia Journal Article In: Cortex, vol. 85, pp. 182–193, 2016. There is consistent agreement regarding the positive relationship between cumulative eye movement sampling and subsequent recognition, but the role of the hippocampus in this sampling behavior is currently unknown. It is also unclear whether the eye movement repetition effect, i.e., fewer fixations to repeated, compared to novel, stimuli, depends on explicit recognition and/or an intact hippocampal system. We investigated the relationship between cumulative sampling, the eye movement repetition effect, subsequent memory, and the hippocampal system. Eye movements were monitored in a developmental amnesic case (H.C.), whose hippocampal system is compromised, and in a group of typically developing participants while they studied single faces across multiple blocks. The faces were studied from the same viewpoint or different viewpoints and were subsequently tested with the same or different viewpoint. Our previous work suggested that hippocampal representations support explicit recognition for information that changes viewpoint across repetitions (Olsen et al., 2015). Here, examination of eye movements during encoding indicated that greater cumulative sampling was associated with better memory among controls. Increased sampling, however, was not associated with better explicit memory in H.C., suggesting that increased sampling only improves memory when the hippocampal system is intact. The magnitude of the repetition effect was not correlated with cumulative sampling, nor was it related reliably to subsequent recognition. These findings indicate that eye movements collect information that can be used to strengthen memory representations that are later available for conscious remembering, whereas eye movement repetition effects reflect a processing change due to experience that does not necessarily reflect a memory representation that is available for conscious appraisal. Lastly, H.C. demonstrated a repetition effect for fixed viewpoint faces but not for variable viewpoint faces, which suggests that repetition effects are differentially supported by neocortical and hippocampal systems, depending upon the representational nature of the underlying memory trace.
Hyojin Park; Christoph Kayser; Gregor Thut; Joachim Gross Lip movements entrain the observers' low-frequency brain oscillations to facilitate speech intelligibility Journal Article In: eLife, vol. 5, pp. 1–17, 2016. During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker's lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker's lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing.
Wei Shen; Qingqing Qu; Xingshan Li In: Attention, Perception, & Psychophysics, vol. 78, no. 5, pp. 1267–1284, 2016. In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
Christina U. Pfeuffer; Andrea Kiesel; Lynn Huestegge A look into the future: Spontaneous anticipatory saccades reflect processes of anticipatory action control Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 11, pp. 1530–1547, 2016. According to ideomotor theory, human action control uses anticipations of one's own actions' future consequences, that is, action effect anticipations, as a means of triggering actions that will produce desired outcomes (e.g., Hommel, Müsseler, Aschersleben, & Prinz, 2001). Using the response-effect compatibility paradigm (Kunde, 2001), we demonstrate that the anticipation of one's own manual actions' future consequences not only triggers appropriate (i.e., instructed) actions, but simultaneously induces spontaneous (uninstructed) anticipatory saccades to the location of future action consequences. In contrast to behavioral response-effect compatibility effects that have been linked to processes of action selection and action planning, our results suggest that these anticipatory saccades serve the function of outcome evaluation, that is, the comparison of expected/intended and observed action outcomes. Overall, our results demonstrate the informational value of additionally analyzing uninstructed behavioral components complementary to instructed responses and allow us to specify essential mechanisms of the complex interplay between the manual and oculomotor control system in goal-directed action control.
Esther X. W. Wu; Fook-Kee Chua; Shih-Cheng Yen Saccade plan overlap and cancellation during free viewing Journal Article In: Vision Research, vol. 127, pp. 122–131, 2016. In the current study, we examined how the saccadic system responds when visual information changes dynamically in our environment. Previous studies, using the double-step task, have shown that (a) saccade plans could overlap, such that saccade preparation to an object started even while the saccade preparation to another object was ongoing, and (b) saccade plans could be cancelled before they were completed. In these studies, saccade targets were restricted to a few, experimenter-defined locations. Here, we examined whether saccade plan overlap and cancellation mechanisms could be observed in free-viewing conditions. For each trial, we constructed sets of two images, each containing five objects. All objects had unique positions. Image 1 was presented for several fixations, before Image 2 was presented during a fixation, presumably while a saccade plan to an object in Image 1 was ongoing. There were two crucial findings: (a) the saccade immediately following the transition was sometimes executed towards objects in Image 2, and not an object in Image 1, suggesting that the earlier saccade plan to an Image 1 object had been cancelled; and (b) analysis of the temporal data also suggested that preparation of the first post-transition saccade started before an earlier saccade plan to an Image 1 object was executed, implying that saccade plans overlapped.
Pablo A. Barrionuevo; Dingcai Cao Luminance and chromatic signals interact differently with melanopsin activation to control the pupil light response Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 29, 2016. Intrinsically photosensitive retinal ganglion cells (ipRGCs) express the photopigment melanopsin. These cells receive afferent inputs from rods and cones, which provide inputs to the postreceptoral visual pathways. It is unknown, however, how melanopsin activation is integrated with postreceptoral signals to control the pupillary light reflex. This study reports human flicker pupillary responses measured using stimuli generated with a five-primary photostimulator that selectively modulated melanopsin, rod, S-, M-, and L-cone excitations in isolation, or in combination to produce postreceptoral signals. We first analyzed the light adaptation behavior of melanopsin activation and rod and cones signals. Second, we determined how melanopsin is integrated with postreceptoral signals by testing with cone luminance, chromatic blue-yellow, and chromatic red-green stimuli that were processed by magnocellular (MC), koniocellular (KC), and parvocellular (PC) pathways, respectively. A combined rod and melanopsin response was also measured. The relative phase of the postreceptoral signals was varied with respect to the melanopsin phase. The results showed that light adaptation behavior for all conditions was weaker than typical Weber adaptation. Melanopsin activation combined linearly with luminance, S-cone, and rod inputs, suggesting the locus of integration with MC and KC signals was retinal. The melanopsin contribution to phasic pupil responses was lower than luminance contributions, but much higher than S-cone contributions. Chromatic red-green modulation interacted with melanopsin activation nonlinearly as described by a "winner-takes-all" process, suggesting the integration with PC signals might be mediated by a postretinal site.
Sabine Born; Hannah M. Krüger; Eckart Zimmermann; Patrick Cavanagh Compression of space for low visibility probes Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 1–13, 2016. Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as perisaccadic compression of space (Ross, Morrone, & Burr, 1997). More recently, we have demonstrated that brief probes are attracted towards a visual reference when followed by a mask, even in the absence of saccades (Zimmermann, Born, Fink, & Cavanagh, 2014). Here, we ask whether spatial compression depends on the transient disruptions of the visual input stream caused by either a mask or a saccade. Both of these degrade the probe visibility but we show that low probe visibility alone causes compression in the absence of any disruption. In a first experiment, we varied the regions of the screen covered by a transient mask, including areas where no stimulus was presented and a condition without masking. In all conditions, we adjusted probe contrast to make the probe equally hard to detect. Compression effects were found in all conditions. To obtain compression without a mask, the probe had to be presented at much lower contrasts than with masking. Comparing mislocalizations at different probe detection rates across masking, saccades and low contrast conditions without mask or saccade, Experiment 2 confirmed this observation and showed a strong influence of probe contrast on compression. Finally, in Experiment 3, we found that compression decreased as probe duration increased both for masks and saccades although here we did find some evidence that factors other than simply visibility as we measured it contribute to compression. Our experiments suggest that compression reflects how the visual system localizes weak targets in the context of highly visible stimuli.
Shujie Deng; Jian Chang; Julie A. Kirkby; Jian J. Zhang Gaze–mouse coordinated movements and dependency with coordination demands in tracing Journal Article In: Behaviour & Information Technology, vol. 35, no. 8, pp. 665–679, 2016. Eye movements have been shown to lead hand movements in tracing tasks where subjects have to move their fingers along a predefined trace. The question remained, whether the leading relationship was similar when tracing with a pointing device, such as a mouse; more importantly, whether tasks that required more or less gaze–mouse coordination would introduce variation in this pattern of behaviour, in terms of both spatial and temporal leading of gaze position to mouse movement. A three-level gaze–mouse coordination demand paradigm was developed to address these questions. A substantial dataset of 1350 trials was collected and analysed. The linear correlation of gaze–mouse movements, the statistical distribution of the lead time, as well as the lead distance between gaze and mouse cursor positions were all considered, and we proposed a new method to quantify lead time in gaze–mouse coordination. The results supported and extended previous empirical findings that gaze often led mouse movements. We found that the gaze–mouse coordination demands of the task were positively correlated to the gaze lead, both spatially and temporally. However, the mouse movements were synchronised with or led gaze in the simple straight line condition, which demanded the least gaze–mouse coordination.
Klaske A. Glashouwer; Nienke C. Jonker; Karen Thomassen; Peter J. de Jong Take a look at the bright side: Effects of positive body exposure on selective visual attention in women with high body dissatisfaction Journal Article In: Behaviour Research and Therapy, vol. 83, pp. 19–25, 2016. Women with high body dissatisfaction look less at their 'beautiful' body parts than their 'ugly' body parts. This study tested the robustness of this selective viewing pattern and examined the influence of positive body exposure on body-dissatisfied women's attention for 'ugly' and 'beautiful' body parts. In women with high body dissatisfaction (N = 28) and women with low body dissatisfaction (N = 14) eye-tracking was used to assess visual attention towards pictures of their own and other women's bodies. Participants with high body dissatisfaction were randomly assigned to 5 weeks positive body exposure (n = 15) or a no-treatment condition (n = 13). Attention bias was assessed again after 5 weeks. Body-dissatisfied women looked longer at 'ugly' than 'beautiful' body parts of themselves and others, while participants with low body dissatisfaction attended equally long to own/others' 'beautiful' and 'ugly' body parts. Although positive body exposure was very effective in improving participants' body satisfaction, it did not systematically change participants' viewing pattern. The tendency to preferentially allocate attention towards one's 'ugly' body parts seems a robust phenomenon in women with body dissatisfaction. Yet, modifying this selective viewing pattern seems not a prerequisite for successfully improving body satisfaction via positive body exposure.
Esther S. Kim; Shannon F. Lemke Behavioural and eye-movement outcomes in response to text-based reading treatment for acquired alexia Journal Article In: Neuropsychological Rehabilitation, vol. 26, no. 1, pp. 60–86, 2016. Text-based reading treatments, such as Multiple Oral Rereading (MOR) and Oral Reading for Language in Aphasia (ORLA) have been used successfully to remediate reading impairments in individuals with acquired alexia, but the mechanisms underlying such improvements are not well understood. In this study, an individual with acquired alexia who demonstrated reliance on a sub-lexical reading strategy (i.e., presence of spelling regularity effect and phonologically plausible errors) underwent 12 weeks of text-based reading treatment combining MOR and ORLA procedures. Behavioural assessments of single-word and text reading, along with eye-tracking assessments were conducted pre-treatment, post-treatment and at 5 month follow-up. Improved reading fluency (rate, accuracy) was observed for both trained and untrained passages. Evidence from behavioural and eye-tracking assessment suggested text-based reading treatment facilitated use of a lexical-semantic reading strategy. Increased frequency and lexicality effects, as well as a shift in initial landing position towards the centre of the word (the "optimal viewing position") were observed at post-treatment and follow-up assessments. These results demonstrate the potential utility of using eye movements as a parameter of interest in addition to traditional behavioural outcomes when investigating response to reading treatment.
Christoph W. Korn; Dominik R. Bach A solid frame for the window on cognition: Modeling event-related pupil responses Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–16, 2016. Pupil size is often used to infer central processes, including attention, memory, and emotion. Recent research has spotlighted its relation to behavioral variables from decision-making models and to neural variables such as locus coeruleus activity and cortical oscillations. As yet, a unified and principled approach for analyzing pupil responses is lacking. Here we seek to establish a formal, quantitative forward model for pupil responses by describing them with linear time-invariant systems. Based on empirical data from human participants, we show that a combination of two linear time-invariant systems can parsimoniously explain approximately all variance evoked by illuminance changes. Notably, the model makes a counterintuitive prediction that pupil constriction dominates the responses to darkness flashes, as in previous empirical reports. This prediction was quantitatively confirmed for responses to light and darkness flashes in an independent group of participants. Crucially, illuminance- and nonilluminance-related inputs to the pupillary system are presumed to share a common final pathway, composed of muscles and nerve terminals. Hence, we can harness our illuminance-based model to estimate the temporal evolution of this neural input for an auditory-oddball task, an emotional-words task, and a visual-detection task. Onset and peak latencies of the estimated neural inputs furnish plausible hypotheses for the complexity of the underlying neural circuit. To conclude, this mathematical description of pupil responses serves as a prerequisite to refining their relation to behavioral and brain indices of cognitive processes.
Gerardo Salvato; Eva Z. Patai; Tayla Mccloud; Anna C. Nobre In: Cortex, vol. 82, pp. 206–216, 2016. Apolipoprotein (APOE) ɛ4 genotype has been identified as a risk factor for late-onset Alzheimer disease (AD). The memory system is mostly involved in AD, and memory deficits represent its key feature. A growing body of studies has focused on the earlier identification of cognitive dysfunctions in younger and older APOE ɛ4 carriers, but investigation on middle-aged individuals remains rare. Here we sought to investigate if the APOE ɛ4 genotype modulates declarative memory and its influences on perception in the middle of the life span. We tested 60 middle-aged individuals recruited according to their APOE allele variants (ɛ3/ɛ3, ɛ3/ɛ4, ɛ4/ɛ4) on a long-term memory-based orienting of attention task. Results showed that the APOE ɛ4 genotype impaired neither explicit memory nor memory-based orienting of spatial attention. Interestingly, however, we found that the possession of the ɛ4 allele broke the relationship between declarative long-term memory and memory-guided orienting of visuo-spatial attention, suggesting an earlier modulation exerted by pure genetic characteristics on cognition. These findings are discussed in light of possible accelerated brain ageing in middle-aged ɛ4-carriers, and earlier structural changes in the brain occurring at this stage of the lifespan.
Tao Yao; Stefan Treue; B. Suresh Krishna An attention-sensitive memory trace in macaque MT following saccadic eye movements Journal Article In: PLoS Biology, vol. 14, no. 2, pp. e1002390, 2016. We experience a visually stable world despite frequent retinal image displacements induced by eye, head, and body movements. The neural mechanisms underlying this remain unclear. One mechanism that may contribute is transsaccadic remapping, in which the responses of some neurons in various attentional, oculomotor, and visual brain areas appear to anticipate the consequences of saccades. The functional role of transsaccadic remapping is actively debated, and many of its key properties remain unknown. Here, recording from two monkeys trained to make a saccade while directing attention to one of two spatial locations, we show that neurons in the middle temporal area (MT), a key locus in the motion-processing pathway of humans and macaques, show a form of transsaccadic remapping called a memory trace. The memory trace in MT neurons is enhanced by the allocation of top-down spatial attention. Our data provide the first demonstration, to our knowledge, of the influence of top-down attention on the memory trace anywhere in the brain. We find evidence only for a small and transient effect of motion direction on the memory trace (and in only one of two monkeys), arguing against a role for MT in the theoretically critical yet empirically contentious phenomenon of spatiotopic feature-comparison and adaptation transfer across saccades. Our data support the hypothesis that transsaccadic remapping represents the shift of attentional pointers in a retinotopic map, so that relevant locations can be tracked and rapidly processed across saccades. Our results resolve important issues concerning the perisaccadic representation of visual stimuli in the dorsal stream and demonstrate a significant role for top-down attention in modulating this representation.
Monica S. Castelhano; Richelle L. Witherspoon How you use it matters: Object function guides attention during visual search in scenes Journal Article In: Psychological Science, vol. 27, no. 5, pp. 606–621, 2016. How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object's function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects' functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology.
Gernot Horstmann; Stefanie I. Becker; Daniel Ernst Perceptual salience captures the eyes on a surprise trial Journal Article In: Attention, Perception, & Psychophysics, vol. 78, no. 7, pp. 1889–1900, 2016. A number of characteristics of the visual system and of the visual stimulus are invoked to explain involuntary control of attention, including goals, novelty, and perceptual salience. The present experiment tested perceptual salience on a surprise trial, that is, on its unannounced first presentation following trials lacking any salient items, thus eliminating possible confounds by current goals. Moreover, the salient item's location was not singled out by a novel feature, thus eliminating a possible confound by novelty in directing attention. Eye tracking was used to measure involuntary attention. Results show a prioritization of the salient item. However, contrary to predictions of prominent neuro-computational and psychological salience models, prioritization was not fast-acting. Rather, the observers' gaze was attracted only as the second fixation on average or later (depending on condition) and with a latency of more than 500 ms on average. These results support the general proposition that salience can control attention. However, contrary to most salience models, the present results indicate that salience changes attentional priority only in novel environments.
Bianca Huurneman; F. Nienke Boonstra; Jeroen Goossens Perceptual learning in children with infantile nystagmus: Effects on visual performance Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 10, pp. 4216–4228, 2016. PURPOSE: To evaluate whether computerized training with a crowded or uncrowded letter-discrimination task reduces visual impairment (VI) in 6- to 11-year-old children with infantile nystagmus (IN) who suffer from increased foveal crowding, reduced visual acuity, and reduced stereopsis. METHODS: Thirty-six children with IN were included. Eighteen had idiopathic IN and 18 had oculocutaneous albinism. These children were divided in two training groups matched on age and diagnosis: a crowded training group (n = 18) and an uncrowded training group (n = 18). Training occurred two times per week during 5 weeks (3500 trials per training). Eleven age-matched children with normal vision were included to assess baseline differences in task performance and test-retest learning. Main outcome measures were task-specific performance, distance and near visual acuity (DVA and NVA), intensity and extent of (foveal) crowding at 5 m and 40 cm, and stereopsis. RESULTS: Training resulted in task-specific improvements. Both training groups also showed uncrowded and crowded DVA improvements (0.10 ± 0.02 and 0.11 ± 0.02 logMAR) and improved stereopsis (670 ± 249″). Crowded NVA improved only in the crowded training group (0.15 ± 0.02 logMAR), which was also the only group showing a reduction in near crowding intensity (0.08 ± 0.03 logMAR). Effects were not due to test-retest learning. CONCLUSIONS: Perceptual learning with or without distractors reduces the extent of crowding and improves visual acuity in children with IN. Training with distractors improves near vision more than training with single optotypes. Perceptual learning also transfers to DVA and NVA under uncrowded and crowded conditions and even stereopsis. Learning curves indicated that improvements may be larger after longer training.
Gerardo Salvato; Eva Z. Patai; Anna C. Nobre Preserved memory-based orienting of attention with impaired explicit memory in healthy ageing Journal Article In: Cortex, vol. 74, pp. 67–78, 2016. It is increasingly recognised that spatial contextual long-term memory (LTM) prepares neural activity for guiding visuo-spatial attention in a proactive manner. In the current study, we investigated whether the decline in explicit memory observed in healthy ageing would compromise this mechanism. We compared the behavioural performance of younger and older participants on learning new contextual memories, on orienting visual attention based on these learnt contextual associations, and on explicit recall of contextual memories. We found a striking dissociation between older versus younger participants in the relationship between the ability to retrieve contextual memories versus the ability to use these to guide attention to enhance performance on a target-detection task. Older participants showed significant deficits in the explicit retrieval task, but their behavioural benefits from memory-based orienting of attention were equivalent to those in young participants. Furthermore, memory-based orienting correlated significantly with explicit contextual LTM in younger adults but not in older adults. These results suggest that explicit memory deficits in ageing might not compromise initial perception and encoding of events. Importantly, the results also shed light on the mechanisms of memory-guided attention, suggesting that explicit contextual memories are not necessary.
Tom J. Barry; Bram Vervliet; Dirk Hermans Threat-related gaze fixation and its relationship with the speed and generalisability of extinction learning Journal Article In: Australian Journal of Psychology, vol. 68, no. 3, pp. 200–208, 2016. Objective: Attention plays an important role in the treatment of anxiety. Research has yet to elucidate how individual differences in attention or, particularly, gaze fixation can influence learning during treatment. The present investigation used an experimental analogue of the acquisition, treatment, and relapse of fear to examine this issue. Method: After pairing a stimulus (A) with an aversive electrocutaneous shock, such that participants come to fear this previously neutral stimulus, participants are repeatedly presented with a second stimulus (B) that possessed some common features with A as well as some of its own unique features. During presentations of B, fear was expected to reduce or extinguish. After this, participants were presented with C, which possessed some features of A that were not present in B as well as some features of B that were not present in A, and return of fear was assessed. Throughout this procedure, differences in gaze were measured so that this could be compared with indices for extinction and return of fear. Fear was measured in terms of skin conductance response. Results: Participants who spent more time looking at the unique features of B or who avoided the features in common with A showed slower extinction of their fear response. The same participants also showed reduced return of fear when C was presented. Conclusions: These findings are interpreted in terms of how attentional avoidance of threat-related stimuli might influence the inhibitory learning that takes place during extinction in experimental settings and exposure in clinical settings.
Bianca Huurneman; F. Nienke Boonstra; Jeroen Goossens Perceptual learning in children with infantile Nystagmus: Effects on 2D oculomotor behavior Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 10, pp. 4229–4238, 2016. @article{Huurneman2016,PURPOSE: To determine changes in oculomotor behavior after 10 sessions of perceptual learning on a letter discrimination task in children with infantile nystagmus (IN). METHODS: Children with IN (18 children with idiopathic IN and 18 with oculocutaneous albinism accompanied by IN) aged 6 to 11 years were divided into two training groups matched on diagnosis: an uncrowded training group (n = 18) and a crowded training group (n = 18). Target letters always appeared briefly (500 ms) at an eccentric location, forcing subjects to quickly redirect their gaze. Training occurred twice per week for 5 consecutive weeks (3500 trials total). Norm data and test-retest values were collected from children with normal vision (n = 11). Outcome measures were: nystagmus characteristics (amplitude, frequency, intensity, and the expanded nystagmus acuity function); fixation stability (the bivariate contour ellipse area and foveation time); and saccadic eye movements (latencies and accuracy) made during a simple saccade task and a crowded letter-identification task. RESULTS: After training, saccadic responses of children with IN improved on the saccade task (latencies decreased by 14 ± 4 ms and gains increased by 0.03 ± 0.01), but not on the crowded letter task. There were also no training-induced changes in nystagmus characteristics and fixation stability. Although children with normal vision had shorter latencies in the saccade task (47 ± 14 ms at baseline), test-retest changes in their saccade gains and latencies were almost equal to the training effects observed in children with IN. 
CONCLUSIONS: Our results suggest that the improvement in visual performance after perceptual learning in children with IN is primarily due to improved sensory processing rather than improved two-dimensional oculomotor behavior. |
Markos Kyritsis; Stephen R. Gulliver; Eva Feredoes Environmental factors and features that influence visual search in a 3D WIMP interface Journal Article In: International Journal of Human-Computer Studies, vol. 92-93, pp. 30–43, 2016. @article{Kyritsis2016,The challenge of moving past the classic Window Icons Menus Pointer (WIMP) interface, i.e. by turning it '3D', has resulted in much research and development. To evaluate the impact of 3D on the 'finding a target picture in a folder' task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids, semantic category distribution of targets versus non-targets, and the detailed measurement of lower-level stimuli features. Across two separate experiments, one large-sample web-based experiment, to understand associations, and one controlled lab environment, using eye tracking to understand user focus, we investigated how visual depth, use of visual aids, use of semantic categories, and lower-level stimuli features (i.e. contrast, colour and luminance) impact how successfully participants are able to search for, and detect, the target image. Moreover, in the lab-based experiment, we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load as a result of either an increasing number of items on the screen, or due to the inclusion of visual depth. Our findings showed that increasing the visible layers of depth, and inclusion of converging lines, did not impact target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrasts between a target and its neighbours correlated with an increase in detection errors. 
Finally, pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there are luminance contrasts between the target and its surrounding neighbours. Results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface. |
Jiří Lukavský; Filip Děchtěrenko Gaze position lagging behind scene content in multiple object tracking: Evidence from forward and backward presentations Journal Article In: Attention, Perception, & Psychophysics, vol. 78, no. 8, pp. 2456–2468, 2016. @article{Lukavsky2016,In everyday life, people often need to track moving objects. Recently, a topic of discussion has been whether people rely solely on the locations of tracked objects, or take their directions into account in multiple object tracking (MOT). In the current paper, we pose a related question: do people utilise extrapolation in their gaze behaviour, or, in more practical terms, should the mathematical models of gaze behaviour in an MOT task be based on objects' current, past or anticipated positions? We used a data-driven approach with no a priori assumption about the underlying gaze model. We repeatedly presented the same MOT trials forward and backward and collected gaze data. After reversing the data from the backward trials, we gradually tested different time adjustments to find the local maximum of similarity. In a series of four experiments, we showed that the gaze position lagged by approximately 110 ms behind the scene content. We observed the lag in all subjects (Experiment 1). We further experimented to determine whether tracking workload or predictability of movements affect the size of the lag. Low workload led only to a small non-significant shortening of the lag (Experiment 2). Impairing the predictability of objects' trajectories increased the lag (Experiments 3a and 3b). We tested our observations with predictions of a centroid model: we observed a better fit for a model based on the locations of objects 110 ms earlier. We conclude that mathematical models of gaze behaviour in MOT should account for the lags. |
Tao Deng; Kaifu Yang; Yongjie Li; Hongmei Yan Where does the driver look? Top-down-based saliency detection in a traffic driving environment Journal Article In: IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 2051–2062, 2016. @article{Deng2016a,A traffic driving environment is a complex and dynamically changing scene. When driving, drivers always allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and prior areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and could be useful in supporting autonomous driving, traffic sign detection, driving training, car collision warning, and other tasks. Recently, advances in visual attention models have provided substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, there exists no computational framework that can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, we analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, when viewing 100 traffic images. We found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. We proposed that the vanishing point of the road can be regarded as valuable top-down guidance in a traffic saliency detection model. Subsequently, we built a framework of a classic bottom-up and top-down combined traffic saliency detection model. The results show that our proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment. |
Catherine M. McMahon; Isabelle Boisvert; Peter Lissa; Louise Granger; Ronny Ibrahim; Chi Yhun Lo; Kelly Miles; Petra L. Graham Monitoring alpha oscillations and pupil dilation across a performance-intensity function Journal Article In: Frontiers in Psychology, vol. 7, pp. 745, 2016. @article{McMahon2016,Listening to degraded speech can be challenging and requires a continuous investment of cognitive resources, which is more challenging for those with hearing loss. However, while alpha power (8-12 Hz) and pupil dilation have been suggested as objective correlates of listening effort, it is not clear whether they assess the same cognitive processes involved, or other sensory and/or neurophysiological mechanisms that are associated with the task. Therefore, the aim of this study is to compare alpha power and pupil dilation during a sentence recognition task in 15 randomized levels of noise (-7dB to +7dB SNR) using highly intelligible (16 channel vocoded) and moderately intelligible (6 channel vocoded) speech. Twenty young normal-hearing adults participated in the study; however, due to extraneous noise, data from 16 (10 females, 6 males; aged 19-28 years) were used in the EEG analysis and from 10 in the pupil analysis. Behavioral testing of perceived effort and speech performance was assessed at 3 fixed SNRs per participant and was comparable to sentence recognition performance assessed in the physiological test session for both 16- and 6-channel vocoded sentences. Results showed a significant interaction between channel vocoding for both the alpha power and the pupil size changes. While both measures significantly decreased with more positive SNRs for the 16-channel vocoding, this was not observed with the 6-channel vocoding. The results of this study suggest that these measures may encode different processes involved in speech perception, which show similar trends for highly intelligible speech, but diverge for more spectrally degraded speech. 
The results to date suggest that these objective correlates of listening effort, and the cognitive processes involved in listening effort, are not yet sufficiently well understood to be used within a clinical setting. |
Alex L. White; Martin Rolfs Oculomotor inhibition covaries with conscious detection Journal Article In: Journal of Neurophysiology, vol. 116, pp. 1507–1521, 2016. @article{White2016,Saccadic eye movements occur frequently even during attempted fixation, but they halt momentarily when a new stimulus appears. Here, we demonstrate that this rapid, involuntary “oculomotor freezing” reflex is yoked to fluctuations in explicit visual perception. Human observers reported the presence or absence of a brief visual stimulus while we recorded microsaccades, small spontaneous eye movements. We found that microsaccades were reflexively inhibited if and only if the observer reported seeing the stimulus, even when none was present. By applying a novel Bayesian classification technique to patterns of microsaccades on individual trials, we were able to decode the reported state of perception more accurately than the state of the stimulus (present vs. absent). Moreover, explicit perceptual sensitivity and the oculomotor reflex were both susceptible to orientation-specific adaptation. The adaptation effects suggest that the freezing reflex is mediated by signals processed in the visual cortex before reaching oculomotor control centers rather than relying on a direct subcortical route, as some previous research has suggested. We conclude that the reflexive inhibition of microsaccades immediately and inadvertently reveals when the observer becomes aware of a change in the environment. By providing an objective measure of conscious perceptual detection that does not require explicit reports, this finding opens doors to clinical applications and further investigations of perceptual awareness. |
Gernot Horstmann; Arvid Herwig Novelty biases attention and gaze in a surprise trial Journal Article In: Attention, Perception, & Psychophysics, vol. 78, no. 1, pp. 69–77, 2016. @article{Horstmann2016,While the classical distinction between task-driven and stimulus-driven biasing of attention appears to be a dichotomy at first sight, there seems to be a third category that depends on the contrast or discrepancy between active representations and the upcoming stimulus, and may be termed novelty, surprise, or prediction failure. For previous demonstrations of the discrepancy-attention link, stimulus-driven components (saliency) may have played a decisive role. The present study was conducted to evaluate the discrepancy-attention link in a display where novel and familiar stimuli are equated for saliency. Eye tracking was used to determine fixations on novel and familiar stimuli as a proxy for attention. Results show a prioritization of attention by the novel color, and a de-prioritization of the familiar color, which is clearly present at the second fixation, and spans over the next couple of fixations. Saliency, on the other hand, did not prioritize items in the display. The results thus reinforce the notion that novelty captures and binds attention. |
Simona Buetti; Alejandro Lleras Distractibility is a function of engagement, not task difficulty: Evidence from a new oculomotor capture paradigm Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 10, pp. 1382–1405, 2016. @article{Buetti2016,It has been shown that when humans require a brief moment of concentration or mental effort, they tend to avert their gaze away from the attended location (or even blink). Similarly, participants tend to miss unexpected events when they are highly focused on a task. We present an engagement theory of distractibility that is meant to capture the relationship between participants' engagement in a task and reduction in sensitivity to new sensory events in a broad range of situations. In a series of experiments, we asked participants to perform different cognitive tasks of varying degrees of difficulty while we measured spontaneous oculomotor capture by new images that were completely unrelated to the participants' task. The images appeared while participants were cognitively engaged in the task. Our results showed that increased cognitive engagement produced decreased sensitivity to visual events. We propose that individual differences in intrinsic motivation play a large role in determining sensitivity to task unrelated events. In addition, our results also indicate that changes in task difficulty on a trial-to-trial basis do not generate trial-by-trial differences in oculomotor capture. Importantly, we believe our framework provides us with a promising way of extending laboratory findings to many real world situations. |
Jesse A. Harris Processing let alone coordination in silent reading Journal Article In: Lingua, vol. 169, pp. 70–94, 2016. @article{Harris2016,Processing research on coordination indicates that simpler conjuncts are preferred over more complex ones, and that positing ellipsis structure in the second conjunct is taxing to process when a simpler non-ellipsis structure exists. The present study investigates let alone coordination, which is argued to require clausal ellipsis in the second conjunct. It is proposed that the processor always projects a clausal structure for the second conjunct for the ellipsis, obviating a general preference for a less complex conjunct. Experiment 1 consists of several sentence-completion questionnaires testing whether a DP or VP conjunct is preferred in let alone structures as in John doesn't like Mary, let alone (Sue | love her). The results found a bias towards VP remnants that was weakly affected by syntactic placement of the focus particle even, as well as by prior context. Experiment 2 examined the effect of remnant type on eye movements during silent reading, revealing only distinct processing patterns, rather than major processing penalties, for different remnant types, and a general facilitation when even was present to signal upcoming scalar contrast. |
Gernot Horstmann; Arvid Herwig; Stefanie I. Becker Distractor dwelling, skipping, and revisiting determine target absent performance in difficult visual search Journal Article In: Frontiers in Psychology, vol. 7, pp. 1152, 2016. @article{Horstmann2016b,Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of total search time differences. |
Sanjay G. Manohar; Masud Husain Human ventromedial prefrontal lesions alter incentivisation by reward Journal Article In: Cortex, vol. 76, pp. 104–120, 2016. @article{Manohar2016,Although medial frontal brain regions are implicated in valuation of rewards, evidence from focal lesions to these areas is scant, with many conflicting results regarding motivation and affect, and no human studies specifically examining incentivisation by reward. Here, 19 patients with isolated, focal damage in ventral and medial prefrontal cortex were selected from a database of 453 individuals with subarachnoid haemorrhage. Using a speeded saccadic task based on the oculomotor capture paradigm, we manipulated the maximum reward available on each trial using an auditory incentive cue. Modulation of behaviour by motivation permitted quantification of reward sensitivity. At the group level, medial frontal damage was overall associated with significantly reduced effects of reward on invigorating saccadic velocity and autonomic (pupil) responses compared to age-matched, healthy controls. Crucially, however, some individuals instead showed abnormally strong incentivisation effects for vigour. Increased sensitivity to rewards within the lesion group correlated with damage in subgenual ventromedial prefrontal cortex (vmPFC) areas, which have recently become the target for deep brain stimulation (DBS) in depression. Lesion correlations with clinical apathy suggested that the apathy associated with prefrontal damage is in fact reduced by damage at those coordinates. Reduced reward sensitivity showed a trend to correlate with damage near nucleus accumbens. Lesions did not, on the other hand, influence reward sensitivity of cognitive control, as measured by distractibility. Thus, although medial frontal lesions may generally reduce reward sensitivity, damage to key subregions paradoxically protects from this effect. |
Nadia Alahyane; Christelle Lemoine-Lardennois; Coline Tailhefer; Thérèse Collins; Jacqueline Fagard; Karine Doré-Mazars Development and learning of saccadic eye movements in 7- to 42-month-old children Journal Article In: Journal of Vision, vol. 16, no. 1, pp. 1–12, 2016. @article{Alahyane2016,From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 7- to 42-month-old children while they oriented toward visual animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: Saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Although saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance. |
Jessica Taubert; Valerie Goffaux; Goedele Van Belle; Wim Vanduffel; Rufin Vogels The impact of orientation filtering on face-selective neurons in monkey inferior temporal cortex Journal Article In: Scientific Reports, vol. 6, pp. 21189, 2016. @article{Taubert2016,Faces convey complex social signals to primates. These signals are tolerant of some image transformations (e.g. changes in size) but not others (e.g. picture-plane rotation). By filtering face stimuli for orientation content, studies of human behavior and brain responses have shown that face processing is tuned to selective orientation ranges. In the present study, for the first time, we recorded the responses of face-selective neurons in monkey inferior temporal (IT) cortex to intact and scrambled faces that were filtered to selectively preserve horizontal or vertical information. Guided by functional maps, we recorded neurons in the lateral middle patch (ML), the lateral anterior patch (AL), and an additional region located outside of the functionally defined face-patches (CONTROL). We found that neurons in ML preferred horizontal-passed faces over their vertical-passed counterparts. Neurons in AL, however, had a preference for vertical-passed faces, while neurons in CONTROL had no systematic preference. Importantly, orientation filtering did not modulate the firing rate of neurons to phase-scrambled face stimuli in any recording region. Together these results suggest that face-selective neurons found in the face-selective patches are differentially tuned to orientation content, with horizontal tuning in area ML and vertical tuning in area AL. |
Margarita Vinnikov; Robert S. Allison; Suzette Fernandes Impact of depth of field simulation on visual fatigue: Who are impacted? and how? Journal Article In: International Journal of Human-Computer Studies, vol. 91, pp. 37–51, 2016. @article{Vinnikov2016,While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because the stereoscopic content on displays viewed at a short distance has been associated with different symptoms such as eye-strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms exhibited from viewing in stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates depth of field (DOF) that is associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short duration discomfort and fatigue due to prolonged viewing was also examined. Results indicated that age may be a determining factor for a user's experience of DOF. There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. 
Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all. |
Carola Salvi; Emanuela Bricolo; John Kounios; Edward Bowden; Mark Beeman Insight solutions are correct more often than analytic solutions Journal Article In: Thinking and Reasoning, vol. 22, no. 4, pp. 443–460, 2016. @article{Salvi2016,How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants' solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants' self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants' analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. |
Steven G. Luke; Kiel Christianson Limits on lexical prediction during reading Journal Article In: Cognitive Psychology, vol. 88, pp. 22–60, 2016. @article{Luke2016,Efficient language processing may involve generating expectations about upcoming input. To investigate the extent to which prediction might facilitate reading, a large-scale survey provided cloze scores for all 2689 words in 55 different text passages. Highly predictable words were quite rare (5% of content words), and most words had a more-expected competitor. An eye-tracking study showed sensitivity to cloze probability but no mis-prediction cost. Instead, the presence of a more-expected competitor was found to be facilitative in several measures. Further, semantic and morphosyntactic information was highly predictable even when word identity was not, and this information facilitated reading above and beyond the predictability of the full word form. The results are consistent with graded prediction but inconsistent with full lexical prediction. Implications for theories of prediction in language comprehension are discussed. |
Alexandra S. Mueller; Esther G. González; Chris McNorgan; Martin J. Steinbach; Brian Timney Effects of vertical direction and aperture size on the perception of visual acceleration Journal Article In: Perception, vol. 45, no. 6, pp. 670–683, 2016. @article{Mueller2016a,It is not well understood whether the distance over which moving stimuli are visible affects our sensitivity to the presence of acceleration or our ability to track such stimuli. It is also uncertain whether our experience with gravity creates anisotropies in how we detect vertical acceleration and deceleration. To address these questions, we varied the vertical extent of the aperture through which we presented vertically accelerating and decelerating random dot arrays. We hypothesized that observers would better detect and pursue accelerating and decelerating stimuli that extend over larger than smaller distances. In Experiment 1, we tested the effects of vertical direction and aperture size on acceleration and deceleration detection accuracy. Results indicated that detection is better for downward motion and for large apertures, but there is no difference between vertical acceleration and deceleration detection. A control experiment revealed that our manipulation of vertical aperture size affects the ability to track vertical motion. Smooth pursuit is better (i.e., with higher peak velocities) for large apertures than for small apertures. Our findings suggest that the ability to detect vertical acceleration and deceleration varies as a function of the direction and vertical extent over which an observer can track the moving stimulus. |
Jillian M. Schuh; Inge Marie Eigsti; Daniel Mirman Discourse comprehension in autism spectrum disorder: Effects of working memory load and common ground Journal Article In: Autism Research, vol. 9, no. 12, pp. 1340–1352, 2016. @article{sem16,Pragmatic language impairments are nearly universal in autism spectrum disorders (ASD). Discourse requires that we monitor information that is shared or mutually known, called "common ground." While many studies have examined the role of Theory of Mind (ToM) in such impairments, few have examined working memory (WM). Common ground impairments in ASD could reflect limitations in both WM and ToM. This study explored common ground use in youth ages 8-17 years with high-functioning ASD (n = 13) and typical development (n = 22); groups did not differ on age, gender, IQ, or standardized language. We tracked participants' eye movements while they performed a discourse task in which some information was known only to the participant (e.g., was privileged; a manipulation of ToM). In addition, the amount of privileged information varied (a manipulation of WM). All participants were slower to fixate the target when considering privileged information, and this effect was greatest during high WM load trials. Further, the ASD group was more likely to fixate competing (non-target) shapes. Predictors of fixation patterns included ASD symptomatology, language ability, ToM, and WM. Groups did not differ in ToM. Individuals with better WM fixated the target more rapidly, suggesting an association between WM capacity and efficient discourse. In addition to ToM knowledge, WM capacity constrains common ground representation and impacts pragmatic skills in ASD. Social impairments in ASD are thus associated with WM capacity, such that deficits in domain-general, nonsocial processes such as WM exert an influence during complex social interactions. |
Viola S. Störmer; George A. Alvarez Attention alters perceived attractiveness Journal Article In: Psychological Science, vol. 27, no. 4, pp. 563–571, 2016. @article{Stoermer2016,Can attention alter the impression of a face? Previous studies showed that attention modulates the appearance of lower-level visual features. For instance, attention can make a simple stimulus appear to have higher contrast than it actually does. We tested whether attention can also alter the perception of a higher-order property—namely, facial attractiveness. We asked participants to judge the relative attractiveness of two faces after summoning their attention to one of the faces using a briefly presented visual cue. Across trials, participants judged the attended face to be more attractive than the same face when it was unattended. This effect was not due to decision or response biases, but rather was due to changes in perceptual processing of the faces. These results show that attention alters perceived facial attractiveness, and broadly demonstrate that attention can influence higher-level perception and may affect people's initial impressions of one another. |
Ulrike Zimmer; Margit Höfler; Karl Koschutnig; Anja Ischebeck Neuronal interactions in areas of spatial attention reflect avoidance of disgust, but orienting to danger Journal Article In: NeuroImage, vol. 134, pp. 94–104, 2016. @article{Zimmer2016,For survival, it is necessary to attend quickly towards dangerous objects, but to turn away from something that is disgusting. We tested whether fear and disgust sounds direct spatial attention differently. Using fMRI, a sound cue (disgust, fear or neutral) was presented to the left or right ear. The cue was followed by a visual target (a small arrow) which was located on the same (valid) or opposite (invalid) side as the cue. Participants were required to decide whether the arrow pointed up- or downwards while ignoring the sound cue. Behaviorally, responses were faster for invalid compared to valid targets when cued by disgust, whereas the opposite pattern was observed for targets after fearful and neutral sound cues. During target presentation, activity in the visual cortex and IPL increased for targets invalidly cued with disgust, but for targets validly cued with fear, which indicated a general modulation of activation due to attention. For the TPJ, an interaction in the opposite direction was observed, consistent with its role in detecting targets at unattended positions and in relocating attention. As a whole, our results indicate that a disgusting sound directs spatial attention away from its location, in contrast to fearful and neutral sounds. |
Ilse C. Van Dromme; Elsie Premereur; Bram-Ernst Verhoef; Wim Vanduffel Posterior parietal cortex drives inferotemporal activations during three-dimensional object vision Journal Article In: PLoS Biology, vol. 14, no. 4, pp. e1002445, 2016. @article{Dromme2016,The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. |
Heather Sheridan; Erik D. Reichle; Eyal M. Reingold Why does removing inter-word spaces produce reading deficits? The role of parafoveal processing Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 5, pp. 1543–1552, 2016. @article{srr16,To examine the role of inter-word spaces during reading, we used a gaze-contingent boundary paradigm to manipulate parafoveal preview (i.e., valid vs. invalid preview) in a normal text condition that contained spaces (e.g., "John decided to sell the table") and in an unsegmented text condition that contained random numbers instead of spaces (e.g.,"John4decided8to5sell9the7table"). Preview effects on mean first-fixation durations were larger for normal than unsegmented text conditions, and survival analyses revealed a delay in the onset of both preview validity and word-frequency effects on first-fixation durations for unsegmented relative to normal text. Taken together with simulations that were conducted using the E-Z Reader model, the present findings indicated that unsegmented text deficits reflect disruptions to both parafoveal processing and lexical processing. We discuss the implications of our results for models of eye-movement control. |
Oleg Solopchuk; Andrea Alamia; Etienne Olivier; Alexandre Zénon Chunking improves symbolic sequence processing and relies on working memory gating mechanisms Journal Article In: Learning and Memory, vol. 23, no. 3, pp. 108–112, 2016. @article{saoz16,Chunking, namely the grouping of sequence elements in clusters, is ubiquitous during sequence processing, but its impact on performance remains debated. Here, we found that participants who adopted a consistent chunking strategy during symbolic sequence learning showed a greater improvement of their performance and a larger decrease in cognitive workload over time. Stronger reliance on chunking was also associated with higher scores in a WM updating task, suggesting the contribution of WM gating mechanisms to sequence chunking. Altogether, these results indicate that chunking is a cost-saving strategy that enhances effectiveness of symbolic sequence learning. |
Joseph E. Barton; Valentina Graci; Charlene Hafer-Macko; John D. Sorkin; Richard F. Macko Dynamic balanced reach: A temporal and spectral analysis across increasing performance demands Journal Article In: Journal of Biomechanical Engineering, vol. 138, pp. 1–13, 2016. @article{Barton2016a,Standing balanced reach is a fundamental task involved in many activities of daily living that has not been well analyzed quantitatively to assess and characterize the multisegmental nature of the body's movements. We developed a dynamic balanced reach test (BRT) to analyze performance in this activity, in which a standing subject is required to maintain balance while reaching and pointing to a target disk moving across a large projection screen according to a sum-of-sines function. This tracking and balance task is made progressively more difficult by increasing the disk's overall excursion amplitude. Using kinematic and ground reaction force data from 32 young healthy subjects, we investigated how the motions of the tracking finger and whole-body center of mass (CoM) varied in response to the motion of the disk across five overall disk excursion amplitudes. Group representative performance statistics for the cohort revealed a monotonically increasing root mean squared (RMS) tracking error (RMSE) and RMS deviation (RMSD) between whole-body CoM (projected onto the ground plane) and the center of the base of support (BoS) with increasing amplitude (p<0.03). Tracking and CoM response delays remained constant, however, at 0.5 s and 1.0 s, respectively. We also performed detailed spectral analyses of group-representative response data for each of the five overall excursion amplitudes. We derived empirical and analytical transfer functions between the motion of the disk and that of the tracking finger and CoM, computed tracking and CoM responses to a step input, and RMSE and RMSD as functions of disk frequency.
We found that for frequencies less than 1.0 Hz, RMSE generally decreased, while RMSE normalized to disk motion amplitude generally increased. RMSD, on the other hand, decreased monotonically. These findings quantitatively characterize the amplitude- and frequency-dependent nature of young healthy tracking and balance in this task. The BRT is not subject to floor or ceiling effects, overcoming an important deficiency associated with most research and clinical instruments used to assess balance. This makes a comprehensive quantification of young healthy balance performance possible. The results of such analyses could be used in work space design and in fall-prevention instructional materials, for both the home and work place. Young healthy performance represents “exemplar” performance and can also be used as a reference against which to compare the performance of aging and other clinical populations at risk for falling. |
Arielle Borovsky; Erica M. Ellis; Julia L. Evans; Jeffrey L. Elman Lexical leverage: Category knowledge boosts real-time novel word recognition in 2-year-olds Journal Article In: Developmental Science, vol. 19, no. 6, pp. 918–932, 2016. @article{Borovsky2016,Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked if their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. |
Mark Mills; Olivia Wieda; Scott F. Stoltenberg; Michael D. Dodd Emotion moderates the association between HTR2A (rs6313) genotype and antisaccade latency Journal Article In: Experimental Brain Research, vol. 234, no. 9, pp. 2653–2665, 2016. @article{Mills2016,The serotonin system is heavily involved in cognitive and emotional control processes. Previous work has typically investigated this system's role in control processes separately for cognitive and emotional domains, yet it has become clear the two are linked. The present study, therefore, examined whether variation in a serotonin receptor gene (HTR2A, rs6313) moderated effects of emotion on inhibitory control. An emotional antisaccade task was used in which participants looked toward (prosaccade) or away (antisaccade) from a target presented to the left or right of a happy, angry, or neutral face. Overall, antisaccade latencies were slower for rs6313 C allele homozygotes than T allele carriers, with no effect of genotype on prosaccade latencies. Thus, C allele homozygotes showed relatively weak inhibitory control but intact reflexive control. Importantly, the emotional stimulus was either present during target presentation (overlap trials) or absent (gap trials). The gap effect (slowed latency in overlap versus gap trials) in antisaccade trials was larger with angry versus neutral faces in C allele homozygotes. This impairing effect of negative valence on inhibitory control was larger in C allele homozygotes than T allele carriers, suggesting that angry faces disrupted/competed with the control processes needed to generate an antisaccade to a greater degree in these individuals. The genotype difference in the negative valence effect on antisaccade latency was attenuated when trial N-1 was an antisaccade, indicating top-down regulation of emotional influence. This effect was reduced in C/C versus T/_ individuals, suggesting a weaker capacity to downregulate emotional processing of task-irrelevant stimuli. |
Andrea Phillipou; Larry Allen Abel; David Jonathan Castle; Matthew Edward Hughes; Richard Grant Nibbs; Caroline T. Gurvich; Susan Lee Rossell Resting state functional connectivity in anorexia nervosa Journal Article In: Psychiatry Research: Neuroimaging, vol. 251, pp. 45–52, 2016. @article{Phillipou2016,Anorexia Nervosa (AN) is a serious psychiatric illness characterised by a disturbance in body image, a fear of weight gain and significantly low body weight. The factors involved in the genesis and maintenance of AN are unclear, though the potential neurobiological underpinnings of the condition are of increasing interest. Through the investigation of functional connectivity of the brain at rest, information relating to neuronal communication and integration of information that may relate to behaviours and cognitive symptoms can be explored. The aim of this study was to investigate functional connectivity of the default mode network, and sensorimotor and visual networks in AN. 26 females with AN and 27 healthy control participants matched for age, gender and premorbid intelligence underwent a resting state functional magnetic resonance imaging scan. Default mode network functional connectivity did not differ between groups. AN participants displayed reduced functional connectivity between the sensorimotor and visual networks, in comparison to healthy controls. This finding is discussed in terms of differences in visuospatial processing in AN and the distortion of body image experienced by these individuals. Overall, the findings suggest that sensorimotor and visual network connectivity may be related to visuospatial processing in AN, though, further research is required. |
Joseph E. Barton; Anindo Roy; John D. Sorkin; Mark W. Rogers; Richard F. Macko An engineering model of human balance control—Part I: Biomechanical model Journal Article In: Journal of Biomechanical Engineering, vol. 138, no. 1, pp. 1–11, 2016. @article{Barton2016,We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task, with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS) and an older high fall risk (HFR) subject before participating in a balance training intervention; and in the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system; and system adaptations to changes in these elements' performance and capabilities. |
Arielle Borovsky; Erica M. Ellis; Julia L. Evans; Jeffrey L. Elman Semantic structure in vocabulary knowledge interacts with lexical and sentence processing in infancy Journal Article In: Child Development, vol. 87, no. 6, pp. 1893–1908, 2016. @article{Borovsky2016a,Although the size of a child's vocabulary associates with language-processing skills, little is understood regarding how this relation emerges. This investigation asks whether and how the structure of vocabulary knowledge affects language processing in English-learning 24-month-old children (N = 32; 18 F, 14 M). Parental vocabulary report was used to calculate semantic density in several early-acquired semantic categories. Performance on two language-processing tasks (lexical recognition and sentence processing) was compared as a function of semantic density. In both tasks, real-time comprehension was facilitated for higher density items, whereas lower density items experienced more interference. The findings indicate that language-processing skills develop heterogeneously and are influenced by the semantic network surrounding a known word. |
Ran Manor; Liran Mishali; Amir B. Geva Multimodal neural network for rapid serial visual presentation brain computer interface Journal Article In: Frontiers in Computational Neuroscience, vol. 10, pp. 130, 2016. @article{Manor2016,Brain computer interfaces allow users to perform various tasks using only the electrical activity of the brain. BCI applications often present the user a set of stimuli and record the corresponding electrical response. The BCI algorithm will then have to decode the acquired brain response and perform the desired task. In rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. In this work, we suggest a multimodal neural network for RSVP tasks. The network operates on the brain response and on the initiating stimulus simultaneously, providing more information for the BCI application. We present two variants of the multimodal network, a supervised model, for the case when the targets are known in advance, and a semi-supervised model for when the targets are unknown. We test the neural networks with an RSVP experiment on satellite imagery carried out with two subjects. The multimodal networks achieve a significant performance improvement in classification metrics. We visualize what the networks have learned and discuss the advantages of using neural network models for BCI applications. |
Tanja C. Roembke; Bob McMurray Observational word learning: Beyond propose-but-verify and associative bean counting Journal Article In: Journal of Memory and Language, vol. 87, pp. 105–127, 2016. @article{rm16,Learning new words is difficult. In any naming situation, there are multiple possible interpretations of a novel word. Recent approaches suggest that learners may solve this problem by tracking co-occurrence statistics between words and referents across multiple naming situations (e.g. Yu & Smith, 2007), overcoming the ambiguity in any one situation. Yet, there remains debate around the underlying mechanisms. We conducted two experiments in which learners acquired eight word-object mappings using cross-situational statistics while eye-movements were tracked. These addressed four unresolved questions regarding the learning mechanism. First, eye-movements during learning showed evidence that listeners maintain multiple hypotheses for a given word and bring them all to bear in the moment of naming. Second, trial-by-trial analyses of accuracy suggested that listeners accumulate continuous statistics about word-object mappings, over and above prior hypotheses they have about a word. Third, consistent, probabilistic context can impede learning, as false associations between words and highly co-occurring referents are formed. Finally, a number of factors not previously considered in prior analysis impact observational word learning: knowledge of the foils, spatial consistency of the target object, and the number of trials between presentations of the same word. This evidence suggests that observational word learning may derive from a combination of gradual statistical or associative learning mechanisms and more rapid real-time processes such as competition, mutual exclusivity and even inference or hypothesis testing. |
Tom Bullock; James C. Elliott; John T. Serences; Barry Giesbrecht Acute exercise modulates feature-selective responses in human cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 605–618, 2016. @article{Bullock2016,An organism's current behavioral state influences ongoing brain activity. Nonhuman mammalian and invertebrate brains exhibit large increases in the gain of feature-selective neural responses in sensory cortex during locomotion, suggesting that the visual system becomes more sensitive when actively exploring the environment. This raises the possibility that human vision is also more sensitive during active movement. To investigate this possibility, we used an inverted encoding model technique to estimate feature-selective neural response profiles from EEG data acquired from participants performing an orientation discrimination task. Participants (n = 18) fixated at the center of a flickering (15 Hz) circular grating presented at one of nine different orientations and monitored for a brief shift in orientation that occurred on every trial. Participants completed the task while seated on a stationary exercise bike at rest and during low- and high-intensity cycling. We found evidence for inverted-U effects, such that the peak of the reconstructed feature-selective tuning profiles was highest during low-intensity exercise compared with those estimated during rest and high-intensity exercise. When modeled, these effects were driven by changes in the gain of the tuning curve and in the profile bandwidth during low-intensity exercise relative to rest. Thus, despite profound differences in visual pathways across species, these data show that sensitivity in human visual cortex is also enhanced during locomotive behavior. Our results reveal the nature of exercise-induced gain on feature-selective coding in human sensory cortex and provide valuable evidence linking the neural mechanisms of behavior state across species. |
Roberto R. Heredia; Anna B. Cieślicka Metaphoric reference: An eye movement analysis of Spanish-English and English-Spanish bilingual readers Journal Article In: Frontiers in Psychology, vol. 7, pp. 439, 2016. @article{Heredia2016,This study examines the processing of metaphoric reference by bilingual speakers. English dominant, Spanish dominant, and balanced bilinguals read passages in English biasing either a figurative (e.g., describing a weak and soft fighter that always lost and everyone hated) or a literal (e.g., describing a donut and bakery shop that made delicious pastries) meaning of a critical metaphoric referential description (e.g., 'creampuff'). We recorded the eye movements (first fixation, gaze duration, go-past duration, and total reading time) for the critical region, which was a metaphoric referential description in each passage. The results revealed that literal vs. figurative meaning activation was modulated by language dominance, where Spanish dominant bilinguals were more likely to access the literal meaning, and English dominant and balanced bilinguals had access to both the literal and figurative meanings of the metaphoric referential description. Overall, there was a general tendency for the literal interpretation to be more active, as revealed by shorter reading times for the metaphoric reference used literally, in comparison to when it was used figuratively. Results are interpreted in terms of the Graded Salience Hypothesis (Giora, 2002, 2003) and the Literal Salience Model (Cieślicka, 2006, 2015). |
Steven G. Luke; John M. Henderson The influence of content meaningfulness on eye movements across tasks: Evidence from scene viewing and reading Journal Article In: Frontiers in Psychology, vol. 7, pp. 257, 2016. @article{Luke2016a,The present study investigated the influence of content meaningfulness on eye-movement control in reading and scene viewing. Texts and scenes were manipulated to make them uninterpretable, and then eye-movements in reading and scene-viewing were compared to those in pseudo-reading and pseudo-scene viewing. Fixation durations and saccade amplitudes were greater for pseudo-stimuli. The effect of the removal of meaning was seen exclusively in the tail of the fixation duration distribution in both tasks, and the size of this effect was the same across tasks. These findings suggest that eye movements are controlled by a common mechanism in reading and scene viewing. They also indicate that not all eye movements are responsive to the meaningfulness of stimulus content. Implications for models of eye movement control are discussed. |
Stefanie Mueller; Katja Fiehler Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement Journal Article In: Neuropsychologia, vol. 87, pp. 63–73, 2016. @article{Mueller2016,Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed 2 conditions in which the target hand remained either stationary at the target location (stationary condition) or was actively moved to the target location, received a touch and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition while body- and gaze-centered coding contributed equally strong in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets. |
Caleb E. Strait; Brianna J. Sleezer; Tommy C. Blanchard; Habiba Azab; Meghan D. Castagno; Benjamin Y. Hayden Neuronal selectivity for spatial positions of offers and choices in five reward regions Journal Article In: Journal of Neurophysiology, vol. 115, no. 3, pp. 1098–1111, 2016. @article{Strait2016,When we evaluate an option, how is the neural representation of its value linked to information that identifies it, such as its position in space? We hypothesized that value information and identity cues are not bound together at a particular point but are represented together at the single unit level throughout the entirety of the choice process. We examined neuronal responses in two-option gambling tasks with lateralized and asynchronous presentation of offers in five reward regions: orbitofrontal cortex (OFC, area 13), ventromedial prefrontal cortex (vmPFC, area 14), ventral striatum (VS), dorsal anterior cingulate cortex (dACC), and subgenual anterior cingulate cortex (sgACC, area 25). Neuronal responses in all areas are sensitive to the positions of both offers and of choices. This selectivity is strongest in reward-sensitive neurons, indicating that it is not a property of a specialized subpopulation of cells. We did not find consistent contralateral or any other organization to these responses, indicating that they may be difficult to detect with aggregate measures like neuroimaging or studies of lesion effects. These results suggest that value coding is wed to factors that identify the object throughout the reward system and suggest a possible solution to the binding problem raised by abstract value encoding schemes. |
Rui Wang; Jie Wang; Jun-Yun Zhang; Xin-Yu Xie; Yu-Xiang Yang; Shu-Han Luo; Cong Yu; Wu Li Perceptual learning at a conceptual level Journal Article In: Journal of Neuroscience, vol. 36, no. 7, pp. 2238–2246, 2016. @article{Wang2016a,Humans can learn to abstract and conceptualize the shared visual features defining an object category in object learning. Therefore, learning is generalizable to transformations of familiar objects and even to new objects that differ in other physical properties. In contrast, visual perceptual learning (VPL), improvement in discriminating fine differences of a basic visual feature through training, is commonly regarded as specific and low-level learning because the improvement often disappears when the trained stimulus is simply relocated or rotated in the visual field. Such location and orientation specificity is taken as evidence for neural plasticity in primary visual cortex (V1) or improved readout of V1 signals. However, new training methods have shown complete VPL transfer across stimulus locations and orientations, suggesting the involvement of high-level cognitive processes. Here we report that VPL bears similar properties of object learning. Specifically, we found that orientation discrimination learning is completely transferrable between luminance gratings initially encoded in V1 and bilaterally symmetric dot patterns encoded in higher visual cortex. Similarly, motion direction discrimination learning is transferable between first-and second-order motion signals. These results suggest that VPL can take place at a conceptual level and generalize to stimuli with different physical properties. Our findings thus reconcile perceptual and object learning into a unified framework. |
Lauren R. Godier; Jessica C. Scaife; Sven Braeutigam; Rebecca J. Park Enhanced early neuronal processing of food pictures in Anorexia Nervosa: A magnetoencephalography study Journal Article In: Psychiatry Journal, vol. 2016, pp. 1–13, 2016. @article{Godier2016,Neuroimaging studies in Anorexia Nervosa (AN) have shown increased activation in reward and cognitive control regions in response to food, and a behavioral attentional bias (AB) towards food stimuli is reported. This study aimed to further investigate the neural processing of food using magnetoencephalography (MEG). Participants were 13 females with restricting-type AN, 14 females recovered from restricting-type AN, and 15 female healthy controls. MEG data was acquired whilst participants viewed high- and low-calorie food pictures. Attention was assessed with a reaction time task and eye tracking. Time-series analysis suggested increased neural activity in response to both calorie conditions in the AN groups, consistent with an early AB. Increased activity was observed at 150 ms in the current AN group. Neuronal activity at this latency was at normal level in the recovered group; however, this group exhibited enhanced activity at 320 ms after stimulus. Consistent with previous studies, analysis in source space and behavioral data suggested enhanced attention and cognitive control processes in response to food stimuli in AN. This may enable avoidance of salient food stimuli and maintenance of dietary restraint in AN. A later latency of increased activity in the recovered group may reflect a reversal of this avoidance, with source space and behavioral data indicating increased visual and cognitive processing of food stimuli. |
Andrea Phillipou; Susan Lee Rossell; Caroline T. Gurvich; David Jonathan Castle; Nikolaus F. Troje; Larry Allen Abel Body image in anorexia nervosa: Body size estimation utilising a biological motion task and eyetracking Journal Article In: European Eating Disorders Review, vol. 24, no. 2, pp. 131–138, 2016. @article{Phillipou2016a,OBJECTIVE: Anorexia nervosa (AN) is a psychiatric condition characterised by a distortion of body image. However, whether individuals with AN can accurately perceive the size of other individuals' bodies is unclear. METHOD: In the current study, 24 women with AN and 24 healthy control participants undertook two biological motion tasks while eyetracking was performed: to identify the gender and to indicate the walkers' body size. RESULTS: Anorexia nervosa participants tended to 'hyperscan' stimuli but did not demonstrate differences in how visual attention was directed to different body areas, relative to controls. Groups also did not differ in their estimation of body size. DISCUSSION: The hyperscanning behaviours suggest increased anxiety to disorder-relevant stimuli in AN. The lack of group difference in the estimation of body size suggests that the AN group was able to judge the body size of others accurately. The findings are discussed in terms of body image distortion specific to oneself in AN. |
Renée M. Visser; Michelle I. C. Haan; Tinka Beemsterboer; Pia Haver; Merel Kindt; H. Steven Scholte Quantifying learning-dependent changes in the brain: Single-trial multivoxel pattern analysis requires slow event-related fMRI Journal Article In: Psychophysiology, vol. 53, no. 8, pp. 1117–1127, 2016. @article{Visser2016,Single-trial analysis is particularly useful for assessing cognitive processes that are intrinsically dynamic, such as learning. Studying these processes with fMRI is problematic, as the low signal-to-noise ratio of fMRI requires the averaging over multiple trials, obscuring trial-by-trial changes in neural activation. The superior sensitivity of multivoxel pattern analysis over univariate analyses has opened up new possibilities for single-trial analysis, but this may require different fMRI designs. Here, we measured fMRI and pupil dilation responses during discriminant aversive conditioning, to assess associative learning in a trial-by-trial manner. The impact of design choices was examined by varying trial spacing and trial order in a series of five experiments (total n = 66), while keeping stimulus duration constant (4.5 s). Our outcome measure was the change in similarity between neural response patterns related to two consecutive presentations of the same stimulus (within-stimulus) and between patterns related to pairs of different stimuli (between-stimulus) that shared a specific outcome (electric stimulation vs. no consequence). This trial-by-trial similarity analysis revealed clear single-trial learning curves in conditions with intermediate (8.1-12.6 s) and long (16.5-18.4 s) intervals, with effects being strongest in designs with long intervals and counterbalanced stimulus presentation. No learning curves were observed in designs with shorter intervals (1.6-6.1 s), indicating that rapid event-related designs (at present, the most common designs in fMRI research) are not suited for single-trial pattern analysis. These findings emphasize the importance of deciding on the type of analysis prior to data collection. |
Yingying Wu; Xiaohong Yang; Yufang Yang Eye movement evidence for hierarchy effects on memory representation of discourses Journal Article In: PLoS ONE, vol. 11, no. 1, pp. e0147313, 2016. @article{Wu2016b,In this study, we applied the text-change paradigm to investigate whether and how discourse hierarchy affected the memory representation of a discourse. Three kinds of three-sentence discourses were constructed. In the hierarchy-high condition and the hierarchy-low condition, the three sentences of the discourses were hierarchically organized and the last sentence of each discourse was located at the high level and the low level of the discourse hierarchy, respectively. In the linear condition, the three sentences of the discourses were linearly organized. Critical words were always located at the last sentence of the discourses. These discourses were successively presented twice and the critical words were changed to semantically related words in the second presentation. The results showed that during the early processing stage, the critical words were read for longer times when they were changed in the hierarchy-high and the linear conditions, but not in the hierarchy-low condition. During the late processing stage, the changed-critical words were again found to induce longer reading times only when they were in the hierarchy-high condition. These results suggest that words in a discourse have better memory representation when they are located at the higher rather than at the lower level of the discourse hierarchy. Global discourse hierarchy is established as an important factor in constructing the mental representation of a discourse. |
Eckart Zimmermann Spatiotopic buildup of saccade target representation depends on target size Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 11, 2016. @article{Zimmermann2016,How we maintain spatial stability across saccade eye movements is an open question in visual neuroscience. A phenomenon that has received much attention in the field is our seemingly poor ability to discriminate the direction of transsaccadic target displacements. We have recently shown that discrimination performance increases the longer the saccade target has been previewed before saccade execution (Zimmermann, Morrone, & Burr, 2013). We have argued that the spatial representation of briefly presented stimuli is weak but that a strong representation is needed for transsaccadic, i.e., spatiotopic localization. Another factor that modulates the representation of saccade targets is stimulus size. The representation of spatially extended targets is more noisy than that of point-like targets. Here, I show that the increase in transsaccadic displacement discrimination as a function of saccade target preview duration depends on target size. This effect was found for spatially extended targets—thus replicating the results of Zimmermann et al. (2013)—but not for point-like targets. An analysis of saccade parameters revealed that the constant error for reaching the saccade target was bigger for spatially extended than for point-like targets, consistent with a weaker representation of bigger targets. These results show that transsaccadic displacement discrimination becomes accurate when saccade targets are spatially extended and presented longer, thus more closely resembling stimuli in real-world environments. |
Steven C. Dakin; Philip R. K. Turnbull Similar contrast sensitivity functions measured using psychophysics and optokinetic nystagmus Journal Article In: Scientific Reports, vol. 6, pp. 34514, 2016. @article{Dakin2016,Although the contrast sensitivity function (CSF) is a particularly useful way of characterising functional vision, its measurement relies on observers making reliable perceptual reports. Such procedures can be challenging when testing children. Here we describe a system for measuring the CSF using an automated analysis of optokinetic nystagmus (OKN), an involuntary oscillatory eye movement made in response to drifting stimuli, here spatial-frequency (SF) band-pass noise. Quantifying the strength of OKN in the stimulus direction allows us to estimate contrast sensitivity across a range of SFs. We compared the CSFs of 30 observers with normal vision measured using both OKN and perceptual report. The approaches yield near-identical CSFs (mean R = 0.95) that capture subtle intra-observer variations in visual acuity and contrast sensitivity (both R = 0.84, p < 0.0001). Trial-by-trial analysis reveals high correlation between OKN and perceptual report, a signature of a common neural mechanism for determining stimulus direction. We also observe conditions where OKN and report are significantly decorrelated as a result of a minority of observers experiencing direction-reversals that are not reflected by OKN. We conclude that there is a wide range of stimulus conditions for which OKN can provide a valid alternative means of measuring the CSF. |
Bob McMurray; Ashley Farris-Trimble; Michael Seedorff; Hannah Rigler The effect of residual acoustic hearing and adaptation to uncertainty on speech perception in cochlear implant users: Evidence from eye-tracking Journal Article In: Ear & Hearing, vol. 37, no. 1, pp. e37–e51, 2016. @article{McMurray2016,OBJECTIVES: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. 
RESULTS: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. CONCLUSION: Residual acoustic hearing did not improve voicing categorization suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions. |
Andrea Phillipou; Susan Lee Rossell; Caroline T. Gurvich; Matthew Edward Hughes; David Jonathan Castle; Richard Grant Nibbs; Larry Allen Abel Saccadic eye movements in Anorexia Nervosa Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0152338, 2016. @article{Phillipou2016b,Background: Anorexia Nervosa (AN) has a mortality rate among the highest of any mental illness, though the factors involved in the condition remain unclear. Recently, the potential neurobiological underpinnings of the condition have become of increasing interest. Saccadic eye movement tasks have proven useful in our understanding of the neurobiology of some other psychiatric illnesses as they utilise known brain regions, but to date have not been examined in AN. The aim of this study was to investigate whether individuals with AN differ from healthy individuals in performance on a range of saccadic eye movements tasks. Methods: 24 females with AN and 25 healthy individuals matched for age, gender and premorbid intelligence participated in the study. Participants were required to undergo memory-guided and self-paced saccade tasks, and an interleaved prosaccade/antisaccade/no-go saccade task while undergoing functional magnetic resonance imaging (fMRI). Results: AN participants were found to make prosaccades of significantly shorter latency than healthy controls. AN participants also made an increased number of inhibitory errors on the memory-guided saccade task. Groups did not significantly differ in antisaccade, no-go saccade or self-paced saccade performance, or fMRI findings. Discussion: The results suggest a potential role of GABA in the superior colliculus in the psychopathology of AN. |
J. Kael White; Ilya E. Monosov Neurons in the primate dorsal striatum signal the uncertainty of object-reward associations Journal Article In: Nature Communications, vol. 7, pp. 12735, 2016. @article{White2016b,To learn, obtain reward and survive, humans and other animals must monitor, approach and act on objects that are associated with variable or unknown rewards. However, the neuronal mechanisms that mediate behaviours aimed at uncertain objects are poorly understood. Here we demonstrate that a set of neurons in internal-capsule-bordering regions of the primate dorsal striatum, within the putamen and caudate nucleus, signal the uncertainty of object–reward associations. Their uncertainty responses depend on the presence of objects associated with reward uncertainty and evolve rapidly as monkeys learn novel object–reward associations. Therefore, beyond its established role in mediating actions aimed at known or certain rewards, the dorsal striatum also participates in behaviours aimed at reward-uncertain objects. |
Serguei V. Astafiev; Kristina L. Zinn; Gordon L. Shulman; Maurizio Corbetta Exploring the physiological correlates of chronic mild traumatic brain injury symptoms Journal Article In: NeuroImage: Clinical, vol. 11, pp. 10–19, 2016. @article{Astafiev2016,We report on the results of a multimodal imaging study involving behavioral assessments, evoked and resting-state BOLD fMRI, and DTI in chronic mTBI subjects. We found that larger task-evoked BOLD activity in the MT+/LO region in extra-striate visual cortex correlated with mTBI and PTSD symptoms, especially light sensitivity. Moreover, higher FA values near the left optic radiation (OR) were associated with both light sensitivity and higher BOLD activity in the MT+/LO region. The MT+/LO region was localized as a region of abnormal functional connectivity with central white matter regions previously found to have abnormal physiological signals during visual eye movement tracking (Astafiev et al., 2015). We conclude that mTBI symptoms and light sensitivity may be related to excessive responsiveness of visual cortex to sensory stimuli. This abnormal sensitivity may be related to chronic remodeling of white matter visual pathways that were acutely injured. |
Sarah Schuster; Stefan Hawelka; Florian Hutzler; Martin Kronbichler; Fabio Richlan Words in context: The effects of length, frequency, and predictability on brain responses during natural reading Journal Article In: Cerebral Cortex, vol. 26, no. 10, pp. 3889–3904, 2016. @article{Schuster2016,Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging primarily stems from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region—hosting the putative visual word form area—was originally considered to be limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. |
Gregory P. Strauss; Kathryn L. Ossenfort; Kayla M. Whearty In: PLoS ONE, vol. 11, no. 11, pp. e0162290, 2016. @article{Strauss2016,Multiple emotion regulation strategies have been identified and found to differ in their effectiveness at decreasing negative emotions. One reason for this might be that individual strategies are associated with differing levels of cognitive demand and require distinct patterns of visual attention to achieve their effects. In the current study, we tested this hypothesis in a sample of psychiatrically healthy participants (n = 25) who attempted to down-regulate negative emotion to photographs from the International Affective Picture System using cognitive reappraisal or distraction. Eye movements, pupil dilation, and subjective reports of negative emotionality were obtained for reappraisal, distraction, unpleasant passive viewing, and neutral passive viewing conditions. Behavioral results indicated that reappraisal and distraction successfully decreased self-reported negative affect relative to unpleasant passive viewing. Successful down regulation of negative affect was associated with different patterns of visual attention across regulation strategies. During reappraisal, there was an initial increase in dwell time to arousing scene regions and a subsequent shift away from these regions during later portions of the trial, whereas distraction was associated with reduced total dwell time to arousing interest areas throughout the entire stimulus presentation. Pupil dilation was greater for reappraisal than distraction or unpleasant passive viewing, suggesting that reappraisal may recruit more effortful cognitive control processes. Furthermore, greater decreases in self-reported negative emotion were associated with a lower proportion of dwell time within arousing areas of interest. 
These findings suggest that different emotion regulation strategies necessitate different patterns of visual attention to be effective and that individual differences in visual attention predict the extent to which individuals can successfully decrease negative emotion using reappraisal and distraction. |
Ian M. Erkelens; Benjamin Thompson; William R. Bobier Unmasking the linear behaviour of slow motor adaptation to prolonged convergence Journal Article In: European Journal of Neuroscience, vol. 43, no. 12, pp. 1553–1560, 2016. @article{Erkelens2016,Adaptation to changing environmental demands is central to maintaining optimal motor system function. Current theories suggest that adaptation in both the skeletal-motor and oculomotor systems involves a combination of fast (reflexive) and slow (recalibration) mechanisms. Here we used the oculomotor vergence system as a model to investigate the mechanisms underlying slow motor adaptation. Unlike reaching with the upper limbs, vergence is less susceptible to changes in cognitive strategy that can affect the behaviour of motor adaptation. We tested the hypothesis that mechanisms of slow motor adaptation reflect early neural processing by assessing the linearity of adaptive responses over a large range of stimuli. Using varied disparity stimuli in conflict with accommodation, the slow adaptation of tonic vergence was found to exhibit a linear response whereby the rate (R(2) = 0.85, p < 0.0001) and amplitude (R(2) = 0.65, p < 0.0001) of the adaptive effects increased proportionally with stimulus amplitude. These results suggest that this slow adaptive mechanism is an early neural process, fundamentally physiological in nature and potentially dominated by subcortical and cerebellar substrates. |
Jukka Hyönä; Miia Ekholm Background speech effects on sentence processing during reading: An eye movement study Journal Article In: PLoS ONE, vol. 11, no. 3, pp. e0152133, 2016. @article{Hyoenae2016,Effects of background speech on reading were examined by playing aloud different types of background speech, while participants read long, syntactically complex and less complex sentences embedded in text. Readers' eye movement patterns were used to study online sentence comprehension. Effects of background speech were primarily seen in rereading time. In Experiment 1, foreign-language background speech did not disrupt sentence processing. Experiment 2 demonstrated robust disruption in reading as a result of semantically and syntactically anomalous scrambled background speech preserving normal sentence-like intonation. Scrambled speech that was constructed from the text to be read did not disrupt reading more than scrambled speech constructed from a different, semantically unrelated text. Experiment 3 showed that scrambled speech exacerbated the syntactic complexity effect more than coherent background speech, which also interfered with reading. Experiment 4 demonstrated that both semantically and syntactically anomalous speech produced no more disruption in reading than semantically anomalous but syntactically correct background speech. The pattern of results is best explained by a semantic account that stresses the importance of similarity in semantic processing, but not similarity in semantic content, between the reading task and background speech. |
Yu-Cin Jian Fourth graders' cognitive processes and learning strategies for reading illustrated biology texts: Eye movement measurements Journal Article In: Reading Research Quarterly, vol. 51, no. 1, pp. 93–109, 2016. @article{Jian2016,Previous research suggests that multiple representations can improve science reading comprehension. This facilitation effect is premised on the observation that readers can efficiently integrate information in text and diagram formats; however, this effect in young readers is still contested. Using eye-tracking technology and sequential analysis, this study investigated students' reading strategies and comprehension of illustrated biology texts in relation to adult readers' performance. The target population was fourth-grade students with high reading ability, and the control group was university students. All participants read a biology article from an elementary school science textbook containing two illustrations, one representational and one decorative. After the reading task, participants answered questions on recognition, textual, and illustration items. Unsurprisingly, the university students outperformed the younger students on all tests; however, more interestingly, eye movement patterns differed across the two groups. The adult readers demonstrated bidirectional reading pathways for both text and illustrations, whereas the fourth graders' eye fixations only went back and forth within paragraphs in the text and between the illustrations, but made fewer references to both text and illustration. This suggests that regardless of their high reading ability, fourth-grade students' visual literacy is not mature enough to perceive connections between corresponding features of different representations crucial to reading comprehension. Despite differences in cognitive processes between adult readers and young readers, high-ability young readers still have certain capabilities in reading comprehension. 
The results of sequential analysis showed that they looked back to previous paragraphs frequently, indicating that they were monitoring their comprehension. |
Johanne Tromp; Peter Hagoort; Antje S. Meyer Pupillometry reveals increased pupil size during indirect request comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 6, pp. 1093–1108, 2016. @article{Tromp2016,Fluctuations in pupil size have been shown to reflect variations in processing demands during lexical and syntactic processing in language comprehension. An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In two pupillometry experiments, we investigated whether pupil diameter was sensitive to increased processing demands as a result of comprehending an indirect request versus a direct statement. Adult participants were presented with 120 picture-sentence combinations that could be interpreted either as an indirect request (a picture of a window with the sentence "it's very hot here") or as a statement (a picture of a window with the sentence "it's very nice here"). Based on the hypothesis that understanding indirect utterances requires additional inferences to be made on the part of the listener, we predicted a larger pupil diameter for indirect requests than statements. The results of both experiments are consistent with this expectation. We suggest that the increase in pupil size reflects additional processing demands for the comprehension of indirect requests as compared to statements. This research demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics. |
Natsuki Atagi; Melissa DeWolf; James W. Stigler; Scott P. Johnson The role of visual representations in college students' understanding of mathematical notation Journal Article In: Journal of Experimental Psychology: Applied, vol. 22, no. 3, pp. 295–304, 2016. @article{Atagi2016,Developing understanding of fractions involves connections between nonsymbolic visual representations and symbolic representations. Initially, teachers introduce fraction concepts with visual representations before moving to symbolic representations. Once the focus is shifted to symbolic representations, the connections between visual representations and symbolic notation are considered to be less useful, and students are rarely asked to connect symbolic notation back to visual representations. In 2 experiments, we ask whether visual representations affect understanding of symbolic notation for adults who understand symbolic notation. In a conceptual fraction comparison task (e.g., Which is larger, 5 / a or 8 / a?), participants were given comparisons paired with accurate, helpful visual representations, misleading visual representations, or no visual representations. The results show that even college students perform significantly better when accurate visuals are provided over misleading or no visuals. Further, eye-tracking data suggest that these visual representations may affect performance even when only briefly looked at. Implications for theories of fraction understanding and education are discussed. |
Kinan Muhammed; Sanjay G. Manohar; Michael Ben Yehuda; Trevor T. J. Chong; George Tofaris; Graham Lennox; Marko Bogdanovic; Michele Hu; Masud Husain Reward sensitivity deficits modulated by dopamine are associated with apathy in Parkinson's disease Journal Article In: Brain, vol. 139, no. 10, pp. 2706–2721, 2016. @article{Muhammed2016,Apathy is a debilitating and under-recognized condition that has a significant impact in many neurodegenerative disorders. In Parkinson's disease, it is now known to contribute to worse outcomes and a reduced quality of life for patients and carers, adding to health costs and extending disease burden. However, despite its clinical importance, there remains limited understanding of mechanisms underlying apathy. Here we investigated if insensitivity to reward might be a contributory factor and examined how this relates to severity of clinical symptoms. To do this we created novel ocular measures that indexed motivation level using pupillary and saccadic response to monetary incentives, allowing reward sensitivity to be evaluated objectively. This approach was tested in 40 patients with Parkinson's disease, 31 elderly age-matched control participants and 20 young healthy volunteers. Thirty patients were examined ON and OFF their dopaminergic medication in two counterbalanced sessions, so that the effect of dopamine on reward sensitivity could be assessed. Pupillary dilation to increasing levels of monetary reward on offer provided quantifiable metrics of motivation in healthy subjects as well as patients. Moreover, pupillary reward sensitivity declined with age. In Parkinson's disease, reduced pupillary modulation by incentives was predictive of apathy severity, and independent of motor impairment and autonomic dysfunction as assessed using overnight heart rate variability measures. 
Reward sensitivity was further modulated by dopaminergic state, with blunted sensitivity when patients were OFF dopaminergic drugs, both in pupillary response and saccadic peak velocity response to reward. These findings suggest that reward insensitivity may be a contributory mechanism to apathy and provide potential new clinical measures for improved diagnosis and monitoring of apathy. |
Sarah J. White; Denis Drieghe; Simon P. Liversedge; Adrian Staub The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 46–55, 2016. @article{White2016a,The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters in comparing high- and medium-frequency words, and medium- and low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed. |
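The model-comparison logic in the White et al. abstract (a nonlinear effect must earn its extra parameter, or the simpler linear model is favored) can be sketched with invented fixation durations, using BIC as a rough stand-in for the Bayes Factors reported in the paper; none of the numbers below come from the study.

```python
import numpy as np

# Hypothetical mean fixation durations (ms) at six log-frequency levels.
log_freq = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
fix_dur = np.array([260.0, 245.0, 232.0, 218.0, 205.0, 190.0])

lin_fit = np.polyfit(log_freq, fix_dur, 1)    # linear model
quad_fit = np.polyfit(log_freq, fix_dur, 2)   # adds a curvature term

rss_lin = np.sum((fix_dur - np.polyval(lin_fit, log_freq)) ** 2)
rss_quad = np.sum((fix_dur - np.polyval(quad_fit, log_freq)) ** 2)

# The nested quadratic model always fits at least as well
# (rss_quad <= rss_lin); BIC penalizes the extra parameter.
n = len(log_freq)
bic_lin = n * np.log(rss_lin / n) + 2 * np.log(n)
bic_quad = n * np.log(rss_quad / n) + 3 * np.log(n)

assert rss_quad <= rss_lin + 1e-9
assert bic_lin < bic_quad  # near-linear data: simpler model preferred
```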
Andrea Alamia; Alexandre Zénon Statistical regularities attract attention when task-relevant Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 42, 2016. @article{Alamia2016,Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e. the effect of color predictability on reaction times (RT), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the 2 colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. 
In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task. |
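The attention-allocation index described in the Alamia and Zénon abstract, a Mahalanobis distance between the position, velocity, and acceleration of the eyes and each moving dot, can be sketched as follows; the state vectors and the identity covariance are toy assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mahalanobis(gaze_state, stim_state, cov):
    """Mahalanobis distance between the gaze kinematic state and a
    stimulus kinematic state; a smaller distance indicates that gaze
    is tracking that stimulus more closely."""
    diff = gaze_state - stim_state
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Toy state vectors [x, y, vx, vy, ax, ay] for the gaze and two dots;
# an identity covariance reduces the index to Euclidean distance.
gaze = np.array([0.1, 0.0, 1.0, 0.2, 0.0, 0.0])
dot_a = np.array([0.1, 0.1, 1.1, 0.2, 0.0, 0.0])
dot_b = np.array([2.0, -1.0, -0.5, 0.9, 0.1, 0.0])
cov = np.eye(6)

d_a = mahalanobis(gaze, dot_a, cov)
d_b = mahalanobis(gaze, dot_b, cov)
assert d_a < d_b  # attention indexed as allocated to dot A
```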
Olga Dal Monte; Matthew Piva; Jason A. Morris; Steve W. C. Chang Live interaction distinctively shapes social gaze dynamics in rhesus macaques Journal Article In: Journal of Neurophysiology, vol. 116, no. 4, pp. 1626–1643, 2016. @article{DalMonte2016,The dynamic interaction of gaze between individuals is a hallmark of social cognition. However, very few studies have examined social gaze dynamics after mutual eye contact during real-time interactions. We used a highly quantifiable paradigm to assess social gaze dynamics between pairs of monkeys and modeled these dynamics using an exponential decay function to investigate sustained attention after mutual eye contact. When monkeys were interacting with real partners compared with static images and movies of the same monkeys, we found a significant increase in the proportion of fixations to the eyes and a smaller dispersion of fixations around the eyes, indicating enhanced focal attention to the eye region. Notably, dominance and familiarity between the interacting pairs induced separable components of gaze dynamics that were unique to live interactions. Gaze dynamics of dominant monkeys after mutual eye contact were associated with a greater number of fixations to the eyes, whereas those of familiar pairs were associated with a faster rate of decrease in this eye-directed attention. Our findings endorse the notion that certain key aspects of social cognition are only captured during interactive social contexts and dependent on the elapsed time relative to socially meaningful events. |
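The exponential-decay modeling of post-eye-contact attention mentioned in the Dal Monte et al. abstract can be illustrated with a minimal log-linear fit; the decay parameters and sampling below are invented for the sketch and are not the study's values.

```python
import numpy as np

def fit_exponential_decay(t, y):
    """Fit y = a * exp(-t / tau) via linear regression on log(y);
    returns the initial level a and the decay constant tau."""
    slope, intercept = np.polyfit(t, np.log(y), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)

# Synthetic "eye-directed attention" trace after mutual eye contact.
t = np.linspace(0.0, 5.0, 50)   # seconds since mutual eye contact
y = 0.8 * np.exp(-t / 1.5)      # proportion of fixations to the eyes

a_hat, tau_hat = fit_exponential_decay(t, y)
# noiseless data, so the fit recovers a = 0.8 and tau = 1.5
```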
Zhaohui Duan; Fuxing Wang; Jianzhong Hong Culture shapes how we look: Comparison between Chinese and African university students Journal Article In: Journal of Eye Movement Research, vol. 9, no. 6, pp. 1–10, 2016. @article{Duan2016,Previous cross-cultural studies have found that cultures can shape eye movements during scene perception, but that research has been limited to the West. This study recruited Chinese and African students to document cultural effects on two phases of scene perception. In the free-viewing phase, Africans fixated more on the focal objects than Chinese, while Chinese paid more attention to the backgrounds than Africans, especially on the fourth and fifth fixations. In the recognition phase, there was no cultural difference in perception, but Chinese recognized more objects than Africans. We conclude that cultural differences exist in scene perception when there is no explicit task, and more clearly in its later period, and that some differences may be hidden in deeper processes (e.g., memory) during an explicit task. |
Efthymia C. Kapnoula; Bob McMurray Training alters the resolution of lexical interference: Evidence for plasticity of competition and inhibition Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 1, pp. 8–30, 2016. @article{Kapnoula2016,Language learning is generally described as a problem of acquiring new information (e.g., new words). However, equally important are changes in how the system processes known information. For example, a wealth of studies has suggested dramatic changes over development in how efficiently children recognize familiar words, but it is unknown what kind of experience-dependent mechanisms of plasticity give rise to such changes in real-time processing. We examined the plasticity of the language processing system by testing whether a fundamental aspect of spoken word recognition, lexical interference, can be altered by experience. Adult participants were trained on a set of familiar words over a series of 4 tasks. In the high-competition (HC) condition, tasks were designed to encourage coactivation of similar words (e.g., net and neck) and to require listeners to resolve this competition. Tasks were similar in the low-competition (LC) condition, but did not enhance this competition. Immediately after training, interlexical interference was tested using a visual world paradigm task. Participants in the HC group resolved interference to a fuller degree than those in the LC group, demonstrating that experience can shape the way competition between words is resolved. TRACE simulations showed that the observed late differences in the pattern of interference resolution can be attributed to differences in the strength of lexical inhibition. These findings inform cognitive models in many domains that involve competition/interference processes, and suggest an experience-dependent mechanism of plasticity that may underlie longer term changes in processing efficiency associated with both typical and atypical development. |
Antimo Buonocore; Robert D. McIntosh; David Melcher Beyond the point of no return: Effects of visual distractors on saccade amplitude and velocity Journal Article In: Journal of Neurophysiology, vol. 115, no. 2, pp. 752–762, 2016. @article{Buonocore2016a,Visual transients, such as a bright flash, reduce the proportion of saccades executed, ∼60–125 ms after flash onset, a phenomenon known as saccadic inhibition (SI). Across three experiments, we apply a similar time-course analysis to the amplitudes and velocities of saccades. Alongside the expected reduction of saccade frequency in the key time period, we report two perturbations of the “main sequence”: one before and one after the period of SI. First, saccades launched between 30 and 70 ms, following the flash, were hypometric, with peak speed exceeding that expected for a saccade of similar amplitude. This finding was in contrast to the common idea that saccades have passed a “point of no return,” ∼60 ms before launching, escaping interference from distractors. The early hypometric saccades observed were not a consequence of spatial averaging between target and distractor locations, as they were found not only following a localized central flash (experiment 1) but also following a spatially generalized flash (experiment 2). Second, across experiments, saccades launched at 110 ms postflash, toward the end of SI, had normal amplitude but a peak speed higher than expected for that amplitude, suggesting increased collicular excitation at the time of launching. Overall, the results show that saccades that escape inhibition following a visual transient are not necessarily unaffected but instead, can reveal interference in spatial and kinematic measures. |
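The time-course analysis this abstract applies to saccade frequency — binning saccade onsets relative to the flash to reveal the saccadic-inhibition dip at roughly 60–125 ms — can be sketched as a simple peri-flash histogram. The bin width, window, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def saccadic_inhibition_timecourse(saccade_times, flash_time, bins_ms=None):
    """Histogram of saccade onsets relative to a flash, in 10-ms bins,
    to visualize the dip in saccade frequency (~60-125 ms post-flash)
    known as saccadic inhibition. A sketch of the time-course analysis
    described in Buonocore et al. (2016); window and bin width are
    illustrative choices.

    saccade_times, flash_time: in seconds.
    Returns (bin_centers_ms, counts).
    """
    if bins_ms is None:
        bins_ms = np.arange(-100, 301, 10)  # -100 to +300 ms, 10-ms bins
    rel = (np.asarray(saccade_times) - flash_time) * 1000.0  # s -> ms
    counts, edges = np.histogram(rel, bins=bins_ms)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers, counts
```

The same binning, applied to amplitude or peak velocity instead of counts, would give the kinematic time courses the study reports.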
Efthymia C. Kapnoula; Bob McMurray Newly learned word forms are abstract and integrated immediately after acquisition Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 491–499, 2016. @article{Kapnoula2016a,A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35-39, 2007; Gaskell & Dumay, Cognition, 89, 105-132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85-99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. |
Martijn J. Schut; Jasper H. Fabius; Stefan Van der Stigchel Investigating the parameters of transsaccadic memory: inhibition of return impedes information acquisition near a saccade target Journal Article In: Visual Cognition, vol. 24, no. 2, pp. 141–154, 2016. @article{sfv16,A limited amount of visual information is retained between saccades, which is subsequently stored into a memory system, such as transsaccadic memory. Since the capacity of transsaccadic memory is limited, selection of information is crucial. Selection of relevant information is modulated by attentional processes such as the presaccadic shift of attention. This involuntary shift of attention occurs prior to execution of the saccade and leads to information acquisition at an intended saccade target. The aim of the present study was to investigate the influence that another attentional effect, inhibition of return (IOR), has on the information that gets stored into transsaccadic memory. IOR is the phenomenon where participants are slower to respond to a cue at a previously attended location. To this end, we used a transsaccadic memory paradigm in which stimuli, oriented on a horizontal axis relative to saccade direction, are only visible to the participant before executing a saccade. Previous research showed that items in close proximity to a saccade target are likely to be reported more accurately. In our current study, participants were cued to fixate one of the stimulus locations and subsequently refixated the centre fixation point before executing the transsaccadic memory task. Results indicate that information at a location near a saccade landing point is less likely to be acquired into transsaccadic memory when this location was previously associated with IOR. Furthermore, we found evidence which implicates a reduction of the overall amount of elements retained in transsaccadic memory when a location near a saccade target is associated with IOR. 
These results suggest that the presaccadic shift of attention may be modulated by IOR and thereby reduces information acquisition by transsaccadic memory. |
Andrej Vlasenko; Tadas Limba; Mindaugas Kiškis; Gintarė Gulevičiūtė Research on human emotion while playing a computer game using pupil recognition technology Journal Article In: TEM Journal, vol. 5, no. 4, pp. 417–423, 2016. @article{Vlasenko2016,The article presents the results of an experiment in which participants played an online game (poker) while a video camera recorded the diameters of their eye pupils. Pupil-diameter data were extracted from these recordings with the aid of a computer program, and diagrams of the changes in the players' pupil diameters were created as a function of the game situation. The study was conducted in a real-life setting, with the players playing online poker. The results indicate a connection between changes in the players' psycho-emotional state and changes in their pupil diameters, the emotional state being a critical factor affecting the operation of such systems. |
Andreas Wutz; Jan Drewes; David Melcher Nonretinotopic perception of orientation: Temporal integration of basic features operates in object-based coordinates Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–15, 2016. @article{Wutz2016,Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three experiments supporting nonretinotopic feature integration even for the most paradigmatic example of retinotopically-defined features: orientation. We presented observers with a repeated series of T-P displays in which the perceived rotation of Gabor gratings indicates processing in either retinotopic or object-based coordinates. In Experiment 1, the frequency of perceived retinotopic rotations decreased exponentially for longer interstimulus intervals (ISIs) between T-P display frames, with object-based percepts dominating after about 150-250 ms. In a second experiment, we show that motion and rotation judgments depend on the perception of a moving object during the T-P display ISIs rather than only on temporal factors. In Experiment 3, we cued the observers' attentional state either toward a retinotopic or object motion-based reference frame and then tracked both the observers' eye position and the time course of the perceptual bias while viewing identical T-P display sequences. Overall, we report novel evidence for spatiotemporal integration of even basic visual features such as orientation in nonretinotopic coordinates, in order to support perceptual constancy across self- and object motion. |
Delphine Lévy-Bencheton; Aarlenne Zein Khan; Denis Pélisson; Caroline Tilikete; Laure Pisella Adaptation of saccadic sequences with and without remapping Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 359, 2016. @article{LevyBencheton2016,It is relatively easy to adapt visually-guided saccades because the visual vector and the saccade vector match. The retinal error at the saccade landing position is compared to the prediction error, based on target location and efference copy. If these errors do not match, planning processes at the level(s) of the visual and/or motor vector processing are assumed to be inaccurate and the saccadic response is adjusted. In the case of a sequence of two saccades, the final error can be attributed to the last saccade vector or to the entire saccadic displacement. Here, we asked whether and how adaptation can occur in the case of remapped saccades, such as during the classic double-step saccade paradigm, where the visual and motor vectors of the second saccade do not coincide and so the attribution of error is ambiguous. Participants performed saccades sequences to two targets briefly presented prior to first saccade onset. The second saccade target was either briefly re-illuminated (sequential visually-guided task) or not (remapping task) upon first saccade offset. To drive adaptation, the second target was presented at a displaced location (backward or forward jump condition or control-no jump) at the end of the second saccade. Pre- and post-adaptation trials were identical, without the re-appearance of the target after the second saccade. For the 1st saccade endpoints, there was no change as a function of adaptation. For the 2nd saccade, there was a similar increase in gain in the forward jump condition (52% and 61% of target jump) in the two tasks, whereas the gain decrease in the backward condition was much smaller for the remapping task than for the sequential visually-guided task (41% vs. 94%). 
In other words, the absolute gain change was similar between backward and forward adaptation for remapped saccades. In conclusion, we show that remapped saccades can be adapted, suggesting that the error is attributed to the visuo-motor transformation of the remapped visual vector. The mechanisms by which adaptation takes place for remapped saccades may be similar to those of forward sequential visually-guided saccades, unlike those involved in adaptation for backward sequential visually-guided saccades. |
Ehab W. Hermena; Simon P. Liversedge; Denis Drieghe Parafoveal processing of Arabic diacritical marks Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 12, pp. 2021–2038, 2016. @article{Hermena2016,Diacritics are glyph-like marks on letters that convey vowel information in Arabic, thus allowing for accurate pronunciation and disambiguation of homographs. For skilled readers, diacritics are usually omitted except when their omission causes ambiguity. Undiacritized homographs are very common in Arabic and are predominantly heterophones (where each meaning sounds different), with 1 version more common (dominant) than the others (subordinate). In this study the authors investigated parafoveal processing of diacritics during reading. They presented native readers with heterophonic homographs embedded in sentences with diacritization that instantiated either dominant or subordinate pronunciations of the homographs. Using the boundary paradigm, they presented previews of these words carrying either: identical diacritization to the target; inaccurate diacritization, such that if the target had dominant diacritization, the preview contained subordinate diacritization, and vice versa; or no diacritics. The results showed that readers processed the identity of diacritics parafoveally, such that inaccurate previews of the diacritics resulted in inflated fixation durations, particularly for fixations originating at close launch sites. Moreover, our results clearly indicate that readers' expectation for dominant or subordinate diacritization patterns influences their parafoveal and foveal processing of diacritics. Specifically, a perceived absence of diacritics (either in no-diacritics previews, or because the eyes were too far away to process the presence of diacritics) induced an expectation for the dominant pronunciation, whereas the perceived presence of diacritics induced an expectation for the subordinate meaning. |
Mikael Lundqvist; Jonas Rose; Pawel Herman; Scott L. Brincat; Timothy J. Buschman; Earl K. Miller Gamma and beta bursts underlie working memory Journal Article In: Neuron, vol. 90, no. 1, pp. 152–164, 2016. @article{Lundqvist2016,Working memory is thought to result from sustained neuron spiking. However, computational models suggest complex dynamics with discrete oscillatory bursts. We analyzed local field potential (LFP) and spiking from the prefrontal cortex (PFC) of monkeys performing a working memory task. There were brief bursts of narrow-band gamma oscillations (45–100 Hz), varied in time and frequency, accompanying encoding and re-activation of sensory information. They appeared at a minority of recording sites associated with spiking reflecting the to-be-remembered items. Beta oscillations (20–35 Hz) also occurred in brief, variable bursts but reflected a default state interrupted by encoding and decoding. Only activity of neurons reflecting encoding/decoding correlated with changes in gamma burst rate. Thus, gamma bursts could gate access to, and prevent sensory interference with, working memory. This supports the hypothesis that working memory is manifested by discrete oscillatory dynamics and spiking, not sustained activity. |
Jessica Nelson Taylor; Charles A. Perfetti Eye movements reveal readers' lexical quality and reading experience Journal Article In: Reading and Writing, vol. 29, no. 6, pp. 1069–1103, 2016. @article{Taylor2016,Two experiments demonstrate that individual differences among normal adult readers, including lexical quality, are expressed in silent reading at the word level. In the first of two studies we identified major dimensions of variability among college readers and among words using factor analysis. We then examined the effects of these dimensions of variability on eye movements during paragraph reading. More experienced readers (who also were higher in reading speed) read words more quickly, especially less frequent words, while readers with higher lexical knowledge showed shorter early fixations, especially for more frequent words. These results suggest that individual differences in reading may reflect differences in the quality of lexical representations and in reading experience, which is a source of lexical quality. In a second study, we controlled the lexical knowledge readers obtained from new words through a training paradigm that varied exposure to a word's orthographic, phonological, and meaning constituents. Training exposure to orthographic and phonological constituents affected first pass reading measures, and phonological and meaning training affected second pass measures. Incomplete knowledge of word components slowed first pass reading times, compared to both more complete knowledge and no knowledge. Training effects were mediated by individual differences, pointing to lexical quality and reading experience—which, combined, reflect reading expertise—as important in word reading as part of text reading. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Adaptation to size affects saccades with long but not short latencies Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 2, 2016. @article{Zimmermann2016a,Maintained exposure to a specific stimulus property— such as size, color, or motion—induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location. |
Rutvik H. Desai; Wonil Choi; Vicky T. Lai; John M. Henderson Toward semantics in the wild: Activation to manipulable nouns in naturalistic reading Journal Article In: Journal of Neuroscience, vol. 36, no. 14, pp. 4050–4055, 2016. @article{Desai2016,The neural basis of language processing, in the context of naturalistic reading of connected text, is a crucial but largely unexplored area. Here we combined functional MRI and eye tracking to examine the reading of text presented as whole paragraphs in two experiments with human subjects. We registered high-temporal resolution eye-tracking data to a low-temporal resolution BOLD signal to extract responses to single words during naturalistic reading where two to four words are typically processed per second. As a test case of a lexical variable, we examined the response to noun manipulability. In both experiments, signal in the left anterior inferior parietal lobule and posterior inferior temporal gyrus and sulcus was positively correlated with noun manipulability. These regions are associated with both action performance and action semantics, and their activation is consistent with a number of previous studies involving tool words and physical tool use. The results show that even during rapid reading of connected text, where semantics of words may be activated only partially, the meaning of manipulable nouns is grounded in action performance systems. This supports the grounded cognition view of semantics, which posits a close link between sensory-motor and conceptual systems of the brain. On the methodological front, these results demonstrate that BOLD responses to lexical variables during naturalistic reading can be extracted by simultaneous use of eye tracking. This opens up new avenues for the study of language and reading in the context of connected text. |
Wendy Ming; Dimitrios J. Palidis; Miriam Spering; Martin J. McKeown Visual contrast sensitivity in early-stage Parkinson's disease Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 13, pp. 5696–5704, 2016. @article{Ming2016,Purpose: Visual impairments are frequent in Parkinson's disease (PD) and impact normal functioning in daily activities. Visual contrast sensitivity is a powerful nonmotor sign for discriminating PD patients from controls. However, it is usually assessed with static visual stimuli. Here we examined the interaction between perception and eye movements in static and dynamic contrast sensitivity tasks in a cohort of mildly impaired, early-stage PD patients. Methods: Patients (n = 13) and healthy age-matched controls (n = 12) viewed stimuli of various spatial frequencies (0-8 cyc/deg) and speeds (0°/s, 10°/s, 30°/s) on a computer monitor. Detection thresholds were determined by asking participants to adjust luminance contrast until they could just barely see the stimulus. Eye position was recorded with a video-based eye tracker. Results: Patients' static contrast sensitivity was impaired in the intermediate spatial-frequency range and this impairment correlated with fixational instability. However, dynamic contrast sensitivity and patients' smooth pursuit were relatively normal. An independent component analysis revealed contrast sensitivity profiles differentiating patients and controls. Conclusions: Our study simultaneously assesses perceptual contrast sensitivity and eye movements in PD, revealing a possible link between fixational instability and perceptual deficits. Spatiotemporal contrast sensitivity profiles may represent an easily measurable metric as a component of a broader combined biometric for nonmotor features observed in PD. |
Zhongling Pi; Jianzhong Hong Learning process and learning outcomes of video podcasts including the instructor and PPT slides: A Chinese case Journal Article In: Innovations in Education and Teaching International, vol. 53, no. 2, pp. 135–144, 2016. @article{Pi2016,Video podcasts have become one of the fastest developing trends in learning and teaching. The study explored the effect of the presenting mode of educational video podcasts on the learning process and learning outcomes. Prior to viewing a video podcast, the 94 Chinese undergraduates participating in the study completed a demographic questionnaire and prior knowledge test. The learning process was investigated by eye-tracking and the learning outcome by a learning test. The results revealed that the participants using the video podcast with both the instructor and PPT slides gained the best learning outcomes. It was noted that they allocated much more visual attention to the instructor than to the PPT slides. It was additionally found that participants reached the peak of mental fatigue at 22 min. The results of our study imply that the use of educational technology is culture bound. |
Stephen Soncin; Donald C. Brien; Brian C. Coe; Alina Marin; Douglas P. Munoz Contrasting emotion processing and executive functioning in attention-deficit/hyperactivity disorder and bipolar disorder Journal Article In: Behavioral Neuroscience, vol. 130, no. 5, pp. 531–543, 2016. @article{Soncin2016,Attention-deficit/hyperactivity disorder (ADHD) and bipolar disorder (BD) are highly comorbid and share executive function and emotion processing deficits, complicating diagnoses despite distinct clinical features. We compared performance on an oculomotor task that assessed these processes to capture subtle differences between ADHD and BD. The interaction between emotion processing and executive functioning may be informative because, although these processes overlap anatomically, certain regions that are compromised in each network are different in ADHD and BD. Adults, aged 18-62, with ADHD (n = 22), BD (n = 20), and healthy controls (n = 21) performed an interleaved pro- and antisaccade task (looking toward vs. looking away from a visual target, respectively). Task irrelevant emotional faces (fear, happy, sad, neutral) were presented on a subset of trials either before or with the target. The ADHD group made more direction errors (looked in the wrong direction) than controls. Presentation of negatively valenced (fear, sad) and ambiguous (neutral) emotional faces increased saccadic reaction time in BD only compared to controls, whereas longer presentation of sad faces modestly increased group differences. The antisaccade task differentiated ADHD from controls. Emotional processing further impaired processing speed in BD. We propose that the dorsolateral prefrontal cortex is critical in both processing systems, but the inhibitory signal this region generates is impacted by dysfunction in the emotion processing network, possibly at the orbitofrontal cortex, in BD. 
These results suggest there are differences in how emotion processing and executive functioning interact, which could be utilized to improve diagnostic specificity. |
Andrea Albonico; Manuela Malaspina; Emanuela Bricolo; Marialuisa Martelli; Roberta Daini Temporal dissociation between the focal and orientation components of spatial attention in central and peripheral vision Journal Article In: Acta Psychologica, vol. 171, pp. 85–92, 2016. @article{Albonico2016,Selective attention, i.e. the ability to concentrate one's limited processing resources on one aspect of the environment, is a multifaceted concept that includes different processes like spatial attention and its subcomponents of orienting and focusing. Several studies, indeed, have shown that performance in visual tasks is positively influenced not only by attracting attention to the target location (orientation component), but also by the adjustment of the size of the attentional window according to task demands (focal component). Nevertheless, the relative weight of the two components in central and peripheral vision has never been studied. We conducted two experiments to explore whether different components of spatial attention have different effects in central and peripheral vision. In order to do so, participants underwent either a detection (Experiment 1) or a discrimination (Experiment 2) task where different types of cues elicited different components of spatial attention: a red dot, a small square and a big square (an optimal stimulus for the orientation component, and an optimal and a sub-optimal stimulus for the focal component, respectively). Response times and cue-size effects indicated a stronger effect of the small square or of the dot in different conditions, suggesting the existence of a dissociation in terms of mechanisms between the focal and the orientation components of spatial attention. Specifically, we found that the orientation component was stronger in the periphery, while the focal component was noticeable only in central vision and characterized by an exogenous nature. |
Yu-Cin Jian; Chao-Jung Wu In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016. @article{Jian2016a,Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and reading tests. Participants read two diagrams depicting how a flushing system works, with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed that the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed that the arrow group spent less time than the non-arrow group reading the diagrams and text that conveyed less complicated concepts, but both groups allocated considerable cognitive resources to complicated diagrams and sentences. Overall, this study found that learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that kinematic information conveyed via diagrams is independent of that conveyed via text in some areas. |
Judith Lunn; Tim Donovan; Damien Litchfield; Charlie Lewis; Robert Davies; Trevor J. Crawford Saccadic eye movement abnormalities in children with epilepsy Journal Article In: PLoS ONE, vol. 11, no. 8, pp. e0160508, 2016. @article{Lunn2016,Childhood onset epilepsy is associated with disrupted developmental integration of sensorimotor and cognitive functions that contribute to persistent neurobehavioural comorbidities. The role of epilepsy and its treatment on the development of functional integration of motor and cognitive domains is unclear. Oculomotor tasks can probe neurophysiological and neurocognitive mechanisms vulnerable to developmental disruptions by epilepsy-related factors. The study involved 26 patients and 48 typically developing children aged 8–18 years old who performed a prosaccade and an antisaccade task. Analyses compared medicated chronic epilepsy patients and unmedicated controlled epilepsy patients to healthy control children on saccade latency, accuracy and dynamics, errors and correction rate, and express saccades. Patients with medicated chronic epilepsy had impaired and more variable processing speed, reduced accuracy, increased peak velocity and a greater number of inhibitory errors; younger unmedicated patients also showed deficits in error monitoring. Deficits were related to reported behavioural problems in patients. Epilepsy factors were significant predictors of oculomotor functions. An earlier age at onset predicted reduced latency of prosaccades and increased express saccades, and the typical relationship between express saccades and inhibitory errors was absent in chronic patients, indicating a persistent reduction in tonic cortical inhibition and aberrant cortical connectivity. In contrast, onset in later childhood predicted altered antisaccade dynamics indicating disrupted neurotransmission in frontoparietal and oculomotor networks with greater demand on inhibitory control. 
The observed saccadic abnormalities are consistent with a dysmaturation of subcortical-cortical functional connectivity and aberrant neurotransmission. Eye movements could be used to monitor the impact of epilepsy on neurocognitive development and help assess the risk for poor neurobehavioural outcomes. |
Rebecca B. Price; Inez M. Greven; Greg J. Siegle; Ernst H. W. Koster; Rudi De Raedt A Novel Attention Training Paradigm Based on Operant Conditioning of Eye Gaze: Preliminary Findings Journal Article In: Emotion, vol. 16, no. 1, pp. 110–116, 2016. @article{Price2016, |
Wendy Troop-Gordon; Robert D. Gordon; Laura Vogel-Ciernia; Elizabeth Ewing Lee; Kari J. Visconti Visual attention to dynamic scenes of ambiguous provocation and children's aggressive behavior Journal Article In: Journal of Clinical Child and Adolescent Psychology, pp. 1–16, 2016. @article{TroopGordon2016,Research on biases in attention related to children's aggression has yielded mixed results. Some research suggests that inattention to social cues and reliance on maladaptive social schemas underlie aggression. Other research suggests that maladaptive social schemas lead aggressive individuals to attend to nonhostile cues. The primary objective of this study was to test the proposition that aggression is related to delayed attention to cues followed by selective attention to nonhostile cues after the provocation has occurred. A second objective was to test whether these biases are associated with aggression only when children hold negative social schemas. The eye fixations of 70 children (34 boys, 36 girls; Mage = 11.71 years) were monitored with an eye tracker as they watched video clips of child actors portraying scenes of ambiguous provocation. Aggression was measured using peer-, teacher-, and parent-reports, and children completed a measure of antisocial and prosocial peer beliefs. Aggressive behavior was associated with greater time until fixation on the provocateur among youth who held antisocial peer beliefs. Aggression was also associated with greater time until fixation on an actor displaying empathy for the victim among children reporting low levels of prosocial peer beliefs. After the provocation, aggression was associated with suppressed attention to an amused peer among children who held negative peer beliefs. Increasing attention to cues in a scene of ambiguous provocation, in conjunction with fostering more positive beliefs about peers, may be effective in reducing hostile responding among aggressive youth. |
Melissa L. -H. Võ; Avigael M. Aizenman; Jeremy M. Wolfe You think you know where you looked? You better look again Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 10, pp. 1477–1481, 2016. @article{Vo2016,People are surprisingly bad at knowing where they have looked in a scene. We tested participants' ability to recall their own eye movements in 2 experiments using natural or artificial scenes. In each experiment, participants performed a change-detection (Exp. 1) or search (Exp. 2) task. On 25% of trials, after 3 seconds of viewing the scene, participants were asked to indicate where they thought they had just fixated. They responded by making mouse clicks on 12 locations in the unchanged scene. After 135 trials, observers saw 10 new scenes and were asked to put 12 clicks where they thought someone else would have looked. Although observers located their own fixations more successfully than a random model, their performance was no better than when they were guessing someone else's fixations. Performance with artificial scenes was worse, though judging one's own fixations was slightly superior. Even after repeating the fixation-location task on 30 scenes immediately after scene viewing, performance was far from the prediction of an ideal observer. Memory for our own fixation locations appears to add next to nothing beyond what common sense tells us about the likely fixations of others. These results have important implications for socially important visual search tasks. For example, a radiologist might think he has looked at "everything" in an image, but eye tracking data suggest that this is not so. Such shortcomings might be avoided by providing observers with better insight into where they have looked. |
Liu D. Liu; Ralf M. Haefner; Christopher C. Pack A neural basis for the spatial suppression of visual motion perception Journal Article In: eLife, vol. 5, pp. 1–20, 2016. @article{Liu2016c,In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. |
