EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2018
Maria J. Barraza-Bernal; Katharina Rifai; Siegfried Wahl The retinal locus of fixation in simulations of progressing central scotomas Journal Article In: Journal of Vision, vol. 18, no. 1, pp. 1–12, 2018. @article{BarrazaBernal2018, Patients with central scotoma use a preferred retinal locus (PRL) of fixation to perform visual tasks. Some of the conditions that cause central scotoma are progressive, and as a consequence, the PRL needs to be adjusted throughout the progression. The present study investigates the peripheral locus of fixation in subjects under a simulation of progressive central scotoma. Five normally sighted subjects participated in the study. A foveally centered mask of varying size was presented to simulate the scotoma. Initially, subjects developed a peripheral locus of fixation under simulation of a 6° scotoma, which was used as a baseline. The progression was simulated in two separate conditions: a gradual progression and an abrupt progression. In the gradual progression, the diameter of the scotoma increased by a fixed amount of either 1° or 2° of visual angle; thus, scotomas of 8°, 10°, and 11° of visual angle were simulated. In the abrupt progression, the diameter was adjusted individually to span the area of the visual field used by the current peripheral locus of fixation. Subjects located the peripheral locus of fixation along the same meridian under simulation of scotoma progression. Furthermore, no differences between the fixation stability of the baseline locus of fixation and the gradual progression locus of fixation were found, whereas in abrupt progression, the fixation stability decreased significantly. These results provide first insight into fixation behavior in a progressive scotoma and may contribute to the development of training tools for patients with progressive central maculopathies. |
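The abstract reports fixation stability without naming the metric; a common choice in the PRL literature is the bivariate contour ellipse area (BCEA). A minimal sketch of that computation (the function name, containment probability, and inputs are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def bcea(x_deg, y_deg, p=0.68):
    """Bivariate contour ellipse area (deg^2): the area of the ellipse
    expected to contain proportion p of fixation positions.
    Smaller values indicate more stable fixation."""
    k = -np.log(1.0 - p)                   # chi-square scaling for coverage p
    sx = np.std(x_deg, ddof=1)             # horizontal fixation SD (deg)
    sy = np.std(y_deg, ddof=1)             # vertical fixation SD (deg)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]  # horizontal-vertical correlation
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
```

Smaller BCEA values indicate more stable fixation; comparing BCEA across baseline, gradual-progression, and abrupt-progression conditions would mirror the comparison described above.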
Julian Basanovic; Lies Notebaert; Patrick J. F. Clarke; Colin MacLeod; Philippe Jawinski; Nigel T. M. Chen Inhibitory attentional control in anxiety: Manipulating cognitive load in an antisaccade task Journal Article In: PLoS ONE, vol. 13, no. 10, pp. e0205720, 2018. @article{Basanovic2018, Theorists have proposed that heightened anxiety vulnerability is characterised by reduced attentional control performance and have made the prediction in turn that elevating cognitive load will adversely impact attentional control performance for high anxious individuals to a greater degree than low anxious individuals. Critically however, existing attempts to test this prediction have been limited in their methodology and have presented inconsistent findings. Using a methodology capable of overcoming the limitations of previous research, the present study sought to investigate the effect of manipulating cognitive load on inhibitory attentional control performance of high anxious and low anxious individuals. High and low trait anxious participants completed an antisaccade task, requiring the execution of prosaccades towards, or antisaccades away from, emotionally toned stimuli while eye movements were recorded. Participants completed the antisaccade task under conditions that concurrently imposed a lesser cognitive load, or greater cognitive load. Analysis of participants' saccade latencies revealed high trait anxious participants demonstrated generally poorer inhibitory attentional control performance as compared to low trait anxious participants. Furthermore, conditions imposing greater cognitive load, as compared to lesser cognitive load, resulted in enhanced inhibitory attentional control performance across participants generally. 
Crucially however, analyses did not reveal an effect of cognitive load condition on anxiety-linked differences in inhibitory attentional control performance, indicating that elevating cognitive load did not adversely impact attentional control performance for high anxious individuals to a greater degree than low anxious individuals. Hence, the present findings are inconsistent with predictions made by some theorists and are in contrast to the findings of earlier investigations. These findings further highlight the need for research into the relationship between anxiety, attentional control, and cognitive load. |
Jonathan P. Batten; Tim J. Smith Saccades predict and synchronize to visual rhythms irrespective of musical beats Journal Article In: Visual Cognition, vol. 26, no. 9, pp. 695–718, 2018. @article{Batten2018, Music has been shown to entrain movement. One of the body's most frequent movements, saccades, are arguably subject to a timer that may also be susceptible to musical entrainment. We developed a continuous and highly controlled visual search task and varied the timing of the search target presentation: it was either gaze-contingent, tap-contingent, or visually timed. We found: (1) explicit control of saccadic timing is limited to gross duration variations and imprecisely synchronized; (2) saccadic timing does not implicitly entrain to musical beats, even when closely aligned in phase; (3) eye movements predict visual onsets produced by motor movements (finger taps) and externally timed sequences, beginning fixation prior to visual onset; (4) eye movement timing can be rhythmic, synchronizing to both motor-produced and externally timed visual sequences, each unaffected by musical beats. These results provide evidence that saccadic timing is sensitive to the temporal demands of visual tasks and impervious to influence from musical beats. |
Vanessa Beanland; Choo Hong Tan; Bruce K. Christensen The unexpected killer: effects of stimulus threat and negative affectivity on inattentional blindness Journal Article In: Cognition and Emotion, vol. 32, no. 6, pp. 1374–1381, 2018. @article{Beanland2018, Inattentional blindness (IB) occurs when observers fail to detect unexpected objects or events. Despite the adaptive importance of detecting unexpected threats, relatively little research has examined how stimulus threat influences IB. The current study was designed to explore the effects of stimulus threat on IB. Past research has also demonstrated that individuals with elevated negative affectivity have an attentional bias towards threat-related stimuli; therefore, the current study also examined whether state and trait levels of negative affectivity predicted IB for threat-related stimuli. One hundred and eleven participants (87 female, aged 17–40 years) completed an IB task that included both threat-related and neutral unexpected stimuli, while their eye movements were tracked. Participants were significantly more likely to detect the threatening stimulus (19%) than the neutral stimulus (11%) p =.035, odds ratio (OR) = 4.0, 95% confidence interval OR [1.13, 14.17]. Neither state nor trait levels of negative affectivity were significantly associated with IB. These results suggest observers are more likely to detect threat-related unexpected objects, consistent with the threat superiority effect observed in other paradigms. However, most observers were blind to both unexpected stimuli, highlighting the profound influence of expectations and task demands on our ability to perceive even potentially urgent and life-threatening information. |
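For readers unfamiliar with the reported statistic: the OR = 4.0 with 95% CI [1.13, 14.17] comes from the authors' own analysis, which likely respects the paired within-subject design; a naive unpaired 2 x 2 table built from the raw percentages gives a smaller value. This generic sketch shows the standard odds-ratio calculation with a Wald confidence interval (the cell counts are hypothetical, chosen only to approximate 19% and 11% detection out of 111 participants):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = threat detected, b = threat missed,
    c = neutral detected, d = neutral missed."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lo, hi

# hypothetical counts approximating 19% and 11% detection out of 111
result = odds_ratio_ci(21, 90, 12, 99)
```

The hypothetical counts yield an OR near 1.9, illustrating why the published OR of 4.0 must come from the authors' paired analysis rather than the raw percentages alone.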
Melissa R. Beck; Rebecca R. Goldstein; Amanda E. Lamsweerde; Justin M. Ericson Attending globally or locally: Incidental learning of optimal visual attention allocation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 44, no. 3, pp. 387–398, 2018. @article{Beck2018, Attention allocation determines the information that is encoded into memory. Can participants learn to optimally allocate attention based on what types of information are most likely to change? The current study examined whether participants could incidentally learn that changes to either high spatial frequency (HSF) or low spatial frequency (LSF) Gabor patches were more probable and to use this incidentally learned probability information to bias attention during encoding. Participants detected changes in orientation in arrays of 6 Gabor patches: 3 HSF and 3 LSF. For half of the participants, an HSF patch changed orientation on 75% of the trials, and for the other half, an LSF patch changed orientation on 75% of the trials. Experiment 1 demonstrated a change probability effect and an attention allocation effect. Specifically, change detection performance was highest for the probable-change type, and participants learned to use a global spread of attention (fixating between Gabor patches) when LSF patches were most likely to change and to use a local allocation of attention (fixating directly on Gabor patches) when HSF patches were most likely to change. Experiments 2 and 3 replicated these effects and demonstrated that an internal monitoring system is sufficient for these effects. That is, the effects do not require explicit feedback or point rewards. This study demonstrates that incidental learning of probability information can affect the allocation of attention during encoding and can therefore affect what information is stored in visual working memory. |
Noah C. Benson; Keith W. Jamison; Michael J. Arcaro; An T. Vu; Matthew F. Glasser; Timothy S. Coalson; David C. Van Essen; Essa Yacoub; Kamil Ugurbil; Jonathan Winawer; Kendrick N. Kay The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis Journal Article In: Journal of Vision, vol. 18, no. 13, pp. 1–22, 2018. @article{Benson2018, About a quarter of human cerebral cortex is dedicated mainly to visual processing. The large-scale spatial organization of visual cortex can be measured with functional magnetic resonance imaging (fMRI) while subjects view spatially modulated visual stimuli, also known as "retinotopic mapping." One of the datasets collected by the Human Connectome Project involved ultra-high-field (7 Tesla) fMRI retinotopic mapping in 181 healthy young adults (1.6-mm resolution), yielding the largest freely available collection of retinotopy data. Here, we describe the experimental paradigm and the results of model-based analysis of the fMRI data. These results provide estimates of population receptive field position and size. Our analyses include both results from individual subjects as well as results obtained by averaging fMRI time series across subjects at each cortical and subcortical location and then fitting models. Both the group-average and individual-subject results reveal robust signals across much of the brain, including occipital, temporal, parietal, and frontal cortex as well as subcortical areas. The group-average results agree well with previously published parcellations of visual areas. In addition, split-half analyses show strong within-subject reliability, further demonstrating the high quality of the data. We make publicly available the analysis results for individual subjects and the group average, as well as associated stimuli and analysis code.
These resources provide an opportunity for studying fine-scale individual variability in cortical and subcortical organization and the properties of high-resolution fMRI. In addition, they provide a set of observations that can be compared with other Human Connectome Project measures acquired in these same participants. |
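The population receptive field (pRF) analysis described above models each voxel's response as the overlap between the stimulus aperture and an isotropic 2D Gaussian with a center position and size. A minimal sketch of the linear core of such a forward model (the grid layout, function name, and parameters are illustrative; the actual HCP analysis adds further components such as a compressive nonlinearity and a hemodynamic model):

```python
import numpy as np

def prf_prediction(apertures, x0, y0, sigma):
    """Predicted response time course of one voxel under a linear pRF model.
    apertures: (T, H, W) binary stimulus masks on a visual-field grid;
    x0, y0, sigma: pRF center and size in the same (deg) units as the grid."""
    H, W = apertures.shape[1:]
    ys, xs = np.mgrid[0:H, 0:W]
    xs = xs - (W - 1) / 2.0          # pixel columns -> deg, 0 at fixation
    ys = (H - 1) / 2.0 - ys          # pixel rows -> deg, positive = up
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()                     # unit-volume Gaussian pRF
    return (apertures * g).sum(axis=(1, 2))
```

Fitting a pRF then amounts to searching for the (x0, y0, sigma) whose predicted time course best matches the measured fMRI signal; position and size estimates like those in the dataset above fall out of that search.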
Giacomo Benvenuti; Yuzhi Chen; Charu Ramakrishnan; Karl Deisseroth; Wilson S. Geisler; Eyal Seidemann Scale-invariant visual capabilities explained by topographic representations of luminance and texture in primate V1 Journal Article In: Neuron, vol. 100, no. 6, pp. 1504–1512.e4, 2018. @article{Benvenuti2018, Benvenuti et al. describe a novel retinotopic representation of low-spatial-frequency luminance stimuli in V1 of behaving macaques. This distributed representation could solve the “aperture problem” for computation of orientation of low-spatial-frequency stimuli by single V1 neurons. |
Akitoshi Ogawa; Atsushi Ueshima; Keigo Inukai; Tatsuya Kameda Deciding for others as a neutral party recruits risk-neutral perspective-taking: Model-based behavioral and fMRI experiments Journal Article In: Scientific Reports, vol. 8, pp. 12857, 2018. @article{Ogawa2018, Risky decision making for others is ubiquitous in our societies. Whereas financial decision making for oneself induces strong concern about the worst outcome (maximin concern) as well as the expected value, behavioral and neural characteristics of decision making for others are less well understood. We conducted behavioral and functional magnetic resonance imaging (fMRI) experiments to examine the neurocognitive underpinnings of risky decisions for an anonymous other, using decisions for self as a benchmark. We show that, although the maximin concern affected both types of decisions equally strongly, decision making for others recruited a more risk-neutral computational mechanism than decision making for self. Specifically, participants exhibited more balanced information search when choosing a risky option for others. Activity of right temporoparietal junction (rTPJ, associated with cognitive perspective taking) was parametrically modulated by options' expected values in decisions for others, and by the minimum amounts in decisions for self. Furthermore, individual differences in self-reported empathic concern modified these attentional and neural processes. Overall, these results indicate that the typical maximin concern is attenuated in a risk-neutral direction in decisions for others as compared to self. We conjecture that, given others' diverse preferences, deciding as a neutral party may cognitively recruit such risk-neutrality. |
Marissa Ogren; Joseph M. Burling; Scott P. Johnson Family expressiveness relates to happy emotion matching among 9-month-old infants Journal Article In: Journal of Experimental Child Psychology, vol. 174, pp. 29–40, 2018. @article{Ogren2018, Perceiving and understanding the emotions of those around us is an imperative skill to develop early in life. An infant's family environment provides most of their emotional exemplars in early development. However, the relation between the early development of emotion perception and family expressiveness remains understudied. To investigate this potential link to early emotion perception development, we examined 38 infants at 9 months of age. We assessed infants' ability to match emotions across facial and vocal modalities using an intermodal matching paradigm for angry–neutral, happy–neutral, and sad–neutral pairings. We also attained family expressiveness information via parent report. Our results indicate a significant positive relation between emotion matching and family expressiveness specific to the happy–neutral condition. However, we found no evidence for emotion matching for the infants as a group in any of the three conditions. These results suggest that family expressiveness does relate to emotion matching for the earliest developing emotional category among 9-month-old infants and that emotion matching with multiple emotions at this age is a challenging task. |
Sven Ohl; Martin Rolfs Saccadic selection of stabilized items in visuospatial working memory Journal Article In: Consciousness and Cognition, vol. 64, pp. 32–44, 2018. @article{Ohl2018, Saccadic eye movements prioritize the memory of visual stimuli that had previously been seen at the saccade target. In two experiments, we assessed whether this influence is limited to fragile memory traces or if saccades can also affect consolidated representations in visuospatial working memory (VSWM). To interfere with fragile memory traces, we presented visual masks at different delays following the offset of a memory array and simultaneously prompted participants to generate a saccade to one location. Masking was very effective: Memory performance was lowest right after the disappearance of the memory array and gradually increased for later mask onsets. In spite of that, memory was best for stimuli congruent with the saccade target. This advantage was largest at shortest delays and then decreased over the course of a second. Insofar as only consolidated representations survive interference from masks, we conclude that saccades exert spatially selective biases on stable representations in VSWM. |
Gouki Okazawa; Long Sha; Braden A. Purcell; Roozbeh Kiani Psychophysical reverse correlation reflects both sensory and decision-making processes Journal Article In: Nature Communications, vol. 9, pp. 3479, 2018. @article{Okazawa2018, Goal directed behavior depends on both sensory mechanisms that gather information from the outside world and decision-making mechanisms that select appropriate behavior based on that sensory information. Psychophysical reverse correlation is commonly used to quantify how fluctuations of sensory stimuli influence behavior and is generally believed to uncover the spatiotemporal weighting functions of sensory processes. Here we show that reverse correlations also reflect decision-making processes and can deviate significantly from the true sensory filters. Specifically, sensory and motor delays and trial-to-trial variability of decision times cause systematic distortions in psychophysical kernels that should not be attributed to sensory mechanisms. Similarly, changes of decision bound and mechanisms of evidence integration systematically alter psychophysical reverse correlations. We show that ignoring details of the decision-making process results in misinterpretation of reverse correlations, but proper use of these details turns reverse correlation into a powerful method for studying both sensory and decision-making mechanisms. |
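The distortion the authors describe (bounded evidence accumulation reshaping psychophysical kernels even when the sensory weighting is flat) is easy to reproduce in simulation. A toy sketch with arbitrary assumed parameters (bound height, frame count, and trial number are not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames, bound = 2000, 50, 5.0

# zero-mean stimulus fluctuations: one value per frame per trial
stim = rng.normal(0.0, 1.0, size=(n_trials, n_frames))

# simulated observer: perfect integrator with flat sensory weights that
# commits to a choice at the first bound crossing (or at the last frame)
evidence = np.cumsum(stim, axis=1)
hit = np.abs(evidence) >= bound
t_dec = np.where(hit.any(axis=1), hit.argmax(axis=1), n_frames - 1)
choice = evidence[np.arange(n_trials), t_dec] > 0

# psychophysical kernel: choice-conditioned difference of mean fluctuations
kernel = stim[choice].mean(axis=0) - stim[~choice].mean(axis=0)
```

Even though every pre-decision frame is weighted equally here, the recovered kernel decays over time because late frames arrive after many trials have already hit the bound — exactly the kind of decision-stage artifact the paper warns against attributing to sensory filters.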
Timothy D. Oleskiw; Amy Nowack; Anitha Pasupathy Joint coding of shape and blur in area V4 Journal Article In: Nature Communications, vol. 9, pp. 466, 2018. @article{Oleskiw2018, Edge blur, a prevalent feature of natural images, is believed to facilitate multiple visual processes including segmentation and depth perception. Furthermore, image descriptions that explicitly combine blur and shape information provide complete representations of naturalistic scenes. Here we report the first demonstration of blur encoding in primate visual cortex: neurons in macaque V4 exhibit tuning for both object shape and boundary blur, with observed blur tuning not explained by potential confounds including stimulus size, intensity, or curvature. A descriptive model wherein blur selectivity is cast as a distinct neural process that modulates the gain of shape-selective V4 neurons explains observed data, supporting the hypothesis that shape and blur are fundamental features of a sufficient neural code for natural image representation in V4. |
Bettina Olk; Alina Dinu; David J. Zielinski; Regis Kopper Measuring visual search and distraction in immersive virtual reality Journal Article In: Royal Society Open Science, vol. 5, pp. 1–15, 2018. @article{Olk2018, An important issue of psychological research is how experiments conducted in the laboratory or theories based on such experiments relate to human performance in daily life. Immersive virtual reality (VR) allows control over stimuli and conditions at increased ecological validity. The goal of the present study was to accomplish a transfer of traditional paradigms that assess attention and distraction to immersive VR. To further increase ecological validity we explored attentional effects with daily objects as stimuli instead of simple letters. Participants searched for a target among distractors on the countertop of a virtual kitchen. Target–distractor discriminability was varied and the displays were accompanied by a peripheral flanker that was congruent or incongruent to the target. Reaction time was slower when target–distractor discriminability was low and when flankers were incongruent. The results were replicated in a second experiment in which stimuli were presented on a computer screen in two dimensions. The study demonstrates the successful translation of traditional paradigms and manipulations into immersive VR and lays a foundation for future research on attention and distraction in VR. Further, we provide an outline for future studies that should use features of VR that are not available in traditional laboratory research. |
Katya Olmos-Solis; Anouk Mariette Loon; Christian N. L. Olivers Pupil dilation reflects task relevance prior to search Journal Article In: Journal of Cognition, vol. 1, no. 1, pp. 1–15, 2018. @article{OlmosSolis2018, When observers search for a specific target, it is assumed that they activate a representation of the task relevant object in visual working memory (VWM). This representation – often referred to as the template – guides attention towards matching visual input. In two experiments we tested whether the pupil response can be used to differentiate stimuli that match the task-relevant template from irrelevant input. Observers memorized a target color to be searched for in a multi-color visual search display, presented after a delay period. In Experiment 1, one color appeared at the start of the trial, which was then automatically the search template. In Experiment 2, two colors were presented, and a retro-cue indicated which of these was relevant for the upcoming search task. Crucially, before the search display appeared, we briefly presented one colored probe stimulus. The probe could match either the relevant-template color, the non-cued color (irrelevant), or be a new color not presented in the trial. We measured the pupil response to the probe as a signature of task relevance. Experiment 1 showed significantly smaller pupil size in response to probes matching the search template than for irrelevant colors. Experiment 2 replicated the template matching effect and allowed us to rule out that it was solely due to repetition priming. Taken together, we show that the pupil responds selectively to participants' target template prior to search. |
Elena V. Orekhova; Olga V. Sysoeva; Justin F. Schneiderman; Sebastian Lundström; Ilia A. Galuta; Dzerasa E. Goiaeva; Andrey O. Prokofyev; Bushra Riaz; Courtney Keeler; Nouchine Hadjikhani; Christopher Gillberg; Tatiana A. Stroganova Input-dependent modulation of MEG gamma oscillations reflects gain control in the visual cortex Journal Article In: Scientific Reports, vol. 8, pp. 8451, 2018. @article{Orekhova2018, Gamma-band oscillations arise from the interplay between neural excitation (E) and inhibition (I) and may provide a non-invasive window into the state of cortical circuitry. A bell-shaped modulation of gamma response power by increasing the intensity of sensory input was observed in animals and is thought to reflect neural gain control. Here we sought to find a similar input-output relationship in humans with MEG via modulating the intensity of a visual stimulation by changing the velocity/temporal-frequency of visual motion. In the first experiment, adult participants observed static and moving gratings. The frequency of the MEG gamma response monotonically increased with motion velocity whereas power followed a bell-shape. In the second experiment, on a large group of children and adults, we found that despite drastic developmental changes in frequency and power of gamma oscillations, the relative suppression at high motion velocities was scaled to the same range of values across the life-span. In light of animal and modeling studies, the modulation of gamma power and frequency at high stimulation intensities characterizes the capacity of inhibitory neurons to counterbalance increasing excitation in visual networks. Gamma suppression may thus provide a non-invasive measure of inhibitory-based gain control in the healthy and diseased brain. |
Jessica L. O'Rielly; Anna Ma-Wyatt Changes to online control and eye-hand coordination with healthy ageing Journal Article In: Human Movement Science, vol. 59, pp. 244–257, 2018. @article{ORielly2018, Goal directed movements are typically accompanied by a saccade to the target location. Online control plays an important part in correction of a reach, especially if the target or goal of the reach moves during the reach. While there are notable changes to visual processing and motor control with healthy ageing, there is limited evidence about how eye-hand coordination during online updating changes with healthy ageing. We sought to quantify differences between older and younger people for eye-hand coordination during online updating. Participants completed a double step reaching task implemented under time pressure. The target perturbation could occur 200, 400 and 600 ms into a reach. We measured eye position and hand position throughout the trials to investigate changes to saccade latency, movement latency, movement time, reach characteristics and eye-hand latency and accuracy. Both groups were able to update their reach in response to a target perturbation that occurred at 200 or 400 ms into the reach. All participants demonstrated incomplete online updating for the 600 ms perturbation time. Saccade latencies, measured from the first target presentation, were generally longer for older participants. Older participants had significantly increased movement times but there was no significant difference between groups for touch accuracy. We speculate that the longer movement times enable the use of new visual information about the target location for online updating towards the end of the movement. Interestingly, older participants also produced a greater proportion of secondary saccades within the target perturbation condition and had generally shorter eye-hand latencies. 
This is perhaps a compensatory mechanism as there was no significant group effect on final saccade accuracy. Overall, the pattern of results suggests that online control of movements may be qualitatively different in older participants. |
Tanya Orlov; Ehud Zohary Object representations in human visual cortex formed through temporal integration of dynamic partial shape views Journal Article In: Journal of Neuroscience, vol. 38, no. 3, pp. 659–678, 2018. @article{Orlov2018, We typically recognize visual objects, by utilizing the spatial layout of their parts, simultaneously present on the retina. Thus, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can faithfully represent shape in such conditions. However, sometimes, integration over time is required to determine object shape. To study shape extraction through temporal integration of successive partial-shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant, at the same retinal location. Yet, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel-pattern analysis we searched for brain regions that encode temporally-integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit-orientation. We show that slit-invariant shape information is most accurate in LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape-slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was dramatically reduced. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space, as assessed by perceptual similarity-judgment tests. Thus, LOC is likely to mediate temporal integration of slit-dependent shape-views, generating a slit-invariant whole-shape percept.
These findings provide strong evidence for a global encoding of shape in LOC, regardless of integration processes required to generate the shape percept. |
Eduard Ort; Johannes J. Fahrenfort; Christian N. L. Olivers Lack of free choice reveals the cost of multiple-target search within and across feature dimensions Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 8, pp. 1904–1917, 2018. @article{Ort2018, Having to look for multiple targets typically results in switch costs. However, using a gaze-contingent eyetracking paradigm with multiple color-defined targets, we have recently shown that the emergence of switch costs depends on whether observers can choose a target or a target is being imposed upon them. Here, using a similar paradigm, we tested whether these findings generalize to the situation in which targets are specified across different feature dimensions. We instructed participants to simultaneously search for, and then fixate, either of two possible targets presented among distractors. The targets were defined as either two colors, two shapes, or one color and one shape. In one condition, only one of the two targets was available in each display, so that the choice was imposed. In the other condition, both targets would be present in each display, which gave observers free choice over what to search for. Consistent with our earlier findings, switch costs emerged when targets were imposed, whereas no switch costs emerged when target selection was free, irrespective of the dimension in which the targets were defined. The results are consistent with the operation of different modes of control in multiple-target search, with switch costs emerging whenever reactive control is required and being reduced or absent when displays allow for proactive control. |
Julie Ouerfelli-Ethier; Basma Elsaeid; Julie Desgroseilliers; Douglas P. Munoz; Gunnar Blohm; Aarlenne Zein Khan Anti-saccades predict cognitive functions in older adults and patients with Parkinson's disease Journal Article In: PLoS ONE, vol. 13, no. 11, pp. e0207589, 2018. @article{OuerfelliEthier2018, A major component of cognitive control is the ability to act flexibly in the environment by either behaving automatically or inhibiting an automatic behaviour. The interleaved pro/anti-saccade task measures cognitive control because the task relies on one's abilities to switch flexibly between pro and anti-saccades, and inhibit automatic saccades during anti-saccade trials. Decline in cognitive control occurs during aging or neurological illnesses such as Parkinson's disease (PD), and indicates decline in other cognitive abilities, such as memory. However, little is known about the relationship between cognitive control and other cognitive processes. Here we investigated whether anti-saccade performance can predict decision-making, visual memory, and pop-out and serial visual search performance. We tested 34 younger adults, 22 older adults, and 20 PD patients on four tasks: an interleaved pro/anti-saccade, a spatial visual memory, a decision-making and two types of visual search (pop-out and serial) tasks. Anti-saccade performance was a good predictor of decision-making and visual memory abilities for both older adults and PD patients, while it predicted visual search performance to a larger extent in PD patients. Our results thus demonstrate the suitability of the interleaved pro/anti-saccade task as a cognitive marker of cognitive control in aging and PD populations. |
Parisa Abedi Khoozani; Gunnar Blohm Neck muscle spindle noise biases reaches in a multisensory integration task Journal Article In: Journal of Neurophysiology, vol. 120, no. 3, pp. 893–909, 2018. @article{AbediKhoozani2018, Reference frame transformations (RFTs) are crucial components of sensorimotor transformations in the brain. Stochasticity in RFTs has been suggested to add noise to the transformed signal due to variability in transformation parameter estimates (e.g., angle) as well as the stochastic nature of computations in spiking networks of neurons. Here, we varied the RFT angle together with the associated variability and evaluated the behavioral impact in a reaching task that required variability-dependent visual-proprioceptive multisensory integration. Crucially, reaches were performed with the head either straight or rolled 30° to either shoulder, and we also applied neck loads of 0 or 1.8 kg (left or right) in a 3x3 design, resulting in different combinations of estimated head roll angle magnitude and variance required in RFTs. A novel 3D stochastic model of multisensory integration across reference frames was fitted to the data and captured our main behavioral findings: (1) neck load biased head angle estimation across all head roll orientations, resulting in systematic shifts in reach errors; (2) increased neck muscle tone led to increased reach variability, due to signal-dependent noise; (3) both head roll and neck load created larger angular errors in reaches to visual targets away from the body compared to reaches toward the body. These results show that noise in muscle spindles and stochasticity in general have a tangible effect on RFTs underlying reach planning. Since RFTs are omnipresent in the brain, our results could have implications for processes as diverse as motor control, decision making, posture/balance control, and perception. |
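The paper's core idea (noise in the estimated head-roll angle propagating through the reference frame transformation into reach variability) can be illustrated with a toy 2D rotation. A sketch with made-up numbers; the angle, its SD, and the target eccentricities are illustrative, not the study's values:

```python
import numpy as np

def rotate_with_noisy_angle(target_xy, angle_deg, angle_sd_deg, n=20000, seed=1):
    """Rotate a 2D eye-centered target through a head-roll estimate that is
    corrupted by Gaussian noise; returns the cloud of transformed positions."""
    rng = np.random.default_rng(seed)
    a = np.deg2rad(rng.normal(angle_deg, angle_sd_deg, n))  # noisy angle draws
    x, y = target_xy
    return np.stack([np.cos(a) * x - np.sin(a) * y,
                     np.sin(a) * x + np.cos(a) * y], axis=1)

# the same head-roll uncertainty applied to a near and a far target
near = rotate_with_noisy_angle((10.0, 0.0), 30.0, 5.0)
far = rotate_with_noisy_angle((20.0, 0.0), 30.0, 5.0)
```

Because the scatter produced by angular noise scales with target eccentricity, the same head-roll uncertainty yields larger positional spread for targets farther from the rotation axis, broadly consistent with finding (3) above.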
Dekel Abeles; Roy Amit; Shlomit Yuval-Greenberg Oculomotor behavior during non-visual tasks: The role of visual saliency Journal Article In: PLoS ONE, vol. 13, no. 6, pp. e0198242, 2018. @article{Abeles2018, During visual exploration or free-view, gaze positioning is largely determined by the tendency to maximize visual saliency: more salient locations are more likely to be fixated. However, when visual input is completely irrelevant for performance, such as with non-visual tasks, this saliency maximization strategy may be less advantageous and potentially even disruptive for task performance. Here, we examined whether visual saliency remains a strong driving force in determining gaze positions even in non-visual tasks. We tested three alternative hypotheses: a) That saliency is disadvantageous for non-visual tasks and therefore gaze would tend to shift away from it and towards non-salient locations; b) That saliency is irrelevant during non-visual tasks and therefore gaze would not be directed towards it but also not away from it; c) That saliency maximization is a strong behavioral drive that would prevail even during non-visual tasks. |
Vaidehi S. Natu; Jesse Gomez; Kalanit Grill-Spector; Brianna Jeska; Michael Barnett Development differentially sculpts receptive fields across early and high-level human visual cortex Journal Article In: Nature Communications, vol. 9, pp. 788, 2018. @article{Natu2018, Receptive fields (RFs) processing information in restricted parts of the visual field are a key property of visual system neurons. However, how RFs develop in humans is unknown. Using fMRI and population receptive field (pRF) modeling in children and adults, we determine where and how pRFs develop across the ventral visual stream. Here we report that pRF properties in visual field maps, from the first visual area, V1, through the first ventro-occipital area, VO1, are adult-like by age 5. However, pRF properties in face-selective and character-selective regions develop into adulthood, increasing the foveal coverage bias for faces in the right hemisphere and words in the left hemisphere. Eye-tracking indicates that pRF changes are related to changing fixation patterns on words and faces across development. These findings suggest a link between face and word viewing behavior and the differential development of pRFs across visual cortex, potentially due to competition on foveal coverage. |
Matthias Nau; Tobias Navarro Schröder; Jacob L. S. Bellmund; Christian F. Doeller Hexadirectional coding of visual space in human entorhinal cortex Journal Article In: Nature Neuroscience, vol. 21, no. 2, pp. 188–190, 2018. @article{Nau2018, Entorhinal grid cells map the local environment, but their involvement beyond spatial navigation remains elusive. We examined human functional MRI responses during a highly controlled visual tracking task and show that entorhinal cortex exhibited a sixfold rotationally symmetric signal encoding gaze direction. Our results provide evidence for a grid-like entorhinal code for visual space and suggest a more general role of the entorhinal grid system in coding information along continuous dimensions. |
Claire K. Naughtin; Jason B. Mattingley; Angela D. Bender; Paul E. Dux Decoding early and late cortical contributions to individuation of attended and unattended objects Journal Article In: Cortex, vol. 99, pp. 45–54, 2018. @article{Naughtin2018, To isolate a visual stimulus as a unique object with a specific spatial location and time of occurrence, it is necessary to first register (individuate) the stimulus as a distinct perceptual entity. Recent investigations into the neural substrates of object individuation have suggested it is subserved by a distributed neural network, but previous manipulations of individuation load have introduced extraneous visual confounds, which might have yielded ambiguous findings, particularly in early cortical areas. Furthermore, while it has been assumed that selective attention is required for object individuation, there is no definitive evidence on the brain regions recruited for attended and ignored objects. Here we addressed these issues by combining functional magnetic resonance imaging (fMRI) with a novel object-enumeration paradigm in which to-be-individuated objects were defined by illusory contours, such that the physical elements of the display remained constant across individuation conditions. Multi-voxel pattern analyses revealed that attended objects modulated patterns of activity in early visual cortex, as well as frontal and parietal brain areas, as a function of object-individuation load. These findings suggest that object individuation recruits both early and later cortical areas, consistent with theoretical accounts proposing that this operation acts at the junction of feed-forward and feedback processing stages in visual analysis. We also found dissociations between brain regions involved in individuation of attended and unattended objects, suggesting that voluntary spatial attention influences the brain regions recruited for this process. |
Gavin Jun Peng Ng; Alejandro Lleras; Simona Buetti Fixed-target efficient search has logarithmic efficiency with and without eye movements Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 7, pp. 1752–1762, 2018. @article{Ng2018, Stage 1 processing in visual search (e.g., efficient search) has long been thought to be unaffected by factors such as set size or lure–distractor similarity (or at least to be only minimally affected). Recent research from Buetti, Cronin, Madison, Wang, and Lleras (Journal of Experimental Psychology: General, 145, 672–707, 2016) showed that in efficient visual search with a fixed target, reaction times increase logarithmically as a function of set size and, further, that the slope of these logarithmic functions is modulated by target–distractor similarity. This has led to the proposal that the cognitive architecture of Stage 1 processing is parallel, of unlimited capacity, and exhaustive in nature. Such an architecture produces reaction time functions that increase logarithmically with set size (as opposed to being unaffected by it). However, in the previous studies, eye movements were not monitored. It is thus possible that the logarithmicity of the reaction time functions emerged simply as an artifact of eye movements rather than as a reflection of the underlying cognitive architecture. Here we ruled out the possibility that eye movements resulted in the observed logarithmic functions, by asking participants to keep their eyes at fixation while completing fixed-target efficient visual search tasks. The logarithmic RT functions still emerged even when participants were not allowed to make eye movements, thus providing further support for our proposal. Additionally, we found that search efficiency is slightly improved when eye movements are restricted and lure–target similarity is relatively high. |
Ewa Niechwiej-Szwedo; Anthony Tapper; David A. Gonzalez; Ryan M. Bradley; Robin Duncan Saccade latency delays in young apolipoprotein E (APOE) epsilon 4 carriers Journal Article In: Behavioural Brain Research, vol. 353, pp. 91–97, 2018. @article{NiechwiejSzwedo2018, The apolipoprotein E (APOE) epsilon 4 isoform has been associated with a significantly greater risk of developing late onset Alzheimer's disease (AD). However, the negative effects of APOE-ε4 allele on cognitive function vary across the lifespan: reduced memory and executive function have been found in older individuals but, paradoxically, young APOE-ε4 carriers perform better on cognitive tests and show higher neural efficiency. This study aimed to assess the association between APOE genotype and saccade latency using a prosaccade and antisaccade task in young individuals (N = 97, age: 17–35 years). Results showed that prosaccade latency was significantly delayed in a group of ε4 carriers in comparison to non-carriers, which was due to a lower rate of signal accumulation rather than a change in the criterion threshold. In contrast, there was no significant genotype difference for antisaccade latency in this young cohort. These results indicate that prosaccade latency may be useful in establishing the APOE behavioural phenotype, which could ultimately assist with distinguishing between normal and pathological aging. |
Atsushi Noritake; Taihei Ninomiya; Masaki Isoda Social reward monitoring and valuation in the macaque brain Journal Article In: Nature Neuroscience, vol. 21, no. 10, pp. 1452–1462, 2018. @article{Noritake2018, Behaviors are influenced by rewards to both oneself and others, but the neurons and neural connections that monitor and evaluate rewards in social contexts are unknown. To address this issue, we devised a social Pavlovian conditioning procedure for pairs of monkeys. Despite being constant in amount and probability, the subjective value of forthcoming self-rewards, as indexed by licking and choice behaviors, decreased as partner-reward probability increased. This value modulation was absent when the conspecific partner was replaced by a physical object. Medial prefrontal cortex neurons selectively monitored self-reward and partner-reward information, whereas midbrain dopaminergic neurons integrated this information into a subjective value. Recordings of local field potentials revealed that responses to reward-predictive stimuli in medial prefrontal cortex started before those in dopaminergic midbrain nuclei and that neural information flowed predominantly in a medial prefrontal cortex-to-midbrain direction. These findings delineate a dedicated pathway for subjective reward evaluation in social environments. |
Caoilte Ó Ciardha; Janice Attard-Johnson; Markus Bindemann Latency-based and psychophysiological measures of sexual interest show convergent and concurrent validity Journal Article In: Archives of Sexual Behavior, vol. 47, no. 3, pp. 637–649, 2018. @article{OCiardha2018, Latency-based measures of sexual interest require additional evidence of validity, as do newer pupil dilation approaches. A total of 102 community men completed six latency-based measures of sexual interest. Pupillary responses were recorded during three of these tasks and in an additional task where no participant response was required. For adult stimuli, there was a high degree of intercorrelation between measures, suggesting that tasks may be measuring the same underlying construct (convergent validity). In addition to being correlated with one another, measures also predicted participants' self-reported sexual interest, demonstrating concurrent validity (i.e., the ability of a task to predict a more validated, simultaneously recorded, measure). Latency-based and pupillometric approaches also showed preliminary evidence of concurrent validity in predicting both self-reported interest in child molestation and viewing pornographic material containing children. Taken together, the study findings build on the evidence base for the validity of latency-based and pupillometric measures of sexual interest. |
Thomas P. O'Connell; Marvin M. Chun Predicting eye movement patterns from fMRI responses to natural scenes Journal Article In: Nature Communications, vol. 9, pp. 5159, 2018. @article{OConnell2018, Eye tracking has long been used to measure overt spatial attention, and computational models of spatial attention reliably predict eye movements to natural images. However, researchers lack techniques to noninvasively access spatial representations in the human brain that guide eye movements. Here, we use functional magnetic resonance imaging (fMRI) to predict eye movement patterns from reconstructed spatial representations evoked by natural scenes. First, we reconstruct fixation maps to directly predict eye movement patterns from fMRI activity. Next, we use a model-based decoding pipeline that aligns fMRI activity to deep convolutional neural network activity to reconstruct spatial priority maps and predict eye movements in a zero-shot fashion. We predict human eye movement patterns from fMRI responses to natural scenes, provide evidence that visual representations of scenes and objects map onto neural representations that predict eye movements, and find a novel three-way link between brain activity, deep neural network models, and behavior. |
Jayne Morriss; Eugene McSorley; Carien M. Reekum I don't know where to look: the impact of intolerance of uncertainty on saccades towards non-predictive emotional face distractors Journal Article In: Cognition and Emotion, vol. 32, no. 5, pp. 953–962, 2018. @article{Morriss2018, Attentional bias to uncertain threat is associated with anxiety disorders. Here we examine the extent to which emotional face distractors (happy, angry, and neutral) and individual differences in intolerance of uncertainty (IU) impact saccades in two versions of the "follow a cross" task. In both versions of the task, the probability of receiving an emotional face distractor was 66.7%. To increase perceived uncertainty regarding the location of the face distractors, in one of the tasks additional non-predictive cues were presented before the onset of the face distractors and target. We found no effect of IU on saccades towards non-cued face distractors. However, IU, over and above trait anxiety, impacted saccades towards face distractors preceded by non-predictive cues. Under these conditions, the eyes of individuals high in IU were pulled towards angry face distractors and away from happy face distractors overall, and the speed of this deviation of the eyes was determined by the combination of the cue and the emotion of the face. Overall, these results suggest a specific role of IU in attentional bias to threat during uncertainty. These findings highlight the potential of intolerance of uncertainty-based mechanisms to help understand anxiety disorder pathology and inform potential treatment targets. |
Pim Mostert; Anke Marit Albers; Loek Brinkman; Larisa Todorova; Peter Kok; Floris P. Lange Eye movement-related confounds in neural decoding of visual working memory representations Journal Article In: eNeuro, vol. 5, no. 4, pp. 1–14, 2018. @article{Mostert2018a, A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. A field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position however revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds. |
Pim Mostert; Sander Bosch; Nadine Dijkstra; Marcel A. J. Gerven; Floris P. Lange Differential temporal dynamics during visual imagery and perception Journal Article In: eLife, vol. 7, pp. 1–16, 2018. @article{Mostert2018, Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. |
Alex Muhl-Richardson; Katherine Cornes; Hayward J. Godwin; Matthew Garner; Julie A. Hadwin; Simon P. Liversedge; Nick Donnelly Searching for two categories of target in dynamic visual displays impairs monitoring ability Journal Article In: Applied Cognitive Psychology, vol. 32, no. 4, pp. 440–449, 2018. @article{MuhlRichardson2018, Target onsets in dynamically changing displays can be predicted when contingencies exist between different stimulus states over time. In the present study, we examined predictive monitoring when participants searched dynamically changing displays of numbers and colored squares for a color target, a number target, or both. Stimuli were presented in both contiguous and discrete spatial configurations. Response time (RT) and accuracy were recorded, and evidence of predictive monitoring was assessed via first fixations and refixations of target‐predictive stimuli. RTs to target onsets and evidence of predictive monitoring were reduced in dual‐target, relative to single‐target, conditions. Further, predictive monitoring did not speed RTs but was influenced by display configuration. In particular, discrete displays impaired monitoring for number targets in the dual‐target condition. Implications exist for real‐world visual tasks involving multiple target categories and for visual display design. |
Alex Muhl-Richardson; Hayward J. Godwin; Matthew Garner; Julie A. Hadwin; Simon P. Liversedge; Nick Donnelly Individual differences in search and monitoring for color targets in dynamic visual displays Journal Article In: Journal of Experimental Psychology: Applied, vol. 24, no. 4, pp. 564–577, 2018. @article{MuhlRichardson2018a, Many real-world tasks now involve monitoring visual representations of data that change dynamically over time. Monitoring dynamically changing displays for the onset of targets can be done in two ways: detecting targets directly, post-onset, or predicting their onset from the prior state of distractors. In the present study, participants' eye movements were measured as they monitored arrays of 108 colored squares whose colors changed systematically over time. Across three experiments, the data show that participants detected the onset of targets both directly and predictively. Experiments 1 and 2 showed that predictive detection was only possible when supported by sequential color changes that followed a scale ordered in color space. Experiment 3 included measures of individual differences in working memory capacity (WMC) and anxious affect and a manipulation of target prevalence in the search task. It found that predictive monitoring for targets, and decisions about target onsets, were influenced by interactions between individual differences in verbal and spatial WMC and intolerance of uncertainty, a characteristic that reflects worry about uncertain future events. The results have implications for the selection of individuals tasked with monitoring dynamic visual displays for target onsets. |
Romy Müller; Maarten L. Jung Partner reactions and task set selection: Compatibility is more beneficial in the stronger task Journal Article In: Acta Psychologica, vol. 185, pp. 188–202, 2018. @article{Mueller2018, Anticipated reactions performed by a partner affect action planning but it is unclear how they affect the selection of task sets. Therefore, four experiments varied partner reaction compatibility while subjects performed two tasks of asymmetric strength. Experiment 1 used an attentional selection paradigm that required reacting to endogenously or exogenously cued targets. The standard benefit for compatible partner reactions was only observed in the stronger task, whereas in the weaker task incompatible reactions reduced distractibility by irrelevant stimulus features. Experiment 2 replicated this interaction between task type and compatibility in a picture-word interference paradigm. It was hypothesized that the weaker task requires shielding the current goal from distraction by incompatible partner reactions, which leads to a generalized reduction of distractor interference. To test this hypothesis, Experiment 3 replicated Experiment 2 but forced subjects to attend to partner reactions. The interaction between task type and compatibility disappeared. To test whether task asymmetry is a necessary condition for this interaction, Experiment 4 used an attentional selection paradigm but reduced the difference in task strength. Compatibility benefits were found in both tasks. Taken together, the results suggest that while anticipated partner reactions can affect task set selection, their specific effects depend on selection demands. |
Dinavahi V. P. S. Murty; Vinay Shirhatti; Poojya Ravishankar; Supratim Ray Large visual stimuli induce two distinct gamma oscillations in primate visual cortex Journal Article In: Journal of Neuroscience, vol. 38, no. 11, pp. 2730–2744, 2018. @article{Murty2018, Recent studies have shown the existence of two gamma rhythms in the hippocampus subserving different functions but, to date, primate studies in primary visual cortex have reported a single gamma rhythm. Here, we show that large visual stimuli induce a slow gamma (25–45 Hz) in area V1 of two awake adult female bonnet monkeys and in the EEG of 15 human subjects (7 males and 8 females), in addition to the traditionally known fast gamma (45–70 Hz). The two rhythms had different tuning characteristics for stimulus orientation, contrast, drift speed, and size. Further, fast gamma had short latency, strongly entrained spikes and was coherent over short distances, reflecting short-range processing, whereas slow gamma appeared to reflect long-range processing. Together, two gamma rhythms can potentially provide better coding or communication mechanisms and a more comprehensive biomarker for diagnosis of mental disorders. |
Andriy Myachykov; Simon Garrod; Christoph Scheepers Attention and memory play different roles in syntactic choice during sentence production Journal Article In: Discourse Processes, vol. 55, no. 2, pp. 218–229, 2018. @article{Myachykov2018, Attentional control of referential information is an important contributor to the structure of discourse. We investigated how attention and memory interplay during visually situated sentence production. We manipulated speakers' attention to the agent or the patient of a described event by means of a referential or a dot visual cue. We also manipulated whether the cue was implicit or explicit by varying its duration (70 ms vs. 700 ms). Participants used passive voice more often when their attention was directed to the patient's location, regardless of the cue duration. This effect was stronger when the cue was explicit rather than implicit, especially for passive-voice sentences. Analysis of sentence onset latencies showed a divergent pattern: Latencies were shorter (1) when the agent was cued, (2) when the cue was explicit, and (3) when the (explicit) cue was referential. (1) and (2) indicate facilitated sentence planning when the cue supports a canonical (active voice) sentence frame and when speakers had more time to plan their sentences, whereas (3) suggests that sentence planning was sensitive to whether the cue was informative with regard to the cued referent. We propose that differences between production likelihoods and production latencies indicate distinct contributions from attentional focus and memorial activation to sentence planning: Although the former partly predicts syntactic choice, the latter facilitates syntactic assembly (i.e., initiating overt sentence generation). |
Nicholas E. Myers; Sammi R. Chekroud; Mark G. Stokes; Anna C. Nobre Benefits of flexible prioritization in working memory can arise without costs Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 44, no. 3, pp. 398–411, 2018. @article{Myers2018, Most recent models conceptualize working memory (WM) as a continuous resource, divided up according to task demands. When an increasing number of items need to be remembered, each item receives a smaller chunk of the memory resource. These models predict that the allocation of attention to high-priority WM items during the retention interval should be a zero-sum game: improvements in remembering cued items come at the expense of uncued items because resources are dynamically transferred from uncued to cued representations. The current study provides empirical data challenging this model. Four precision retrocueing WM experiments assessed cued and uncued items on every trial. This permitted a test for trade-off of the memory resource. We found no evidence for trade-offs in memory across trials. Moreover, robust improvements in WM performance for cued items came at little or no cost to uncued items that were probed afterward, thereby increasing the net capacity of WM relative to neutral cueing conditions. An alternative mechanism of prioritization proposes that cued items are transferred into a privileged state within a response-gating bottleneck, in which an item uniquely controls upcoming behavior. We found evidence consistent with this alternative. When an uncued item was probed first, report of its orientation was biased away from the cued orientation to be subsequently reported. We interpret this bias as competition for behavioral control in the output-driving bottleneck. Other items in WM did not bias each other, making this result difficult to explain with a shared resource model. |
Joseph C. Nah; Marco Neppi-Modona; Lars Strother; Marlene Behrmann; Sarah Shomstein Object width modulates object-based attentional selection Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 6, pp. 1375–1389, 2018. @article{Nah2018, Visual input typically includes a myriad of objects, some of which are selected for further processing. While these objects vary in shape and size, most evidence supporting object-based guidance of attention is drawn from paradigms employing two identical objects. Importantly, object size is a readily perceived stimulus dimension, and whether it modulates the distribution of attention remains an open question. Across four experiments, the size of the objects in the display was manipulated in a modified version of the two-rectangle paradigm. In Experiment 1, two identical parallel rectangles of two sizes (thin or thick) were presented. Experiments 2–4 employed identical trapezoids (each having a thin and thick end), inverted in orientation. In the experiments, one end of an object was cued and participants performed either a T/L discrimination or a simple target-detection task. Combined results show that, in addition to the standard object-based attentional advantage, there was a further attentional benefit for processing information contained in the thick versus thin end of objects. Additionally, eye-tracking measures demonstrated increased saccade precision towards thick object ends, suggesting that Fitts's Law may play a role in object-based attentional shifts. Taken together, these results suggest that object-based attentional selection is modulated by object width. |
Yaser Merrikhi; Kelsey Clark; Behrad Noudoost Concurrent influence of top-down and bottom-up inputs on correlated activity of Macaque extrastriate neurons Journal Article In: Nature Communications, vol. 9, pp. 5393, 2018. @article{Merrikhi2018, Correlations between neurons can profoundly impact the information encoding capacity of a neural population. We studied how maintenance of visuospatial information affects correlated activity in visual areas by recording the activity of neurons in visual area MT of rhesus macaques during a spatial working memory task. Correlations between MT neurons depended upon the spatial overlap between neurons' receptive fields. These correlations were influenced by the content of working memory, but the effect of a top-down memory signal differed in the presence or absence of bottom-up visual input. Neurons representing the same area of space showed increased correlations when remembering a location in their receptive fields in the absence of visual input, but decreased correlations in the presence of a visual stimulus. This set of results reveals the correlating nature of top-down signals influencing visual areas and uncovers how such a correlating signal, in interaction with bottom-up information, could enhance sensory representations. |
Andra Mihali; Allison G. Young; Lenard A. Adler; Michael M. Halassa; Wei Ji Ma A low-level perceptual correlate of behavioral and clinical deficits in ADHD Journal Article In: Computational Psychiatry, vol. 2, pp. 141–163, 2018. @article{Mihali2018, In many studies of attention-deficit hyperactivity disorder (ADHD), stimulus encoding and processing (perceptual function) and response selection (executive function) have been intertwined. To dissociate deficits in these functions, we introduced a task that parametrically varied low-level stimulus features (orientation and color) for fine-grained analysis of perceptual function. It also required participants to switch their attention between feature dimensions on a trial-by-trial basis, thus taxing executive processes. Furthermore, we used a response paradigm that captured task-irrelevant motor output (TIMO), reflecting failures to use the correct stimulus-response rule. ADHD participants had substantially higher perceptual variability than controls, especially for orientation, as well as higher TIMO. In both ADHD and controls, TIMO was strongly affected by the switch manipulation. Across participants, the perceptual variability parameter was correlated with TIMO, suggesting that perceptual deficits are associated with executive function deficits. Based on perceptual variability alone, we were able to classify participants into ADHD and controls with a mean accuracy of about 77%. Participants' self-reported General Executive Composite score correlated not only with TIMO but also with the perceptual variability parameter. Our results highlight the role of perceptual deficits in ADHD and the usefulness of computational modeling of behavior in dissociating perceptual from executive processes. |
Laura Mikula; Marilyn Jacob; Trang Tran; Laure Pisella; Aarlenne Zein Khan Spatial and temporal dynamics of presaccadic attentional facilitation before pro- and antisaccades Journal Article In: Journal of Vision, vol. 18, no. 11, pp. 1–16, 2018. @article{Mikula2018, The premotor theory of attention and the visual attention model make different predictions about the temporal and spatial allocation of presaccadic attentional facilitation. The current experiment investigated the spatial and temporal dynamics of presaccadic attentional facilitation during pro- and antisaccade planning; we investigated whether attention shifts only to the saccade goal location, to the target location, or elsewhere, and when. Participants performed a dual-task paradigm with blocks of either anti- or prosaccades and also discriminated symbols appearing at different locations before saccade onset (a measure of attentional allocation). In prosaccade blocks, discrimination on correct prosaccade trials was best at the target location, while on error trials, discrimination was best at the location opposite the target location. This pattern was reversed in antisaccade blocks, although discrimination remained high opposite the target location. In addition, we took advantage of a large range of saccadic landing positions and showed that performance across both types of saccades was best at the actual saccade goal location (where the eye will actually land) rather than at the instructed position. Finally, temporal analyses showed that discrimination remained highest at the saccade goal location from long before saccade onset until close to it, increasing slightly for antisaccades closer to saccade onset. These findings are in line with the premises of the premotor theory of attention, showing that attentional allocation is primarily linked both temporally and spatially to the saccade goal location. |
Laura Mikula; Sofia Sahnoun; Laure Pisella; Gunnar Blohm; Aarlenne Zein Khan Vibrotactile information improves proprioceptive reaching target localization Journal Article In: PLoS ONE, vol. 13, no. 7, pp. e0199627, 2018. @article{Mikula2018a, When pointing to parts of our own body (e.g., the opposite index finger), the position of the target is derived from proprioceptive signals. Consistent with the principles of multisensory integration, it has been found that participants better matched the position of their index finger when they also had visual cues about its location. Unlike vision, touch may not provide additional information about finger position in space, since fingertip tactile information theoretically remains the same irrespective of the postural configuration of the upper limb. However, since tactile and proprioceptive information are ultimately coded within the same population of posterior parietal neurons within high-level spatial representations, we nevertheless hypothesized that additional tactile information could benefit the processing of proprioceptive signals. To investigate the influence of tactile information on proprioceptive localization, we asked 19 participants to reach with the right hand towards the opposite unseen index finger (proprioceptive target). Vibrotactile stimuli were applied to the target index finger prior to movement execution. We found that participants made smaller errors and more consistent reaches following tactile stimulation. These results demonstrate that transient touch provided at the proprioceptive target improves subsequent reaching precision and accuracy. Such improvement was not observed when tactile stimulation was delivered to a distinct body part (the shoulder). This suggests a specific spatial integration of touch and proprioception at the level of high-level cortical body representations, resulting in touch improving position sense. |
M. Berk Mirza; Rick A. Adams; Christoph Mathys; Karl J. Friston Human visual exploration reduces uncertainty about the sensed world Journal Article In: PLoS ONE, vol. 13, no. 1, pp. e0190429, 2018. @article{Mirza2018, In previous papers, we introduced a normative scheme for scene construction and epistemic (visual) searches based upon active inference. This scheme provides a principled account of how people decide where to look, when categorising a visual scene based on its contents. In this paper, we use active inference to explain the visual searches of normal human subjects; enabling us to answer some key questions about visual foraging and salience attribution. First, we asked whether there is any evidence for ‘epistemic foraging'; i.e. exploration that resolves uncertainty about a scene. In brief, we used Bayesian model comparison to compare Markov decision process (MDP) models of scan-paths that did–and did not–contain the epistemic, uncertainty-resolving imperatives for action selection. In the course of this model comparison, we discovered that it was necessary to include non-epistemic (heuristic) policies to explain observed behaviour (e.g., a reading-like strategy that involved scanning from left to right). Despite this use of heuristic policies, model comparison showed that there is substantial evidence for epistemic foraging in the visual exploration of even simple scenes. Second, we compared MDP models that did–and did not–allow for changes in prior expectations over successive blocks of the visual search paradigm. We found that implicit prior beliefs about the speed and accuracy of visual searches changed systematically with experience. Finally, we characterised intersubject variability in terms of subject-specific prior beliefs. 
Specifically, we used canonical correlation analysis to see if there were any mixtures of prior expectations that could predict between-subject differences in performance; thereby establishing a quantitative link between different behavioural phenotypes and Bayesian belief updating. We demonstrated that better scene categorisation performance is consistently associated with lower reliance on heuristics; i.e., a greater use of a generative model of the scene to direct its exploration. |
Ada D. Mishler; Mark B. Neider Redundancy gain for categorical targets depends on display configuration and duration Journal Article In: Visual Cognition, vol. 26, no. 6, pp. 393–404, 2018. @article{Mishler2018, Redundancy gain is an improvement in speeded target detection when the number of targets associated with a single response is increased within a single display. The effect has been clearly demonstrated with specific targets, but it is not clear if it occurs in categorization tasks with non-identical targets. The current study tested the effect of target redundancy on speed and accuracy in a go/no-go categorization task. Targets were digits tilted 45° to the left, and were displayed in unilateral, bilateral, or central displays for either 1500 ms or 100 ms. Redundancy gain only occurred for brief targets displayed bilaterally in the upper visual field. The results indicate that redundancy gain is possible for categorization tasks with some bilateral configurations, supporting a role for interhemispheric processing in redundancy gain. Additionally, the results may indicate that processing strategies mask redundancy gain when participants can view targets for a long period of time. |
Aleksandra Mitrovic; Jürgen Goller; Pablo P. L. Tinio; Helmut Leder How relationship status and sociosexual orientation influence the link between facial attractiveness and visual attention Journal Article In: PLoS ONE, vol. 13, no. 11, pp. e0207477, 2018. @article{Mitrovic2018, Facial attractiveness captures and binds visual attention, thus affecting visual exploration of our environment. It is often argued that this effect on attention has evolutionary functions related to mating. Although plausible, such perspectives have been challenged by recent behavioral and eye-tracking studies, which have shown that the effect on attention is moderated by various sex- and goal-related variables such as sexual orientation. In the present study, we examined how relationship status and sociosexual orientation moderate the link between attractiveness and visual attention. We hypothesized that attractiveness leads to longer looks and that being single as well as being more sociosexually unrestricted, enhances the effect of attractiveness. Using an eye-tracking free-viewing paradigm, we tested 150 heterosexual men and women looking at images of urban real-world scenes depicting two people differing in facial attractiveness. Participants additionally provided attractiveness ratings of all stimuli. We analyzed the correlations between how long faces were looked at and participants' ratings of attractiveness and found that more attractive faces—especially of the other sex—were looked at longer. We also found that more sociosexually unrestricted participants who were single had the highest attractiveness-attention correlation. Our results show that evolutionary predictions cannot fully explain the attractiveness-attention correlation; perceiver characteristics and motives moderate this relationship. |
Ce Mo; Dongjun He; Fang Fang Attention priority map of face images in human early visual cortex Journal Article In: Journal of Neuroscience, vol. 38, no. 1, pp. 149–157, 2018. @article{Mo2018, Attention priority maps are topographic representations that are utilized for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, while investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas V1-V3 based on a voxel-wise population receptive field model and behaviorally characterized the priority map as the first saccadic eye movement pattern when subjects performed a face matching task, relative to the condition in which subjects performed a phase-scrambled face matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be well predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance, namely the image configuration. |
Tobias Moehler; Katja Fiehler Effects of central and peripheral cueing on perceptual and saccade performance Journal Article In: Vision Research, vol. 143, pp. 26–33, 2018. @article{Moehler2018, Previous research on the spatiotemporal dynamics of exogenous and endogenous attentional allocation during saccade preparation yielded conflicting results. We hypothesize that this can be explained by the cueing type used to orient attention in a perceptual task. We investigated the time-course of attentional allocation as a function of cueing type (central vs peripheral), spatial congruency of the cued perceptual and saccade task locations, and cue validity in a dual-task paradigm. Participants performed a visual discrimination task during saccade preparation. We found that central and peripheral cues differentially affected the time-course of attentional allocation depending on spatial congruency and cue validity. Peripheral cues quickly and transiently oriented attention to the cued location. In the congruent condition, attention was maintained by the pre-saccadic attention shift, but declined in the spatially incongruent condition. Central cues slowly oriented attention to the cued location. In the congruent condition, attention was boosted by the pre-saccadic attention shift compared to a slower increase in the spatially incongruent condition. The pre-saccadic attention shift – the automatic and obligatory shift of attention to the saccade target – observed in the invalid spatially incongruent condition was not differentially affected by the cueing type orienting attention away from it. Our results suggest that exogenous and endogenous attention is dynamically and flexibly allocated to cued locations during saccade preparation while pre-saccadic attentional resources are progressively shifted to the saccade target irrespective of the cueing type. 
We argue that attentional selection for perception represents a partially independent process in contrast to the pre-saccadic attention shift. |
Daniel S. Asfaw; Pete R. Jones; M. M. Vera; Nicholas D. Smith; David P. Crabb Does glaucoma alter eye movements when viewing images of natural scenes? A between-eye study Journal Article In: Investigative Ophthalmology & Visual Science, vol. 59, no. 8, pp. 3189–3198, 2018. @article{Asfaw2018a, PURPOSE. To investigate whether glaucoma produces measurable changes in eye movements. METHODS. Fifteen glaucoma patients with asymmetric vision loss (difference in mean deviation [MD] > 6 dB between eyes) were asked to monocularly view 120 images of natural scenes, presented sequentially on a computer monitor. Each image was viewed twice—once each with the better and worse eye. Patients' eye movements were recorded with an Eyelink 1000 eye-tracker. Eye-movement parameters were computed and compared within participants (better eye versus worse eye). These parameters included a novel measure: saccadic reversal rate (SRR), as well as more traditional metrics such as saccade amplitude, fixation counts, fixation duration, and spread of fixation locations (bivariate contour ellipse area [BCEA]). In addition, the associations of these parameters with clinical measures of vision were investigated. RESULTS. In the worse eye, saccade amplitude (p = 0.012; -13%) and BCEA (p = 0.005; -16%) were smaller, while SRR was greater (p = 0.018; +16%). There was a significant correlation between the intereye difference in BCEA, and differences in MD values (Spearman's r = 0.65; p = 0.01), while differences in SRR were associated with differences in visual acuity (Spearman's r = 0.64; p = 0.01). Furthermore, between-eye differences in BCEA were a significant predictor of between-eye differences in MD: for every 1-dB difference in MD, BCEA reduced by 6.2% (95% confidence interval, 1.6%–10.3%). CONCLUSIONS. Eye movements are altered by visual field loss, and these changes are related to changes in clinical measures. 
Eye movements recorded while passively viewing images could potentially be used as biomarkers for visual field damage. |
Carolina Astudillo; Kristofher Muñoz; Pedro E. Maldonado Emotional content modulates attentional visual orientation during free viewing of natural images Journal Article In: Frontiers in Human Neuroscience, vol. 12, pp. 459, 2018. @article{Astudillo2018, Visual attention is the process that enables us to select relevant visual stimuli in our environment to achieve a goal or perform adaptive behaviors. In this process, bottom-up mechanisms interact with top-down mechanisms underlying the automatic and voluntary orienting of attention. Cognitive functions, such as emotional processing, can influence visual attention by increasing or decreasing the resources destined for processing stimuli. The relationship between attention and emotion has been explored mainly in the field of automatic attentional capturing; typically, emotional stimuli are suddenly presented and detection rates or reaction times are recorded. Unlike these paradigms, natural visual scenes may comprise multiple stimuli with different emotional valences. In this setting, the mechanisms supporting voluntary visual orientation, under the influence of the emotional components of stimuli, are unknown. We employed a mosaic of pictures with different emotional valences (positive, negative, and neutral) and explored the dynamics of attentional visual orientation, assessed by eye tracking and measurements of pupil diameter. We found that pictures with affective content display increased dwelling times when compared to neutral pictures with a larger effect for negative pictures. The valence, regardless of the arousal levels, was the main factor driving the behavioral modulation of visual orientation. On the other hand, the visual exploration was accompanied by a systematic pupillary response, with the pupil contraction and dilation influenced by the arousal levels, with minor effects driven by the valence. 
Our results emphasize that arousal and valence should be considered different dimensions of emotional processing both interacting with cognitive processes such as visual attention. |
Étienne Aumont; Veronique D. Bohbot; Gregory L. West Spatial learners display enhanced oculomotor performance Journal Article In: Journal of Cognitive Psychology, vol. 30, no. 8, pp. 872–879, 2018. @article{Aumont2018, Attention is important during navigation processes that rely on a cognitive map, as spatial relationships between environmental landmarks need to be selected, encoded, and learned. Spatial learners navigate using this process of cognitive map formation, which relies on the hippocampus. Conversely, response learners memorise a series of actions to navigate, which relies on the caudate nucleus. The present study aimed to investigate the relationship between spatial learning and oculomotor performance. We tested 23 response learners and 23 spatial learners, as determined by the 4-on-8 virtual maze, on an antisaccade task with a gap and emotional visual stimulus manipulation. Spatial learners displayed decreased saccadic reaction time latencies compared to response learners. Performance cost from the gap manipulation was significantly higher in response learners. These results could represent an attentional practice effect through the use of spatial strategies during navigation or a more global increase in cognitive function amongst spatial learners. |
Vladislav Ayzenberg; Meghan R. Hickey; Stella F. Lourenco Pupillometry reveals the physiological underpinnings of the aversion to holes Journal Article In: PeerJ, vol. 6, pp. 1–19, 2018. @article{Ayzenberg2018, An unusual, but common, aversion to images with clusters of holes is known as trypophobia. Recent research suggests that trypophobic reactions are caused by visual spectral properties also present in aversive images of evolutionary threatening animals (e.g., snakes and spiders). However, despite similar spectral properties, it remains unknown whether there is a shared emotional response to holes and threatening animals. Whereas snakes and spiders are known to elicit a fear reaction, associated with the sympathetic nervous system, anecdotal reports from self-described trypophobes suggest reactions more consistent with disgust, which is associated with activation of the parasympathetic nervous system. Here we used pupillometry in a novel attempt to uncover the distinct emotional response associated with a trypophobic response to holes. Across two experiments, images of holes elicited greater constriction compared to images of threatening animals and neutral images. Moreover, this effect held when controlling for level of arousal and accounting for the pupil grating response. This pattern of pupillary response is consistent with involvement of the parasympathetic nervous system and suggests a disgust, not a fear, response to images of holes. Although general aversion may be rooted in shared visual-spectral properties, we propose that the specific emotion is determined by cognitive appraisal of the distinct image content. |
Habiba Azab; Benjamin Y. Hayden Correlates of economic decisions in the dorsal and subgenual anterior cingulate cortices Journal Article In: European Journal of Neuroscience, vol. 47, no. 8, pp. 979–993, 2018. @article{Azab2018, The anterior cingulate cortex can be divided into distinct ventral (subgenual, sgACC) and dorsal (dACC) portions. The role of dACC in value-based decision-making is hotly debated, while the role of sgACC is poorly understood. We recorded neuronal activity in both regions in rhesus macaques performing a token-gambling task. We find that both encode many of the same variables, including integrated offered values of gambles, primary as well as secondary reward outcomes, number of current tokens and anticipated rewards. Both regions exhibit memory traces for offer values and putative value comparison signals. Both regions use a consistent scheme to encode the value of the attended option. This result suggests that neurones do not appear to be specialized for specific offers (that is, neurones use an attentional as opposed to labelled line coding scheme). We also observed some differences between the two regions: (i) coding strengths in dACC were consistently greater than those in sgACC, (ii) neurones in sgACC responded especially to losses and in anticipation of primary rewards, while those in dACC showed more balanced responding and (iii) responses to the first offer were slightly faster in sgACC. These results indicate that sgACC and dACC have some functional overlap in economic choice, and are consistent with the idea, inspired by neuroanatomy, that sgACC may serve as input to dACC. |
Theda Backen; Stefan Treue; Julio C. Martinez-Trujillo Encoding of spatial attention by primate prefrontal cortex neuronal ensembles Journal Article In: eNeuro, vol. 5, no. 1, pp. 1–19, 2018. @article{Backen2018, Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared to the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and non-spatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. 
Significance Statement Single neurons in the primate lateral prefrontal cortex (LPFC) are known to encode the spatial location of attended stimuli as well as other visual features. Here, we investigate how these single neuron coding properties translate into how ensembles of neurons encode information. Our results show that LPFC neuronal ensembles encode both the allocation of attention and the direction of motion of moving stimuli with higher efficiency than single units. Furthermore, relatively small ensembles reach the same decoding accuracy as the full ensembles. Our findings indicate that information coding by neuronal ensembles within the LPFC depends on complex network properties that cannot be solely estimated from coding properties of individual neurons. |
Brett Bahle; Valerie M. Beck; Andrew Hollingworth The architecture of interaction between visual working memory and visual attention Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 44, no. 7, pp. 992–1011, 2018. @article{Bahle2018, In five experiments, we examined whether a task-irrelevant item in visual working memory (VWM) interacts with perceptual selection when VWM must also be used to maintain a template representation of a search target. This question is critical to distinguishing between competing theories specifying the architecture of interaction between VWM and attention. The single-item template hypothesis (SIT) posits that only a single item in VWM can be maintained in a state that interacts with attention. Thus, the secondary item should be inert with respect to attentional guidance. The multiple-item template hypothesis (MIT) posits that multiple items can be maintained in a state that interacts with attention; thus, both the target representation and the secondary item should be capable of guiding selection. This question has been addressed previously in attention capture studies, but the results have been ambiguous. Here, we modified these earlier paradigms to optimize sensitivity to capture. Capture by a distractor matching the secondary item in VWM was observed consistently across multiple types of search task (abstract arrays and natural scenes), multiple dependent measures (search RT and oculomotor capture), multiple memory dimensions (color and shape), and multiple search stimulus dimensions (color, shape, common objects), providing strong support for the MIT. |
Zahra Bahmani; Mohammad Reza Daliri; Yaser Merrikhi; Kelsey Clark; Behrad Noudoost Working memory enhances cortical representations via spatially specific coordination of spike times Journal Article In: Neuron, vol. 97, no. 4, pp. 967–979.e6, 2018. @article{Bahmani2018, The online maintenance and manipulation of information in working memory (WM) is essential for guiding behavior based on our goals. Understanding how WM alters sensory processing in pursuit of different behavioral objectives is therefore crucial to establish the neural basis of our goal-directed behavior. Here we show that, in the middle temporal (MT) area of rhesus monkeys, the power of the local field potentials in the αβ band (8–25 Hz) increases, reflecting the remembered location and the animal's performance. Moreover, the content of WM determines how coherently MT sites oscillate and how synchronized spikes are relative to these oscillations. These changes in spike timing are not only sufficient to carry sensory and memory information, they can also account for WM-induced sensory enhancement. These results provide a mechanistic-level understanding of how WM alters sensory processing by coordinating the timing of spikes across the neuronal population, enhancing the sensory representation of WM targets. When examining primate extrastriate visual responses, Bahmani et al. find that, in the absence of rate changes, working memory mainly affects αβ oscillations and spike timing. These changes are associated with better visual processing, suggesting how working memory benefits sensory areas. |
Robin S. Baker; Henry W. Fields; F. Michael Beck; Allen R. Firestone; Stephen F. Rosenstiel Objective assessment of the contribution of dental esthetics and facial attractiveness in men via eye tracking Journal Article In: American Journal of Orthodontics and Dentofacial Orthopedics, vol. 153, no. 4, pp. 523–533, 2018. @article{Baker2018, Introduction: Recently, greater emphasis has been placed on smile esthetics in dentistry. Eye tracking has been used to objectively evaluate attention to the dentition (mouth) in female models with different levels of dental esthetics quantified by the aesthetic component of the Index of Orthodontic Treatment Need (IOTN). This has not been accomplished in men. Our objective was to determine the visual attention to the mouth in men with different levels of dental esthetics (IOTN levels) and background facial attractiveness, for both male and female raters, using eye tracking. Methods: Facial images of men rated as unattractive, average, and attractive were digitally manipulated and paired with validated oral images, IOTN levels 1 (no treatment need), 7 (borderline treatment need), and 10 (definite treatment need). Sixty-four raters meeting the inclusion criteria were included in the data analysis. Each rater was calibrated in the eye tracker and randomly viewed the composite images for 3 seconds, twice for reliability. Results: Reliability was good or excellent (intraclass correlation coefficients, 0.6-0.9). Significant interactions were observed with factorial repeated-measures analysis of variance and the Tukey-Kramer method for density and duration of fixations in the interactions of model facial attractiveness by area of the face (P <0.0001, P <0.0001, respectively), dental esthetics (IOTN) by area of the face (P <0.0001, P <0.0001, respectively), and rater sex by area of the face (P = 0.0166 |
Romy S. Bakker; Luc P. J. Selen; W. Pieter Medendorp Reference frames in the decisions of hand choice Journal Article In: Journal of Neurophysiology, vol. 119, pp. 1809–1817, 2018. @article{Bakker2018, For the brain to decide on a reaching movement, it needs to select which hand to use. A number of body-centered factors affect this decision, such as the anticipated movement costs of each arm, recent choice success, handedness, and task demands. While the position of each hand relative to the target is also known to be an important spatial factor, it is unclear which reference frames coordinate the spatial aspects in the decisions of hand choice. Here we tested the role of gaze- and head-centered reference frames in a hand selection task. With their head and gaze oriented in different directions, we measured hand choice of 19 right-handed subjects instructed to make unimanual reaching movements to targets at various directions relative to their body. Using an adaptive procedure, we determined the target angle that led to equiprobable right/left hand choices. When gaze remained fixed relative to the body this balanced target angle shifted systematically with head orientation, and when head orientation remained fixed this choice measure shifted with gaze. These results suggest that a mixture of head- and gaze-centered reference frames is involved in the spatially guided decisions of hand choice, perhaps to flexibly bind this process to the mechanisms of target selection. |
Sabrina Baldofski; Patrick Lüthold; Ingmar Sperling; Anja Hilbert Visual attention to pictorial food stimuli in individuals with night eating syndrome: an eye-tracking study Journal Article In: Behavior Therapy, vol. 49, no. 2, pp. 262–272, 2018. @article{Baldofski2018, Night eating syndrome (NES) is characterized by excessive evening and/or nocturnal eating episodes. Studies indicate an attentional bias towards food in other eating disorders. For NES, however, evidence of attentional food processing is lacking. Attention towards food and non-food stimuli was compared using eye-tracking in 19 participants with NES and 19 matched controls without eating disorders during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in initial fixation position or gaze duration. However, a significant orienting bias to food compared to non-food was found within the NES group, but not in controls. A significant attentional maintenance bias to non-food compared to food was found in both groups. Detection times did not differ between groups in the search task. Only in NES, attention to and faster detection of non-food stimuli were related to higher BMI and more evening eating episodes. The results might indicate an attentional approach-avoidance pattern towards food in NES. However, further studies should clarify the implications of attentional mechanisms for the etiology and maintenance of NES. |
Munirah Bangee; Pamela Qualter Examining the visual processing patterns of lonely adults Journal Article In: Scandinavian Journal of Psychology, vol. 59, no. 4, pp. 351–359, 2018. @article{Bangee2018, Prior research has shown that loneliness is associated with hypervigilance to social threats, with eye‐tracking research showing lonely people display a specific attentional bias when viewing social rejection and social exclusion video footage (Bangee, Harris, Bridges, Rotenberg & Qualter, 2014; Qualter, Rotenberg, Barrett et al., 2013). The current study uses eye‐tracker methodology to examine whether that attentional bias extends to negative emotional faces and negative social non‐rejecting stimuli, or whether it could be explained only as a specific bias to social rejection/exclusion. It is important to establish whether loneliness relates to a specific or general attention bias because it may explain the maintenance of loneliness. Participants (N = 43 |
Bengi Baran; David Correll; Tessa C. Vuper; Alexandra Morgan; Simon J. Durrant; Dara S. Manoach; Robert Stickgold Spared and impaired sleep-dependent memory consolidation in schizophrenia Journal Article In: Schizophrenia Research, vol. 199, pp. 83–89, 2018. @article{Baran2018, Objective: Cognitive deficits in schizophrenia are the strongest predictor of disability and effective treatment is lacking. This reflects our limited mechanistic understanding and consequent lack of treatment targets. In schizophrenia, impaired sleep-dependent memory consolidation correlates with reduced sleep spindle activity, suggesting sleep spindles as a potentially treatable mechanism. In the present study we investigated whether sleep-dependent memory consolidation deficits in schizophrenia are selective. Methods: Schizophrenia patients and healthy individuals performed three tasks that have been shown to undergo sleep-dependent consolidation: the Word Pair Task (verbal declarative memory), the Visual Discrimination Task (visuoperceptual procedural memory), and the Tone Task (statistical learning). Memory consolidation was tested 24 h later, after a night of sleep. Results: Compared with controls, schizophrenia patients showed reduced overnight consolidation of word pair learning. In contrast, both groups showed similar significant overnight consolidation of visuoperceptual procedural memory. Neither group showed overnight consolidation of statistical learning. Conclusion: The present findings extend the known deficits in sleep-dependent memory consolidation in schizophrenia to verbal declarative memory, a core, disabling cognitive deficit. In contrast, visuoperceptual procedural memory was spared. These findings support the hypothesis that sleep-dependent memory consolidation deficits in schizophrenia are selective, possibly limited to tasks that rely on spindles. 
These findings reinforce the importance ofdeficient sleep-dependent memory consolidation among the cognitive deficits ofschizophrenia and suggest sleep physiology as a potentially treatable mechanism. |
Antoine Barbot; Sirui Liu; Ruth Kimchi; Marisa Carrasco Attention enhances apparent perceptual organization Journal Article In: Psychonomic Bulletin & Review, vol. 25, no. 5, pp. 1824–1832, 2018. @article{Barbot2018, Perceptual organization and selective attention are two crucial processes that influence how we perceive visual information. The former structures complex visual inputs into coherent units, whereas the latter selects relevant information. Attention and perceptual organization can modulate each other, affecting visual processing and performance in various tasks and conditions. Here, we tested whether attention can alter the way multiple elements appear to be perceptually organized. We manipulated covert spatial attention using a rapid serial visual presentation task, and measured perceptual organization of two multielement arrays organized by luminance similarity as rows or columns, at both the attended and unattended locations. We found that the apparent perceptual organization of the multielement arrays is intensified when attended and attenuated when unattended. We ruled out response bias as an alternative explanation. These findings reveal that attention enhances the appearance of perceptual organization, a midlevel vision process, altering the way we perceive our visual environment. |
Allison M. Londerée; Megan E. Roberts; Mary E. Wewers; Ellen Peters; Amy K. Ferketich; Dylan D. Wagner Adolescent attentional bias toward real-world flavored e-cigarette marketing Journal Article In: Tobacco Regulatory Science, vol. 4, no. 6, pp. 57–65, 2018. @article{Londeree2018, Objectives: E-cigarettes are now the most commonly-used tobacco product among adolescents; yet, little work has examined how the appealing food and flavor cues used in their marketing might attract adolescents' attention, thereby increasing willingness to try these products. In the present study, we tested whether advertisements for fruit/sweet/savory-flavored (“flavored”) e-cigarettes attracted adolescent attention in real-world scenes more than tobacco flavored (“unflavored”) e-cigarettes. Additionally, we examined the relationship between adolescent attentional bias and willingness to try flavored e-cigarettes. Methods: Participants were 46 adolescents (age range: 16-18 years). All participants took part in an eye-tracking paradigm that examined attentional bias to flavored and unflavored e-cigarette advertisements embedded in pictures of real-world storefront scenes. Afterwards, participants' willingness to try flavored and unflavored e-cigarettes was assessed. Results: In support of our primary hypothesis, adolescents looked longer and fixated more frequently on flavored (vs unflavored) e-cigarette advertisements. Moreover, this attentional bias towards flavored e-cigarette advertisements predicted a greater willingness to try flavored vs unflavored e-cigarettes. Conclusions: These findings suggest that flavored e-cigarette marketing attracts the attention of adolescents, increases their willingness to try flavored e-cigarette products, and could, therefore, put them at greater risk for tobacco initiation. |
Elizabeth S. Lorenc; Kartik K. Sreenivasan; Derek E. Nee; Annelinde R. E. Vandenbroucke; Mark D'Esposito Flexible coding of visual working memory representations during distraction Journal Article In: Journal of Neuroscience, vol. 38, no. 23, pp. 5267–5276, 2018. @article{Lorenc2018, Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a “sensory recruitment” model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. |
Gerard M. Loughnane; Daniel P. Newman; Sarita Tamang; Simon P. Kelly; Redmond G. O'Connell Antagonistic interactions between microsaccades and evidence accumulation processes during decision formation Journal Article In: Journal of Neuroscience, vol. 38, no. 9, pp. 2163–2176, 2018. @article{Loughnane2018, Despite their small size, microsaccades can impede stimulus detections if executed at inopportune times. Although it has been shown that microsaccades evoke both inhibitory and excitatory responses across different visual regions, their impact on the higher-level neural decision processes that bridge sensory responses to action selection has yet to be examined. Here, we show that when human observers monitor stimuli for subtle feature changes, the occurrence of microsaccades long after (up to 800 ms) change onset predicts slower reaction times and this is accounted for by momentary suppression of neural signals at each key stage of decision formation: visual evidence encoding, evidence accumulation, and motor preparation. Our data further reveal that, independent of the timing of the change events, the onset of neural decision formation coincides with a systematic inhibition of microsaccade production, persisting until the perceptual report is executed. Our combined behavioral and neural measures highlight antagonistic interactions between microsaccade occurrence and evidence accumulation during visual decision-making tasks. |
Eric Lowet; Bruno Gomes; Karthik Srinivasan; Huihui Zhou; Robert John Schafer; Robert Desimone Enhanced neural processing by covert attention only during microsaccades directed toward the attended stimulus Journal Article In: Neuron, vol. 99, no. 1, pp. 207–214.e3, 2018. @article{Lowet2018, Attention can be “covertly” directed without eye movements; yet, even during fixation, there are continuous microsaccades (MSs). In areas V4 and IT of macaques, we found that firing rates and stimulus representations were enhanced by attention but only following a MS toward the attended stimulus. The onset of neural attentional modulations was tightly coupled to the MS onset. The results reveal a major link between the effects of covert attention on cortical visual processing and the overt movement of the eyes. |
Qilin Lu; Xiaoxiao Wang; Lin Li; Bensheng Qiu; Shihui Wei; Bernhard A. Sabel; Yifeng Zhou Visual rehabilitation training alters attentional networks in hemianopia: An fMRI study Journal Article In: Clinical Neurophysiology, vol. 129, no. 9, pp. 1832–1841, 2018. @article{Lu2018c, Objective: Hemianopia is a visual field defect following post-chiasmatic damage. We now applied functional magnetic resonance imaging (fMRI) in hemianopic patients before and after visual rehabilitation training (VRT) to examine the impact of VRT on attentional function networks. Methods: Seven chronic hemianopic patients with post-chiasmatic lesions carried out a VRT for five weeks under fixation control. Before vs. after intervention, we assessed the area of residual vision (ARV), contrast sensitivity function (CSF) and functional MRI data and correlated them with each other. Results: VRT significantly improved the visual function of grating detection at the training location. Using fMRI, we found that the training led to a strengthening of connectivity between the right temporoparietal junction (rTPJ) to the insula and the anterior cingulate cortex (ACC), all of which belong to the cortical attentional network. However, no significant correlation between alterations of brain activity and improvements of either CSF or ARV was found. Conclusion: Visual rehabilitation training partially restored the deficient visual field sectors and could improve attentional network function in hemianopia. Significance: Our MRI results highlight the role of attention and the rTPJ activation as one, but not the only, component of VRT in hemianopia. |
Yiliang Lu; Jiapeng Yin; Zheyuan Chen; Hongliang Gong; Ye Liu; Liling Qian; Xiaohong Li; Rui Liu; Ian Max Andolina; Wei Wang Revealing detail along the visual hierarchy: Neural clustering preserves acuity from V1 to V4 Journal Article In: Neuron, vol. 98, no. 2, pp. 417–428.e3, 2018. @article{Lu2018a, How primates perceive objects along with their detailed features remains a mystery. This ability to make fine visual discriminations depends upon a high-acuity analysis of spatial frequency (SF) along the visual hierarchy from V1 to inferotemporal cortex. By studying the transformation of SF across macaque parafoveal V1, V2, and V4, we discovered SF-selective functional domains in V4 encoding higher SFs up to 12 cycles/°. These intermittent higher-SF-selective domains, surrounded by domains encoding lower SFs, violate the inverse relationship between SF preference and retinal eccentricity. The neural activities of higher- and lower-SF domains correspond to local and global features, respectively, of the same stimuli. Neural response latencies in high-SF domains are around 10 ms later than in low-SF domains, consistent with the coarse-to-fine nature of perception. Thus, our finding of preserved resolution from V1 into V4, separated both spatially and temporally, may serve as a connecting link for detailed object representation. How do we perceive scenes or objects yet resolve their fine details? Lu et al. found that high spatial detail organizes in spatiotemporally separated neural clusters within primate intermediate area V4, preserving visual acuity from early toward higher cortical areas. |
Zhanglong Lu An eye movement study on the relationship between multiple implicit sequence learning and attention Journal Article In: Psychology and Behavioral Sciences, vol. 7, no. 1, pp. 8–13, 2018. @article{Lu2018b, The purpose of this study was to explore the relationship between multiple implicit sequence learning and attention. A one-factor between-subjects experimental design was used, with attentional load (low vs. high) as between-subjects variable. Eye-movement technology was adopted, with saccadic reaction time as the dependent measure. Forty healthy volunteers were randomly assigned to high attentional load and low attentional load conditions. The results showed that: (1) saccadic reaction time was longer in the high attentional load condition than in the low attentional load condition; (2) both the primary and secondary sequences could be learned under either low or high attentional load; (3) sequence learning scores did not differ between the primary and secondary sequences. These findings suggest that there are no attentional limitations on multiple implicit sequence learning. |
Zhanglong Lu; Jieqiong Lin; Xiaoyu Li An experimental study on relationship between subliminal emotion and implicit sequence learning: Evidence from eye movements Journal Article In: International Journal of Psychological and Brain Sciences, vol. 3, no. 1, pp. 1–6, 2018. @article{Lu2018d, The relationship between emotion and implicit sequence learning is one of the basic questions in the field of implicit learning. The current study adopted the serial reaction time (SRT) task paradigm and eye-movement technology to explore the effect of emotion on implicit learning. A one-factor between-subjects experimental design was used, with subliminal emotion (positive vs. negative) as between-subjects variable. The dependent measure was saccadic reaction time. Results were as follows: (1) saccadic reaction time was shorter in the positive emotion group than in the negative emotion group; (2) saccadic reaction time decreased across blocks; (3) there was a significant interaction between group and block, and simple effect analysis indicated that saccadic reaction time in the positive emotion group decreased across blocks, while there was no significant block effect in the negative emotion group; (4) the amount of implicit sequence learning was significantly higher in the positive emotion group than in the negative emotion group. The findings suggest that positive emotion promotes implicit sequence learning. |
Zijian Lu; Mathias Klinghammer; Katja Fiehler The role of gaze and prior knowledge on allocentric coding of reach targets Journal Article In: Journal of Vision, vol. 18, no. 4, pp. 22, 2018. @article{Lu2018, In this study, we investigated the influence of gaze and prior knowledge about the reach target on the use of allocentric information for memory-guided reaching. Participants viewed a breakfast scene with five objects in the background and six objects on the table. Table objects served as potential reach targets. Participants first encoded the scene and, after a short delay, a test scene was presented with one table object missing and one, three, or five table objects horizontally shifted in the same direction. Participants performed a memory-guided reaching movement toward the position of the missing object on a blank screen. In order to examine the influence of gaze, participants either freely moved their gaze (free-view) or kept gaze at a fixation point (fixation) throughout the trial. The effect of prior knowledge was investigated by informing participants about the reach target either before (preview) or after (nonpreview) scene encoding. Our results demonstrate that humans use allocentric information for reaching even if a stable retinal reference is available. However, allocentric coding of reach targets is stronger when gaze is free and prior knowledge about the reach target is missing. |
Heather D. Lucas; Melissa C. Duff; Neal J. Cohen The hippocampus promotes effective saccadic information gathering in humans Journal Article In: Journal of Cognitive Neuroscience, vol. 31, no. 2, pp. 186–201, 2018. @article{Lucas2018, It is well established that the hippocampus is critical for memory. Recent evidence suggests that one function of hippocampal memory processing is to optimize how people actively explore the world. Here we demonstrate that the link between the hippocampus and exploration extends even to the moment-to-moment use of eye movements during visuospatial memory encoding. In Experiment 1, we examined relationships between study-phase eye movements in healthy individuals and subsequent performance on a spatial reconstruction test. In addition to quantitative measures of viewing behaviors (e.g., how many fixations or saccades were deployed during study), we used the information–theoretic measure of entropy to assess the amount of randomness or disorganization in participants' scanning behaviors. We found that the use of scanpaths during study that were lower in entropy (e.g., more organized, less random) predicted more accurate spatial reconstruction both within and between participants. Scanpath entropy was a better predictor of reconstruction accuracy than were the quantitative measures of viewing. In Experiment 2, we found that individuals with hippocampal amnesia tended to engage in viewing patterns that were higher in entropy (less organized) relative to healthy comparisons. These findings reveal a critical role of the hippocampus in guiding eye movement exploration to optimize visuospatial relational memory. |
Steven G. Luke; Emily S. Darowski; Shawn D. Gale Predicting eye-movement characteristics across multiple tasks from working memory and executive control Journal Article In: Memory & Cognition, vol. 46, no. 5, pp. 826–839, 2018. @article{Luke2018b, Individual differences in working memory (WM) and executive control are stable, related to cognitive task performance, and clinically predictive. Between-participant differences in eye movements are also highly reliable (Carter & Luke, Journal of Experimental Psychology: Human Perception and Performance, 2018; Henderson & Luke, Journal of Experimental Psychology: Human Perception and Performance, 40(4), 1390–1400, 2014). However, little is known about how higher order individual differences in cognition are related to these eye-movement characteristics. In the present study, healthy college-age participants performed several individual difference tasks to measure WM span and executive control. Participants also performed three eye-movement tasks: reading, visual search, and scene viewing. Across all tasks, higher WM scores were related to reduced skewness in fixation duration distributions. In reading, higher WM scores predicted longer saccades. In scene viewing, higher WM scores predicted longer fixations. Theoretical and clinical implications of these findings are discussed. |
Thomas Zhihao Luo; John H. R. Maunsell Attentional changes in either criterion or sensitivity are associated with robust modulations in lateral prefrontal cortex Journal Article In: Neuron, vol. 97, no. 6, pp. 1382–1393.e7, 2018. @article{Luo2018a, Visual attention is associated with neuronal changes across the brain, and these widespread signals are generally assumed to underlie a unitary mechanism of attention. However, using signal detection theory, attention-related effects on performance can be partitioned into changes in either the subject's criterion or sensitivity. Neuronal modulations associated with only sensitivity changes were previously observed in visual cortex, raising questions about which structures mediate attention-related changes in criterion and whether individual neurons are involved in multiple components of attention. Here, we recorded from monkey lateral prefrontal cortex (LPFC) and found that, in contrast to visual cortex, neurons in LPFC changed their firing rates, pairwise correlation, and Fano factor when subjects changed either their criterion or their sensitivity. These results indicate that attention-related neuronal modulations in separate brain regions are not a monolithic signal and instead can be linked to distinct behavioral changes. Luo and Maunsell show that the modulations in prefrontal cortex correspond to multiple components of attention and differ from modulations in visual cortex, indicating that different brain structures underlie distinct attentional mechanisms and that attention is not a unitary process. |
Patrick Lüthold; Junpeng Lao; Lingnan He; Xinyue Zhou; Roberto Caldara Waldo reveals cultural differences in return fixations Journal Article In: Visual Cognition, vol. 26, no. 10, pp. 817–830, 2018. @article{Luethold2018, Humans routinely perform visual search towards targets to adapt to the environment. These sequences of ballistic eye movements are shaped by a combination of top-down and bottom-up factors. Recent research documented that human observers display cultural-specific fixation patterns in a range of visual processing tasks. In particular, eye movement strategies extracting information from faces clearly differ between Western Caucasian (WC) and East Asian (EA) observers. However, whether such cultural differences are also present for visual scene processing remains debated. To this aim, we recorded the eye movements of WC and EA observers while they were solving visual search problems parametrically varying in difficulty: Where's Waldo. Both groups had a comparable familiarity with the Waldo books, reaching a comparable level of accuracy in target detection. Both cultural groups also showed a comparable temporal effect on inhibition of return, with longer fixation durations when saccades were performed to a return location compared to other locations. Westerners, however, located Waldo faster than Easterners. Interestingly, this modulation of speed was likely related to differences occurring on the low-level mechanisms of spatial inhibition of return, with EA observers returning more often to previously visited locations than the WC observers. This suboptimal eye movement strategy in the Easterners might be engendered by their cultural perceptual bias consisting of a greater use of extra-foveal information. Overall, our data point towards the existence of a subtle, but significant difference in the processing of visual scenes across observers from different cultures during active visual search. |
Liya Ma; Kevin J. Skoblenick; Kevin D. Johnston; Stefan Everling Ketamine alters lateral prefrontal oscillations in a rule-based working memory task Journal Article In: Journal of Neuroscience, vol. 38, no. 10, pp. 2482–2494, 2018. @article{Ma2018a, Acute administration of N-methyl-D-aspartate receptor (NMDAR) antagonists in healthy humans and animals produces working memory deficits similar to those observed in schizophrenia. However, it is unclear whether they also lead to altered low-frequency (<=60Hz) neural oscillatory activities similar to those associated with schizophrenia during working memory processes. Here we recorded local field potentials (LFPs) and single unit activity from the lateral prefrontal cortex (LPFC) of three male rhesus macaque monkeys while they performed a rule-based prosaccade and antisaccade working memory task, both before and after systemic injections of a subanesthetic dose (<=0.7mg/kg) of ketamine. Accompanying working-memory impairment, ketamine enhanced the low gamma band (30-60Hz) and dampened the beta band (13-30Hz) oscillatory activities in the LPFC during both delay periods and inter-trial intervals. It also increased task-related alpha-band activities, likely reflecting compromised attention. Beta-band oscillations may be especially relevant to working memory processes, as stronger beta power weakly but significantly predicted shorter saccadic reaction time. Also in beta band, ketamine reduced the performance-related oscillation as well as the rule information encoded in the spectral power. Ketamine also reduced rule information in the spike-field phase consistency in almost all frequencies up to 60Hz. Our findings support NMDAR antagonists in non-human primates as a meaningful model for altered neural oscillations and synchrony, which reflect a disorganized network underlying the working memory deficits in schizophrenia. |
Richard J. Macatee; Katherine A. McDermott; Brian J. Albanese; Norman B. Schmidt; Jesse R. Cougle Distress intolerance moderation of attention to emotion: An eye-tracking study Journal Article In: Cognitive Therapy and Research, vol. 42, no. 1, pp. 48–62, 2018. @article{Macatee2018, Distress intolerance (DI) is an important individual difference reflective of the inability to endure aversive affective states and is relevant to multiple clinical populations, but underlying emotional processing mechanisms remain unclear. The current study used eye-tracking to examine biased attention towards emotional stimuli at baseline and in the context of acute stress in a non-clinical sample (N = 165). We hypothesized that DI would incrementally predict greater stressor-elicited increases in sustained/delayed disengagement, but not initial orientation/facilitated engagement negative (i.e., threat, dysphoric) attention biases, and that DI's association with maladaptive stress regulation would depend on these increases. Partially consistent with predictions, DI was only independently associated with stressor-elicited increases in sustained negative bias and, unexpectedly, decreases in sustained positive bias. Further, DI and change in sustained threat bias marginally interacted to predict cardiovascular but not subjective anxious mood recovery. Theoretical implications are discussed. |
W. Joseph MacInnes; Roopali Bhatnagar No supplementary evidence of attention to a spatial cue when saccadic facilitation is absent Journal Article In: Scientific Reports, vol. 8, pp. 13289, 2018. @article{MacInnes2018, Attending a location in space facilitates responses to targets at that location when the time between cue and target is short. Certain types of exogenous cues – such as sudden peripheral onsets – have been described as reflexive and automatic. Recent studies, however, have shown many cases where exogenous cues are less automatic than previously believed and do not always result in facilitation. A lack of behavioral facilitation, however, does not automatically necessitate a lack of underlying attention to that location. We test exogenous cueing in two experiments where facilitation is and is not likely to be observed with saccadic responses. We also test alternate measures linked to the allocation of attention such as saccadic curvature, microsaccades and pupil size. As expected, we find early facilitation as measured by saccadic reaction time when CTOAs are predictable but not when they are randomized within a block. We find no impact of the cue on microsaccade direction for either experiment, and only a slight dip in the frequency of microsaccades after the cue. We do find that change in pupil size to the cue predicts the magnitude of the validity effect, but only in the experiment where facilitation was observed. In both experiments, we observed a tendency for saccadic curvature to deviate away from the cued location and this was stronger for early CTOAs and toward vertical targets. Overall, we find that only change in pupil size is consistent with observed facilitation. Saccadic curvature is influenced by the onset of the cue, but its direction is indicative of oculomotor inhibition whether we see RT facilitation or not. Microsaccades were not diagnostic in either experiment. Finally, we see little to no evidence of attention at the cued location in any additional measures when facilitation of saccadic responses is absent. |
Christopher R. Madan; Janine Bayer; Matthias Gamer; Tina B. Lonsdorf; Tobias Sommer Visual complexity and affect: Ratings reflect more than meets the eye Journal Article In: Frontiers in Psychology, vol. 8, pp. 2368, 2018. @article{Madan2018, Pictorial stimuli can vary on many dimensions, several aspects of which are captured by ‘visual complexity'. Visual complexity can be described as, “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing, which might bias perception. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated, but it did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli. |
Kazutaka Maeda; Jun Kunimatsu; Okihide Hikosaka Amygdala activity for the modulation of goal-directed behavior in emotional contexts Journal Article In: PLoS Biology, vol. 16, no. 6, pp. e2005339, 2018. @article{Maeda2018, Choosing valuable objects and rewarding actions is critical for survival. While such choices must be made in a way that suits the animal's circumstances, the neural mechanisms underlying such context-appropriate behavior are unclear. To address this question, we devised a context-dependent reward-seeking task for macaque monkeys. Each trial started with the appearance of one of many visual scenes containing two or more objects, and the monkey had to choose the good object by saccade to get a reward. These scenes were categorized into two dimensions of emotional context: dangerous versus safe and rich versus poor. We found that many amygdala neurons were more strongly activated by dangerous scenes, by rich scenes, or by both. Furthermore, saccades to target objects occurred more quickly in dangerous than in safe scenes and were also quicker in rich than in poor scenes. Thus, amygdala neuronal activity and saccadic reaction times were negatively correlated in each monkey. These results suggest that amygdala neurons facilitate targeting saccades predictably based on aspects of emotional context, as is necessary for goal-directed and social behavior. |
Aoife Mahon; Alasdair D. F. Clarke; Amelia R. Hunt The role of attention in eye-movement awareness Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 7, pp. 1691–1704, 2018. @article{Mahon2018, People are unable to accurately report on their own eye movements most of the time. Can this be explained as a lack of attention to the objects we fixate? Here, we elicited eye-movement errors using the classic oculomotor capture paradigm, in which people tend to look at sudden onsets even when they are irrelevant. In the first experiment, participants were able to report their own errors on about a quarter of the trials on which they occurred. The aim of the second experiment was to assess what differentiates errors that are detected from those that are not. Specifically, we estimated the relative influence of two possible factors: how long the onset distractor was fixated (dwell time), and a measure of how much attention was allocated to the onset distractor. Longer dwell times were associated with awareness of the error, but the measure of attention was not. The effect of the distractor identity on target discrimination reaction time was similar whether or not the participant was aware they had fixated the distractor. The results suggest that both attentional and oculomotor capture can occur in the absence of awareness, and have important implications for our understanding of the relationship between attention, eye movements, and awareness. |
Guido Maiello; MiYoung Kwon; Peter J. Bex Three-dimensional binocular eye–hand coordination in normal vision and with simulated visual impairment Journal Article In: Experimental Brain Research, vol. 236, no. 3, pp. 691–709, 2018. @article{Maiello2018, Sensorimotor coupling in healthy humans is demonstrated by the higher accuracy of visually tracking intrinsically—rather than extrinsically—generated hand movements in the fronto-parallel plane. It is unknown whether this coupling also facilitates vergence eye movements for tracking objects in depth, or can overcome symmetric or asymmetric binocular visual impairments. Human observers were therefore asked to track with their gaze a target moving horizontally or in depth. The movement of the target was either directly controlled by the observer's hand or followed hand movements executed by the observer in a previous trial. Visual impairments were simulated by blurring stimuli independently in each eye. Accuracy was higher for self-generated movements in all conditions, demonstrating that motor signals are employed by the oculomotor system to improve the accuracy of vergence as well as horizontal eye movements. Asymmetric monocular blur affected horizontal tracking less than symmetric binocular blur, but impaired tracking in depth as much as binocular blur. There was a critical blur level up to which pursuit and vergence eye movements maintained tracking accuracy independent of blur level. Hand–eye coordination may therefore help compensate for functional deficits associated with eye disease and may be employed to augment visual impairment rehabilitation. |
Manuela Malaspina; Andrea Albonico; Junpeng Lao; Roberto Caldara; Roberta Daini Mapping self-face recognition strategies in congenital prosopagnosia Journal Article In: Neuropsychology, vol. 32, no. 2, pp. 123–137, 2018. @article{Malaspina2018, OBJECTIVE: Recent evidence showed that individuals with congenital face processing impairment (congenital prosopagnosia [CP]) are highly accurate when they have to recognize their own face (self-face advantage) in an implicit matching task, with a preference for the right-half of the self-face (right perceptual bias). Yet the perceptual strategies underlying this advantage are unclear. Here, we aimed to verify whether both the self-face advantage and the right perceptual bias emerge in an explicit task, and whether those effects are linked to a different scanning strategy between the self-face and unfamiliar faces. METHOD: Eye movements were recorded from 7 CPs and 13 controls, during a self/other discrimination task of stimuli depicting the self-face and another unfamiliar face, presented upright and inverted. RESULTS: Individuals with CP and controls differed significantly in how they explored faces. In particular, compared with controls, CPs used a distinct eye movement sampling strategy for processing inverted faces, by deploying significantly more fixations toward the nose and mouth areas, which resulted in more efficient recognition. Moreover, the results confirmed the presence of a self-face advantage in both groups, but the eye movement analyses failed to reveal any differences in the exploration of the self-face compared with the unfamiliar face. Finally, no bias toward the right-half of the self-face was found. CONCLUSIONS: Our data suggest that the self-face advantage emerges both in implicit and explicit recognition tasks in CPs as much as in good recognizers, and it is not linked to any specific visual exploration strategies. |
George L. Malcolm; Edward H. Silson; Jennifer R. Henry; Chris I. Baker Transcranial magnetic stimulation to the occipital place area biases gaze during scene viewing Journal Article In: Frontiers in Human Neuroscience, vol. 12, pp. 189, 2018. @article{Malcolm2018, We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene's visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which is: (i) located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements; and (ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant's cortex, starting at scene onset. TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, there was an increased saccade latency for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). 
Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes. |
Anton Malienko; Vanessa Harrar; Aarlenne Zein Khan Contrasting effects of exogenous attention on saccades and reaches Journal Article In: Journal of Vision, vol. 18, no. 9, pp. 1–16, 2018. @article{Malienko2018, Previous studies have shown that eye and arm movements tend to be intrinsically coupled in their behavior. There is, however, no consensus on whether planning of eye and arm movements is based on shared or independent representations. One way to gain insight into these processes is to compare how exogenous attentional modulation influences the temporal and spatial characteristics of the eye and the arm during single or combined movements. Thirteen participants (M = 22.8 years old, SD = 1.5) performed single or combined movements to an eccentric target. A behaviorally irrelevant cue flashed just before the target at different locations. There was no effect of the cue on the saccade or reach amplitudes, whether they were performed alone or together. We found no differences in overall reaction times (RTs) between single and combined movements. With respect to the effect of the cue, both saccades and reaches followed a similar pattern with the shortest RTs when the cue was closest to the target, which we propose reflects effector-independent processes. Compared to when no cue was presented before the target, saccade RTs were generally inhibited by the irrelevant cue with increasing cue-target distance. In contrast, reach RTs showed strong facilitation at the target location and less facilitation at farther distances. We propose that this reflects the presence of effector-dependent processes. The similarities and differences in RTs between the saccades and reaches are consistent with effector-dependent and -independent processes working in parallel. |
Mariya E. Manahova; Pim Mostert; Peter Kok; Jan-Mathijs Schoffelen; Floris P. Lange Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream Journal Article In: Journal of Cognitive Neuroscience, vol. 30, no. 9, pp. 1366–1377, 2018. @article{Manahova2018, Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream. |
Alon Mann; Ilana Naveh; Ehud Zohary On the superiority of visual processing in spatiotopic coordinates Journal Article In: Vision Research, vol. 150, pp. 15–23, 2018. @article{Mann2018, Organisms exploit spatiotemporal regularities in the environment to optimize goal attainment. For example, in experimental conditions, repetition of a stimulus at the same position speeds up response time. A recent study reported that this spatial priming occurs even when the eyes move between trials, indicating that the target is encoded in spatiotopic coordinates (Attention, Perception & Psychophysics 78, (2016) 114–132). However, in that study, the relevant position of the repeated stimulus eliciting spatiotopic priming was always at the screen center. Using a similar paradigm, we find that reaction times for screen-centered targets are markedly shorter than for retinally-equidistant target positions. When this center preference is taken into account, the alleged spatiotopic priming effects are dramatically reduced, though not totally eliminated. In a second experiment, we show that the preferred central stimulus position is encoded in allocentric coordinates (e.g. screen position) rather than in an egocentric frame of reference (e.g. straight ahead). The better performance at the screen center, irrespective of gaze direction or seating position, is likely to reflect an optimal choice for the allocation of spatial attention. |
Alexandria C. Marino; James A. Mazer Saccades trigger predictive updating of attentional topography in area V4 Journal Article In: Neuron, vol. 98, no. 2, pp. 429–438.e4, 2018. @article{Marino2018, During natural behavior, saccades and attention act together to allocate limited neural resources. Attention is generally mediated by retinotopic visual neurons; therefore, specific neurons representing attended features change with each saccade. We investigated the neural mechanisms that allow attentional targeting in the face of saccades. Specifically, we looked for predictive changes in attentional modulation state or receptive field position that could stabilize attentional representations across saccades in area V4, known to be necessary for attention-dependent behavior. We recorded from neurons in monkeys performing a novel spatiotopic attention task, in which performance depended on accurate saccade compensation. Measurements of attentional modulation revealed a predictive attentional “hand-off” corresponding to a presaccadic transfer of attentional state from neurons inside the attentional focus before the saccade to those that will be inside the focus after the saccade. The predictive nature of the hand-off ensures that attentional brain maps are properly configured immediately after each saccade. Using a novel behavioral task, Marino and Mazer report that the attentional modulation state of neurons in extrastriate area V4 is updated before saccade onset. This attentional hand-off is independent of changes in receptive field position and represents a new type of perisaccadic updating. |
Marialuisa Martelli; Andrea Albonico; Emanuela Bricolo; Eleonora Frasson; Roberta Daini Focusing and orienting spatial attention differently modulate crowding in central and peripheral vision Journal Article In: Journal of Vision, vol. 18, no. 3, pp. 1–17, 2018. @article{Martelli2018, The allocation of attentional resources to a particular location or object in space involves two distinct processes: an orienting process and a focusing process. Indeed, it has been demonstrated that performance of different visual tasks can be improved when a cue, such as a dot, anticipates the position of the target (orienting), or when its dimensions (as in the case of a small square) inform about the size of the attentional window (focusing). Here, we examine the role of these two components of visuo-spatial attention (orienting and focusing) in modulating crowding in peripheral (Experiment 1 and Experiment 3a) and foveal (Experiment 2 and Experiment 3b) vision. The task required participants to discriminate the orientation of a target letter "T," close to acuity threshold, presented with left and right "H" flankers, as a function of target-flanker distance. Three cue types were used: a red dot, a small square, and a big square. In peripheral vision (Experiment 1 and Experiment 3a), we found a significant improvement with the red dot and no advantage when a small square was used as a cue. In central vision (Experiment 2 and Experiment 3b), only the small square significantly improved participants' performance, reducing the critical distance needed to recover target identification. Taken together, the results indicate a behavioral dissociation of orienting and focusing attention in their capability of modulating crowding. In particular, we confirmed that orienting attention can modulate crowding in visual periphery, while we found that focal attention can modulate foveal crowding. |
Aimee Martin; Stefanie I. Becker How feature relationships influence attention and awareness: Evidence from eye movements and EEG Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 44, no. 12, pp. 1865–1883, 2018. @article{Martin2018, Many everyday tasks require selecting relevant objects in the visual field while ignoring irrelevant information. A widely held belief is that attention is tuned to the exact feature value(s) of a sought-after target object (e.g., color, shape). In contrast, subsequent studies have shown that attentional orienting (capture) is often determined by the relative feature(s) that the target has relative to other irrelevant items surrounding (e.g., redder, larger). However, it is unknown whether conscious awareness is also determined by relative features. Alternatively, awareness could be more strongly determined by exact feature values, which seem to determine dwelling on objects. The present study examined eye movements in a color search task with different types of irrelevant distractors to test (a) whether dwelling is more strongly influenced by exact feature matches than relative matches, and (b) which of the processes (capture vs. dwelling) is more important for conscious awareness of the distractor. A second experiment used an electrophysiological marker of attention (N2pc in the electroencephalogram of participants) to test whether the results generalize to covert attention shifts. As expected, the results revealed that the initial capture of attention was strongest for distractors matching the relative color of the target, whereas similarity to the target was the most important determiner for dwelling. Awareness was more strongly determined by the initial capture of attention than dwelling. These results provide important insights into the interplay of attention and awareness and highlight the importance of considering relative, context-dependent features in theories of awareness. |
N. A. Martin-Key; Erich W. Graf; W. J. Adams; G. Fairchild Facial emotion recognition and eye movement behaviour in conduct disorder Journal Article In: Journal of Child Psychology and Psychiatry, vol. 59, no. 3, pp. 247–257, 2018. @article{MartinKey2018, Background: Conduct Disorder (CD) is associated with impairments in facial emotion recognition. However, it is unclear whether such deficits are explained by a failure to attend to emotionally informative face regions, such as the eyes, or by problems in the appraisal of emotional cues. Method: Male and female adolescents with CD and varying levels of callous-unemotional (CU) traits and age- and sex-matched typically developing (TD) controls (aged 13–18) categorised the emotion of dynamic and morphed static faces. Concurrent eye tracking was used to relate categorisation performance to participants' allocation of overt attention. Results: Adolescents with CD were worse at emotion recognition than TD controls, with deficits observed across static and dynamic expressions. In addition, the CD group fixated less on the eyes when viewing fearful and sad expressions. Across all participants, higher levels of CU traits were associated with fear recognition deficits and reduced attention to the eyes of surprised faces. Within the CD group, however, higher CU traits were associated with better fear recognition. Overall, males were worse at recognising emotions than females and displayed a reduced tendency to fixate the eyes. Discussion: Adolescents with CD, and particularly males, showed deficits in emotion recognition and fixated less on the eyes when viewing emotional faces. Individual differences in fixation behaviour predicted modest variations in emotion categorisation. However, group differences in fixation were small and did not explain the much larger group differences in categorisation performance, suggesting that CD-related deficits in emotion recognition were not mediated by abnormal fixation patterns. |
Jun Maruta; Lisa A. Spielman; Umesh Rajashekar; Jamshid Ghajar Association of visual tracking metrics with post-concussion symptomatology Journal Article In: Frontiers in Neurology, vol. 9, pp. 611, 2018. @article{Maruta2018, Attention impairment may provide a cohesive neurobiological explanation for clusters of clinical symptoms that occur after a concussion; therefore, objective quantification of attention is needed. Visually tracking a moving target is an attention-dependent sensorimotor function, and eye movement can be recorded easily and objectively to quantify performance. Our previous work suggested the utility of gaze-target synchronization metrics of a predictive visual tracking task in concussion screening and recovery monitoring. Another objectively quantifiable performance measure frequently suggested for concussion screening is simple visuo-manual reaction time (simple reaction time, SRT). Here, we used visual tracking and SRT tasks to assess changes between pre- and post-concussion performances and explore their relationships to post-concussion symptomatology. Athletes participating in organized competitive sports were recruited. Visual tracking and SRT records were collected from the recruited athlete pool as baseline measures over a four-year period. When athletes experienced a concussion, they were re-assessed within two weeks of their injury. We present the data from a total of 29 concussed athletes. Post-concussion symptom burden was assessed with the Rivermead Post-Concussion Symptoms Questionnaire and subscales of the Brain Injury Screening Questionnaire. Post-concussion changes in visual tracking and SRT performance were examined using a paired t-test. Correlations of changes in visual tracking and SRT performance to symptom burden were examined using Pearson's coefficients. Post-concussion changes in visual tracking performance were not consistent among the athletes. 
However, changes in several visual tracking metrics had moderate to strong correlations to symptom scales (|r| up to 0.68). On the other hand, while post-concussion SRT performance was reduced (p < 0.01), the changes in the performance metrics were not meaningfully correlated to symptomatology (|r| ≤ 0.33). Results suggest that visual tracking performance metrics reflect clinical symptoms when assessed within two weeks of concussion. Evaluation of concussion requires assessments in multiple domains because the clinical profiles are heterogeneous. While most individuals show recovery within a week of injury, others experience prolonged recovery periods. Visual tracking performance metrics may serve as a biomarker of debilitating symptoms of concussion implicating attention as a root cause of such pathologies. |
Anna Marzecová; Antonio Schettino; Andreas Widmann; Iria SanMiguel; Sonja A. Kotz; Erich Schröger Attentional gain is modulated by probabilistic feature expectations in a spatial cueing task: ERP evidence Journal Article In: Scientific Reports, vol. 8, pp. 54, 2018. @article{Marzecova2018, Several theoretical and empirical studies suggest that attention and perceptual expectations influence perception in an interactive manner, whereby attentional gain is enhanced for predicted stimuli. The current study assessed whether attention and perceptual expectations interface when they are fully orthogonal, i.e., each of them relates to different stimulus features. We used a spatial cueing task with block-wise spatial attention cues that directed attention to either left or right visual field, in which Gabor gratings of either predicted (more likely) or unpredicted (less likely) orientation were presented. The lateralised posterior N1pc component was additively influenced by attention and perceptual expectations. Bayesian analysis showed no reliable evidence for the interactive effect of attention and expectations on the N1pc amplitude. However, attention and perceptual expectations interactively influenced the frontally distributed anterior N1 component (N1a). The attention effect (i.e., enhanced N1a amplitude in the attended compared to the unattended condition) was observed only for the gratings of predicted orientation, but not in the unpredicted condition. These findings suggest that attention and perceptual expectations interactively influence visual processing within 200 ms after stimulus onset and such joint influence may lead to enhanced endogenous attentional control in the dorsal fronto-parietal attention network. |
Delphine Massendari; Matteo Lisi; Thérèse Collins; Patrick Cavanagh Memory-guided saccades show effect of perceptual illusion whereas visually-guided saccades do not Journal Article In: Journal of Neurophysiology, vol. 119, pp. 62–72, 2018. @article{Massendari2018, The double-drift stimulus (a drifting Gabor with orthogonal internal motion) generates a large discrepancy between its physical and perceived path. Surprisingly, saccades directed to the double-drift stimulus land along the physical, and not perceived, path (Lisi M, Cavanagh P. Curr Biol 25: 2535–2540, 2015). We asked whether memory-guided saccades exhibited the same dissociation from perception. Participants were asked to keep their gaze centered on a fixation dot while the double-drift stimulus moved back and forth on a linear path in the periphery. The offset of the fixation was the go signal to make a saccade to the target. In the visually guided saccade condition, the Gabor kept moving on its trajectory after the go signal but was removed once the saccade began. In the memory conditions, the Gabor disappeared before or at the same time as the go-signal (0- to 1,000-ms delay) and participants made a saccade to its remembered location. The results showed that visually guided saccades again targeted the physical rather than the perceived location. However, memory saccades, even with 0-ms delay, had landing positions shifted toward the perceived location. Our result shows that memory- and visually guided saccades are based on different spatial information. |
Nicolas Masson; Clément Letesson; Mauro Pesenti Time course of overt attentional shifts in mental arithmetic: Evidence from gaze metrics Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 4, pp. 1009–1019, 2018. @article{Masson2018, Processing numbers induces shifts of spatial attention in probe detection tasks, with small numbers orienting attention to the left and large numbers to the right side of space. This has been interpreted as supporting the concept of a mental number line with number magnitudes ranging from left to right, from small to large numbers. Recently, the investigation of this spatial-numerical link has been extended to mental arithmetic with the hypothesis that solving addition or subtraction problems might induce attentional displacements, rightward or leftward, respectively. At the neurofunctional level, the activations elicited by the solving of additions have been shown to resemble those induced by rightward eye movements. However, the possible behavioural counterpart of these activations has not yet been observed. Here, we investigated overt attentional shifts with a target detection task primed by addition and subtraction problems (2-digit ± 1-digit operands) in participants whose gaze orientation was recorded during the presentation of the problems and while calculating. No evidence of early overt attentional shifts was observed while participants were hearing the first operand, the operator or the second operand, but they shifted their gaze towards the right during the solving step of addition problems. These results show that gaze shifts related to arithmetic problem solving are elicited during the solving procedure and suggest that their functional role is to access, from the first operand, the representation of the result. |
James Mathew; Pierre-Michel Bernier; Frederic R. Danion Asymmetrical relationship between prediction and control during visuomotor adaptation Journal Article In: eNeuro, vol. 5, no. 6, pp. 1–12, 2018. @article{Mathew2018, Current theories suggest that the ability to control the body and to predict its associated sensory consequences is key for skilled motor behavior. It is also suggested that these abilities need both be updated when the mapping between motor commands and sensory consequences is altered. Here we challenge this view by investigating the transfer of adaptation to rotated visual feedback between one task in which human participants had to control a cursor with their hand in order to track a moving target, and another in which they had to predict with their eyes the visual consequences of their hand movement on the cursor. Hand and eye tracking performances were evaluated respectively through cursor-target and gaze-cursor distance. Results reveal a striking dissociation: although prior adaptation of hand tracking greatly facilitates eye tracking, adaptation of eye tracking does not transfer to hand tracking. We conclude that although the update of control is associated with the update of prediction, prediction can be updated independently of control. To account for this pattern of results we propose that task demands mediate the update of prediction and control. Although a joint update of prediction and control seemed mandatory for success in our hand tracking task, the update of control was only facultative for success in our eye tracking task. More generally those results promote the view that prediction and control are mediated by separate neural processes and suggest that people can learn to predict movement consequences without necessarily promoting their ability to control these movements. |
Zhongling Pi; Jianzhong Hong; Weiping Hu Interaction of the originality of peers' ideas and students' openness to experience in predicting creativity in online collaborative groups Journal Article In: British Journal of Educational Technology, pp. 1–14, 2018. @article{Pi2018, There has been growing interest in the possibility that peers' ideas can stimulate students' creativity in group contexts. However, empirical research on this issue is limited. This study tested the hypothesis that peers' ideas and students' openness to experience would predict student creativity and attention to peers' ideas. Undergraduate students (n = 60) completed creative tasks in an online collaborative group in which the ideas of “peers” were pre‐programmed. Mixed ANOVAs showed support for the hypothesis. Specifically, students with higher (but not lower) openness, when exposed to a high (but not low) rate of peers' original ideas, paid greater attention to those ideas and were more creative. The implication for education is that teachers should encourage students to share more of their original ideas and to pay more attention to peers' ideas in online collaborative groups. |
Charisse B. Pickron; Arjun Iyer; Eswen Fava; Lisa S. Scott Learning to individuate: The specificity of labels differentially impacts infant visual attention Journal Article In: Child Development, vol. 89, no. 3, pp. 698–710, 2018. @article{Pickron2018, This study examined differences in visual attention as a function of label learning from 6 to 9 months of age. Before and after 3 months of parent-directed storybook training with computer-generated novel objects, event-related potentials and visual fixations were recorded while infants viewed trained and untrained images (n = 23). Relative to pretraining, a no-training control group (n = 11), and to infants trained with category-level labels (e.g., all labeled “Hitchel”), infants trained with individual-level labels (e.g., “Boris,” “Jamar”) displayed increased visual attention and neural differentiation of objects after training. |
Aleks Pieczykolan; Lynn Huestegge Sources of interference in cross-modal action: Response selection, crosstalk, and general dual-execution costs Journal Article In: Psychological Research, vol. 82, no. 1, pp. 109–120, 2018. @article{Pieczykolan2018, Performing several actions simultaneously usually yields interference, which is commonly explained by referring to theoretical concepts such as crosstalk and structural limitations associated with response selection. While most research focuses on dual-task scenarios (involving two independent tasks), we here study the role of response selection and crosstalk for the control of cross-modal response compounds (saccades and manual responses) triggered by a single stimulus. In two experiments, participants performed single responses and spatially compatible versus incompatible dual-response compounds (crosstalk manipulation) in conditions with or without response selection requirements (i.e., responses either changed randomly between trials or were constantly repeated within a block). The results showed that substantial crosstalk effects were only present when response (compound) selection was required, not when a pre-selected response compound was merely repeated throughout a block of trials. We suggest that cross-response crosstalk operates on the level of response selection (during the activation of response codes), not on the level of response execution (when participants can rely on pre-activated response codes). Furthermore, we observed substantial residual dual-response costs even when neither response incompatibility nor response selection requirements were present. This suggests additional general dual-execution interference that occurs on a late, execution-related processing stage and even for two responses in rather distinct (manual and oculomotor) output modules. Generally, the results emphasize the importance of considering oculomotor interference in theorizing on multiple-action control. |
Alessandro Piras; Milena Raffi; Monica Perazzolo; Salvatore Squatrito Influence of heading perception in the control of posture Journal Article In: Journal of Electromyography and Kinesiology, vol. 39, pp. 89–94, 2018. @article{Piras2018, The optic flow visual input directly influences the postural control. The aim of the present study was to examine the relationship between visually induced heading perception and postural stability, using optic flow stimulation. The dots were accelerated to simulate a heading direction to the left or to the right of the vertical midline. The participants were instructed to indicate the perceived optic flow direction by making a saccade to the simulated heading direction. We simultaneously acquired electromyographic and center of pressure (COP) signals. We analysed the postural sway during three different epochs: (i) the first 500 ms after the stimulus onset, (ii) 500 ms before saccade onset, epoch in which the perception is achieved and, (iii) 500 ms after saccade onset. Participants exhibited a greater postural instability before the saccade, when the perception of heading was achieved, and the sway increased further after the saccade. These results indicate that the conscious representation of the self-motion affects the neural control of posture more than the mere visual motion, producing more instability when visual signals are contrasting with eye movements. It could be that part of these effects is due to the interactions between gaze shift and optic flow. |