EyeLink Cognition Publications
All EyeLink cognition and perception research publications up until 2023 (with some from early 2024) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognition or perception article, please email us!
2019 |
Zhuoting Zhu; Yin Hu; Chimei Liao; Stuart Keel; Ren Huang; Yanping Liu; Mingguang He Visual span and cognitive factors affect Chinese reading speed Journal Article In: Journal of Vision, vol. 19, no. 14, pp. 1–11, 2019. @article{Zhu2019d, Visual span, which is the number of recognizable letters seen without moving the eyes, has been proven to impose a sensory limitation for alphabetic reading speed (Chung, 2011; Chung, Legge, & Cheung, 2004; Lee, Kwon, Legge, & Gefroh, 2010; Legge, Ahn, Klitz, & Luebker, 1997; Legge, Hooven, Klitz, Stephen Mansfield, & Tjan, 2002; D. Yu, Cheung, Legge, & Chung, 2010). However, little is known about the effects of visual span on Chinese reading performance. Of note, Chinese text differs greatly from that of the alphabetic writing system. There are no spaces between words, and readers are forced to utilize their lexical knowledge to segment Chinese characters into meaningful words, thus increasing the relative importance of cognitive/linguistic factors in reading performance. Therefore, the aim of the present study is to explore whether visual span and cognitive/linguistic factors have independent effects on Chinese reading speed. Visual span profiles, cognitive/linguistic factors indicated by word frequency, and Chinese sentence-reading performance were collected from 28 native Chinese-speaking subjects. We found that the visual-span size and cognitive/linguistic factors independently contributed to Chinese sentence-reading speed (all ps < 0.05). We concluded that both the visual-span size and cognitive/linguistic factors represented bottlenecks for Chinese sentence-reading speed. |
Rongjuan Zhu; Xuqun You; Shuoqiu Gan; Jinwei Wang Spatial attention shifts in addition and subtraction arithmetic: Evidence of eye movement Journal Article In: Perception, vol. 48, no. 9, pp. 835–849, 2019. @article{Zhu2019a, Recently, it has been proposed that solving addition and subtraction problems can evoke horizontal shifts of spatial attention. However, prior to this study, it remained unclear whether orienting shifts of spatial attention relied on actual arithmetic processes (i.e., the activated magnitude) or the semantic spatial association of the operator. In this study, spatial–arithmetic associations were explored through three experiments using an eye tracker, which attempted to investigate the mechanism of those associations. Experiment 1 replicated spatial–arithmetic associations in addition and subtraction problems. Experiments 2 and 3 selected zero as the operand to investigate whether these arithmetic problems could induce shifts of spatial attention. Experiment 2 indicated that addition and subtraction problems (zero as the second operand, i.e., 2 + 0) do not induce shifts of spatial attention. Experiment 3 showed that addition and subtraction arithmetic (zero as the first operand, i.e., 0 + 2) do facilitate rightward and leftward eye movement, respectively. This indicates that the operator alone does not induce horizontal eye movement. However, our findings support the idea that solving addition and subtraction problems is associated with horizontal shifts of spatial attention. |
Matteo Toscani; Ezgi I. Yücel; Katja Doerschner Gloss and speed judgments yield different fine tuning of saccadic sampling in dynamic scenes Journal Article In: i-Perception, vol. 10, no. 6, pp. 1–10, 2019. @article{Toscani2019, Image motion contains potential cues about the material properties of objects. In earlier work, we proposed motion cues that could predict whether a moving object would be perceived as shiny or matte. However, whether the visual system uses these cues is still uncertain. Herein, we use the tracking of eye movements as a tool to understand what visual information observers use when engaged in material perception. Observers judged either the gloss or the speed of moving blobby shapes in an eye tracking experiment. Results indicate that during glossiness judgments, participants tend to look at gloss-diagnostic dynamic features more than during speed judgments. This suggests a fine tuning of the visual system to properties of moving stimuli: Task relevant information is actively singled out and processed in a dynamically changing environment. |
Tobiasz Trawiński; Natalie Mestry; Beth Harland; Simon P. Liversedge; Hayward J. Godwin; Nick Donnelly The spectatorship of portraits by naïve beholders Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–18, 2019. @article{Trawinski2019, The spectatorship of portraits by naïve viewers (beholders) was explored in a single experiment. Twenty-five participants rated their liking for 142 portraits painted by Courbet (36 paintings), Fantin-Latour (36 paintings), and Manet (70 paintings) on a 4-point Likert scale. The portraits were classified in terms of focused versus ambiguous nature of sitter gaze and the presence of salient features in the context beyond sitters. Participants rated portraits while having their eye movements recorded. The portraits were split into regions of interest (ROIs) defined by faces, bodies, and context. Participants also completed individual difference measures of attention and task focus. Results showed naïve spectatorship to be subject to attentional capture by faces. Paradoxically, the presence of salient features in the context amplified the attentional capture by faces through increasing participants' liking of portraits. Attentional capture by faces was also influenced by sitter gaze and task focus. Unsurprisingly, the spectatorship of portraits by naïve beholders is dominated by faces, but the extent of this dominance is influenced by exogenous and endogenous attentional factors. |
J. M. Traynor; A. Gough; E. Duku; David I. Shore; G. B. C. Hall Eye tracking effort expenditure and autonomic arousal to social and circumscribed interest stimuli in Autism Spectrum Disorder Journal Article In: Journal of Autism and Developmental Disorders, vol. 49, pp. 1988–2002, 2019. @article{Traynor2019, The social communicative deficits and repetitive behaviours seen in Autism Spectrum Disorder (ASD) may be affected by altered stimulus salience and reward attribution. The present study used eye tracking and a behavioural measure to index effort expenditure, arousal, and attention, during viewing of images depicting social scenes and subject-specific circumscribed interests in a group of 10 adults with ASD (mean age 25.4 years) and 19 typically-developing controls (mean age 20.7 years). Split-plot and one-way repeated measures ANOVAs were used to explore results. A significant difference between the ASD and control group was found in the amount of effort expended to view social and circumscribed images. The ASD group also displayed significant differences in pupillary response to social and circumscribed images, indicative of changes in autonomic arousal. Overall, the results support the social motivation hypothesis in ASD (Chevallier et al., Trends Cogn Sci 16(4):231-239, 2012) and suggest a role for autonomic arousal in the ASD symptom dyad. |
Sébastien Tremblay; Florian Pieper; Adam Sachs; Ridha Joober; Julio Martinez-Trujillo The effects of methylphenidate (Ritalin) on the neurophysiology of the monkey caudal prefrontal cortex Journal Article In: eNeuro, vol. 6, no. 1, pp. 1–17, 2019. @article{Tremblay2019, Methylphenidate (MPH), commonly known as Ritalin, is the most widely prescribed drug worldwide to treat patients with attention deficit disorders. Although MPH is thought to modulate catecholamine neurotransmission in the brain, it remains unclear how these neurochemical effects influence neuronal activity and lead to attentional enhancements. Studies in rodents overwhelmingly point to the lateral prefrontal cortex (LPFC) as a main site of action of MPH. To understand the mechanism of action of MPH in a primate brain, we recorded the responses of neuronal populations using chronic multielectrode arrays implanted in the caudal LPFC of two macaque monkeys while the animals performed an attention task (N = 2811 neuronal recordings). Over different recording sessions (N = 55), we orally administered either various doses of MPH or a placebo to the animals. Behavioral analyses revealed positive effects of MPH on task performance at specific doses. However, analyses of individual neurons' activity, noise correlations, and neuronal ensemble activity using machine learning algorithms revealed no effects of MPH. Our results suggest that the positive behavioral effects of MPH observed in primates (including humans) may not be mediated by changes in the activity of caudal LPFC neurons. MPH may enhance cognitive performance by modulating neuronal activity in other regions of the attentional network in the primate brain. |
Pei-Yi Tsai; Hsiao-Ching She; Sheng-Chang Chen; Li-Yu Huang; Wen-Chi Chou; Jeng-Ren Duann; Tzyy-Ping Jung Eye fixation-related fronto-parietal neural network correlates of memory retrieval Journal Article In: International Journal of Psychophysiology, vol. 138, pp. 57–70, 2019. @article{Tsai2019, Eye movements are considered to be informative with regard to the underlying cognitive processes of human beings. Previous studies have reported that eye movements are associated with which scientific concepts are retrieved correctly. Moreover, other studies have also suggested that eye movements involve the cooperative activity of the human brain's fronto-parietal circuits. Less research has been conducted to investigate whether fronto-parietal EEG oscillations are associated with the retrieval processing of scientific concepts. Our findings in this study demonstrated that the fronto-parietal network is indeed crucial for successful memory retrieval. In short, significantly lower theta augmentation in the frontal midline and lower alpha suppression in the right parietal region were observed at the 5th eye fixation for physics concepts that were correctly retrieved than for those that were incorrectly retrieved. Moreover, the visual cortex in the occipital lobe exhibits a significantly greater theta augmentation followed by an alpha suppression following each eye fixation, while a right fronto-parietal asymmetry was also found for the successful retrieval of presentations of physics concepts. In particular, the study results showed that eye fixation-related frontal midline theta power and right parietal alpha power at the 5th eye fixation have the greatest predictive power regarding the correctness of the retrieval of physics concepts. |
Pei-Yi Tsai; Ting-Ting Yang; Hsiao-Ching She; Sheng-Chang Chen Leveraging college students' scientific evidence-based reasoning performance with eye-tracking-supported metacognition Journal Article In: Journal of Science Education and Technology, vol. 28, no. 6, pp. 613–627, 2019. @article{Tsai2019a, This study specifically focuses on examining whether the eye-tracking-supported metacognition would benefit science majors' and nonscience majors' scientific evidence-based reasoning performance. Thirty-nine science majors and forty-one nonscience majors were recruited to participate in an online scientific evidence-based reasoning task. Data regarding the students' online learning process and eye movement behaviors were simultaneously collected. The results indicated that the science majors not only significantly outperformed the nonscience majors in terms of reasoning performance but also allocated significantly more eye movements the first time they processed the scientific evidence-based reasoning task. Immediately after the task, the eye-tracking-supported metacognition provided each student with individualized feedback regarding their eye movement behaviors, such as their eye fixation sequence, durations, and locations. With such immediate feedback, the students were provided an opportunity to engage in self-monitoring, evaluating, and calibrating their approaches in order to revise their final answers. After the application of this eye-tracking-supported metacognition, both the science majors and the nonscience majors made significant improvements in their scientific evidence-based reasoning performance. However, no statistically significant differences in the reasoning performance or visual attention of the science majors and nonscience majors were found. 
This study demonstrated that the use of eye-tracking-supported metacognition not only maximized the performance of both the science majors and the nonscience majors but also bridged the gap in performance between the two groups. |
Antigoni Tsiami; Petros Koutras; Athanasios Katsamanis; Argiro Vatakis; Petros Maragos A behaviorally inspired fusion approach for computational audiovisual saliency modeling Journal Article In: Signal Processing: Image Communication, vol. 76, pp. 186–200, 2019. @article{Tsiami2019, Human attention is highly influenced by multi-modal combinations of perceived sensory information and especially audiovisual information. Although systematic behavioral experiments have provided evidence that human attention is multi-modal, most bottom-up computational attention models, namely saliency models for fixation prediction, focus on visual information, largely ignoring auditory input. In this work, we aim to bridge the gap between findings from neuroscience concerning audiovisual attention and computational attention modeling, by creating a 2-D bottom-up audiovisual saliency model. We experiment with various fusion schemes for integrating state-of-the-art auditory and visual saliency models in a single audiovisual attention/saliency model based on behavioral findings, which we validate at two experimental levels: (1) using results from behavioral experiments aiming to reproduce the results in a mostly qualitative manner and to ensure that our modeling is in line with behavioral findings, and (2) using 6 different databases with audiovisual human eye-tracking data. For this last purpose, we have also collected eye-tracking data for two databases: ETMD, a movie database that contains highly edited videos (movie clips), and SumMe, a database that contains unstructured and unedited user videos. Experimental results indicate that our proposed audiovisual fusion schemes in most cases improve performance compared to visual-only models, without any prior knowledge of the video/audio content. Also, they can be generalized and applied to any auditory saliency model and any visual spatio-temporal saliency model. |
Naime Tugac; David A. Gonzalez; Kimihiro Noguchi; Ewa Niechwiej-Szwedo In: Experimental Eye Research, vol. 183, pp. 76–83, 2019. @article{Tugac2019, Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is possible that other sensory inputs could be used to obtain reliable depth information when binocular vision is not available. However, it is currently unknown whether depth information from another modality improves target localization in depth during action execution. Therefore, the goal of this study was to assess whether somatosensory input improves target localization during the performance of a precision placement task. Visually normal young adults (n = 15) performed a bead threading task during binocular and monocular viewing in two experimental conditions where needle location was specified by 1) vision only, or 2) vision and somatosensory input, which was provided by the non-dominant limb. Performance on the task was assessed using spatial and temporal kinematic measures. In accordance with the hypothesis, results showed that the interval spent placing the bead on the needle was significantly shorter during monocular viewing when somatosensory input was available in comparison to a vision only condition. In contrast, results showed no evidence to support that somatosensory input about the needle location affects trajectory control. These findings demonstrate that the central nervous system relies predominately on visual input during reach execution, however, somatosensory input can be used to facilitate the performance of the precision placement task. |
Massimo Turatto; Francesca Bonetti; Cinzia Chiandetti; David Pascucci Context-specific distractors rejection: Contextual cues control long-term habituation of attentional capture by abrupt onsets Journal Article In: Visual Cognition, vol. 27, no. 3-4, pp. 291–304, 2019. @article{Turatto2019, The ability to counteract visual distraction is fundamental for an efficient interaction with the environment, particularly when a salient but irrelevant stimulation repeatedly impinges on our visual system. Previous studies have shown that such unwanted attentional capture is subject to habituation, a widespread form of behavioural plasticity that allows rejecting the unwanted stimulation. Although habituation is generally considered to be a non-associative form of learning, here we directly tested the possibility that habituation of attentional capture triggered by a salient onset is context specific. In two experiments we showed that distractor filtering achieved via habituation was specific for the visual context (naturalistic or geometric) in which the distractor was presented. When the same distractor presented during the training phase appeared in a new context in the test phase, a recovery of the previously habituated capture was observed. By contrast, no recovery of capture was found when the background did not change. Habituation mechanisms provide a straightforward explanation for our findings, which show that distractor filtering is achieved by taking into account the spatial context in which the distracting stimulus is encountered. |
E. Sabrina Twilhaar; Jorrit F. Kieviet; Catharina E. Bergwerff; Martijn J. J. Finken; Ruurd M. Elburg; Jaap Oosterlaan Social adjustment in adolescents born very preterm: Evidence for a cognitive basis of social problems Journal Article In: Journal of Pediatrics, vol. 213, pp. 66–73, 2019. @article{Twilhaar2019, Objective: To increase the understanding of social adjustment and autism spectrum disorder symptoms in adolescents born very preterm by studying the role of emotion recognition and cognitive control processes in the relation between very preterm birth and social adjustment. Study design: A Dutch cohort of 61 very preterm and 61 full-term adolescents aged 13 years participated. Social adjustment was rated by parents, teachers, and adolescents and autism spectrum disorder symptoms by parents. Emotion recognition was assessed with a computerized task including pictures of child faces expressing anger, fear, sadness, and happiness with varying intensity. Cognitive control was assessed using a visuospatial span, antisaccade, and sustained attention to response task. Performance measures derived from these tasks served as indicators of a latent cognitive control construct, which was tested using confirmatory factor analysis. Mediation analyses were conducted with emotion recognition and cognitive control as mediators of the relation between very preterm birth and social problems. Results: Very preterm adolescents showed more parent- and teacher-rated social problems and more autism spectrum disorder symptomatology than controls. No difference in self-reported social problems was observed. Moreover, very preterm adolescents showed deficits in emotion recognition and cognitive control compared with full-term adolescents. The relation between very preterm birth and parent-rated social problems was significantly mediated by cognitive control but not by emotion recognition. 
Very preterm birth was associated with a 0.67-SD increase in parent-rated social problems through its negative effect on cognitive control. Conclusions: The present findings provide strong evidence for a central role of impaired cognitive control in the social problems of adolescents born very preterm. |
Israel Vaca-Palomares; Donald C. Brien; Brian C. Coe; Adriana Ochoa-Morales; Leticia Martínez-Ruano; Douglas P. Munoz; Juan Fernandez-Ruiz Implicit learning impairment identified via predictive saccades in Huntington's disease correlates with extended cortico-striatal atrophy Journal Article In: Cortex, vol. 121, pp. 89–103, 2019. @article{VacaPalomares2019, The ability to anticipate events and execute motor commands prior to a sensory event is an essential capability for humans' everyday life. This implicitly learned anticipatory behavior depends on the past performance of repeated sensorimotor interactions timed with external cues. This kind of predictive behavior has been shown to be compromised in neurological disorders such as Huntington disease (HD), in which neural atrophy includes key cortical and basal ganglia regions. To investigate the neural basis of the anticipatory behavioral deficits in HD we used a predictive-saccade paradigm that requires predictive control to generate saccades in a metronomic temporal pattern. This is ideal because the integrity of the oculomotor network that includes the striatum and prefrontal, parietal, occipital and temporal cortices can be analyzed using structural MRI. Our results showed that the HD patients had severe predictive saccade deficits (i.e., an inability to reduce saccade reaction time in the predictive condition), which are accentuated in patients with more severe motor deterioration. Structural imaging analyses revealed that these anticipatory deficits correlated with grey-matter atrophy in frontal, parietal-occipital and striatal regions. These findings indicate that the predictive saccade control deficits in HD are related to an extended cortico-striatal atrophy. This suggests that eye movement measurement could be a reliable marker of the progression of cognitive deficits in HD. |
Avinash R. Vaidya; Lesley K. Fellows Ventromedial frontal lobe damage affects interpretation, not exploration, of emotional facial expressions Journal Article In: Cortex, vol. 113, pp. 312–328, 2019. @article{Vaidya2019, Recognizing and distinguishing the emotional states of those around us is crucial for adaptive social behavior. Previous work has shown that damage to the ventromedial frontal lobe (VMF) impairs recognition of subtle emotional facial expressions and affects fixation patterns to face stimuli. However, whether this relates to deficits in acquiring or interpreting facial expression information remains unclear. We tested 37 patients with frontal lobe damage, including 17 subjects with VMF lesions, in a series of emotion recognition tasks with different gaze manipulations. Subjects were asked to rate neutral, subtle and extreme emotional expressions while freely examining faces, while instructed to look only at the eyes, and in a gaze-contingent condition that required top-down direction of eye movements to reveal the stimulus. People with VMF damage were worse at detecting subtle disgust during free viewing and confused extreme emotional expressions more than healthy controls. However, fixation patterns did not differ systematically between groups during free or gaze-contingent viewing conditions. Moreover, instruction to fixate only the eyes did not improve the performance of VMF damaged subjects. These data argue that VMF is not necessary for normal fixations to emotional face stimuli, and that impairments in emotion recognition after VMF damage do not stem from impaired information gathering, as indexed by patterns of fixation. |
Raphael Vallat; David Meunier; Alain Nicolas; Perrine Ruby Hard to wake up? The cerebral correlates of sleep inertia assessed using combined behavioral, EEG and fMRI measures Journal Article In: NeuroImage, vol. 184, pp. 266–278, 2019. @article{Vallat2019, The first minutes following awakening from sleep are typically marked by reduced vigilance, increased sleepiness and impaired performance, a state referred to as sleep inertia. Although the behavioral aspects of sleep inertia are well documented, its cerebral correlates remain poorly understood. The present study aimed at filling this gap by measuring in 34 participants the changes in behavioral performance (descending subtraction task, DST), EEG spectral power, and resting-state fMRI functional connectivity across three time points: before an early-afternoon 45-min nap, 5 min after awakening from the nap and 25 min after awakening. Our results showed impaired performance at the DST at awakening and an intrusion of sleep-specific features (spectral power and functional connectivity) into wakefulness brain activity, the intensity of which was dependent on the prior sleep duration and depth for the functional connectivity (14 participants awakened from N2 sleep, 20 from N3 sleep). Awakening in N3 (deep) sleep induced the most robust changes and was characterized by a global loss of brain functional segregation between task-positive (dorsal attention, salience, sensorimotor) and task-negative (default mode) networks. Significant correlations were observed notably between the EEG delta power and the functional connectivity between the default and dorsal attention networks, as well as between the percentage of mistakes at the DST and the default network functional connectivity. 
These results highlight (1) significant correlations between EEG and fMRI functional connectivity measures, (2) significant correlations between the behavioral aspect of sleep inertia and measures of the cerebral functioning at awakening (both EEG and fMRI), and (3) the important difference in the cerebral underpinnings of sleep inertia at awakening from N2 and N3 sleep. |
Thomas D. W. Wilcockson; Diako Mardanbegi; Peter Sawyer; Hans Gellersen; Baiqiang Xia; Trevor J. Crawford Oculomotor and inhibitory control in dyslexia Journal Article In: Frontiers in Systems Neuroscience, vol. 12, pp. 66, 2019. @article{Wilcockson2019a, Previous research has suggested that people with dyslexia may have an impairment of inhibitory control. The oculomotor system is vulnerable to interference at various levels of the system, from high level cognitive control to peripheral neural pathways. Therefore, in this work we examined two forms of oculomotor inhibition and two forms of oculomotor interference at high and low levels of the control system. This study employed a prosaccade, antisaccade, and a recent distractor eye movement task (akin to spatial negative priming) in order to explore high level cognitive control and the inhibition of a competing distractor. To explore low-level control we examined the frequency of microsaccades and post-saccade oscillations. The findings demonstrated that dyslexics have an impairment of volitional inhibitory control, reflected in the antisaccade task. In contrast, inhibitory control at the location of a competing distractor was equivalent in the dyslexic and non-dyslexic groups. There was no difference in the frequency of microsaccades between the two groups. However, the dyslexic group generated larger microsaccades prior to the target onset in the prosaccade and the antisaccade tasks. The groups did not differ in the frequency or in the morphology of the post-saccade oscillations. These findings reveal that the word reading and attentional difficulties of dyslexic readers cannot be attributed to an impairment in the inhibition of a visual distractor or interference from low-level oculomotor instability. We propose that the inhibitory impairment in dyslexia occurs at a higher cognitive level, perhaps in relation to the process of attentional disengagement. |
Thomas D. W. Wilcockson; Emmanuel M. Pothos; Andrew C. Parrott Substance usage intention does not affect attentional bias: Implications from Ecstasy/MDMA users and alcohol drinkers Journal Article In: Addictive Behaviors, vol. 88, pp. 175–181, 2019. @article{Wilcockson2019, Background: An attentional bias towards substance-related stimuli has been demonstrated with alcohol drinkers and many other types of substance user. There is evidence to suggest that the strength of an attentional bias may vary as a result of context (or use intention), especially within Ecstasy/MDMA users. Objective: Our aim was to empirically investigate attentional biases by observing the effect that use intention has in recreational MDMA users and compare the findings with those of alcohol users. Method: Regular alcohol drinkers were compared with MDMA users. Performance was assessed for each group separately using two versions of an eye-tracking attentional bias task with pairs of matched neutral, and alcohol or MDMA-related visual stimuli. Dwell time was recorded for alcohol or MDMA. Participants were tested twice, when intending and not intending to use MDMA or alcohol. Note, participants in the alcohol group did not complete any tasks which involved MDMA-related stimuli and vice versa. Results: Significant attentional biases were found with both MDMA and alcohol users for respective substance-related stimuli, but not control stimuli. Critically, use intention did not affect attentional biases. Attentional biases were demonstrated with both MDMA users and alcohol drinkers when usage was and was not intended. Conclusions: These findings demonstrate the robust nature of attentional biases i.e. once an attentional bias has developed, it is not readily affected by intention. |
Konstantin F. Willeke; Xiaoguang Tian; Antimo Buonocore; Joachim Bellet; Araceli Ramirez-Cardenas; Ziad M. Hafed Memory-guided microsaccades Journal Article In: Nature Communications, vol. 10, pp. 3710, 2019. @article{Willeke2019, Despite strong evidence to the contrary in the literature, microsaccades are overwhelmingly described as involuntary eye movements. Here we show in both human subjects and monkeys that individual microsaccades of any direction can easily be triggered: (1) on demand, based on an arbitrary instruction, (2) without any special training, (3) without visual guidance by a stimulus, and (4) in a spatially and temporally accurate manner. Subjects voluntarily generated instructed “memory-guided” microsaccades readily, and similarly to how they made normal visually-guided ones. In two monkeys, we also observed midbrain superior colliculus neurons that exhibit movement-related activity bursts exclusively for memory-guided microsaccades, but not for similarly-sized visually-guided movements. Our results demonstrate behavioral and neural evidence for voluntary control over individual microsaccades, supporting recently discovered functional contributions of individual microsaccade generation to visual performance alterations and covert visual selection, as well as observations that microsaccades optimize eye position during high acuity visually-guided behavior. |
Chad C. Williams; Mitchel Kappen; Cameron D. Hassall; Bruce Wright; Olave E. Krigolson Thinking theta and alpha: Mechanisms of intuitive and analytical reasoning Journal Article In: NeuroImage, vol. 189, pp. 574–580, 2019. @article{Williams2019, Humans have a unique ability to engage in different modes of thinking. Intuitive thinking (coined System 1, see Kahneman, 2011) is fast, automatic, and effortless whereas analytical thinking (coined System 2) is slow, contemplative, and effortful. We extend seminal pupillometry research examining these modes of thinking by using electroencephalography (EEG) to decipher their respective underlying neural mechanisms. We demonstrate that System 1 thinking is characterized by an increase in parietal alpha EEG power reflecting automatic access to long-term memory and a release of attentional resources whereas System 2 thinking is characterized by an increase in frontal theta EEG power indicative of the engagement of cognitive control and working memory processes. Consider our results in terms of an example: a child may need cognitive control and working memory when contemplating a mathematics problem yet an adult can drive a car with little to no attention by drawing on easily accessed memories. Importantly, the unravelling of intuitive and analytical thinking mechanisms and their neural signatures will provide insight as to how different modes of thinking drive our everyday lives. |
Elin H. Williams; Fil Cristino; Emily S. Cross Human body motion captures visual attention and elicits pupillary dilation Journal Article In: Cognition, vol. 193, pp. 104029, 2019. @article{Williams2019a, The social motivation theory proposes that individuals naturally orient their attention to the social world. Research has documented the rewarding value of social stimuli, such as biological motion, to typically developed individuals. Here, we used complementary eye tracking measures to investigate how social motion cues affect attention and arousal. Specifically, we examined whether viewing the human body moving naturally versus mechanically leads to greater attentional engagement and changes in autonomic arousal (as assessed by pupil size measures). Participants completed an attentional disengagement task in two independent experiments, while pupillary responses were recorded. We found that natural, human-like motion produced greater increases in attention and arousal than mechanical motion, whether the moving agent was human or not. These findings contribute an important piece to our understanding of social motivation by demonstrating that human motion is a key social stimulus that engages visual attention and induces autonomic arousal in the viewer. |
Michael T. Willoughby; Benjamin Piper; Dunston Kwayumba; Megan McCune Measuring executive function skills in young children in Kenya Journal Article In: Child Neuropsychology, vol. 25, no. 4, pp. 425–444, 2019. @article{Willoughby2019, Interest in measuring executive function skills in young children in low- and middle-income country contexts has been stymied by the lack of assessments that are both easy to deploy and scalable. This study reports on an initial effort to develop a tablet-based battery of executive function tasks, which were designed and extensively studied in the United States, for use in Kenya. Participants were 193 children, aged 3–6 years old, who attended early childhood development and education centers. The rates of individual task completion were high (65–100%), and 85% of children completed three or more tasks. Assessors indicated that 90% of all task administrations were of acceptable quality. An executive function composite score was approximately normally distributed, despite higher-than-expected floor and ceiling effects on inhibitory control tasks. Children's simple reaction time (β = –0.20 |
Matthew B. Winn; Alan Kan; Ruth Y. Litovsky Temporal dynamics and uncertainty in binaural hearing revealed by anticipatory eye movements Journal Article In: The Journal of the Acoustical Society of America, vol. 145, no. 2, pp. 676–691, 2019. @article{Winn2019, Accurate perception of binaural cues is essential for left-right sound localization. Much literature focuses on threshold measures of perceptual acuity and accuracy. This study focused on suprathreshold perception using an anticipatory eye movement (AEM) paradigm designed to capture subtle aspects of perception that might not emerge in behavioral-motor responses, such as the accumulation of certainty, and rapid revisions in decision-making. Participants heard interaural timing differences (ITDs) or interaural level differences in correlated or uncorrelated narrowband noises, respectively. A cartoon ball moved behind an occluder and then emerged from the left or right side, consistent with the binaural cue. Participants anticipated the correct answer (before it appeared) by looking where the ball would emerge. Results showed quicker and more steadfast gaze fixations for stimuli with larger cue magnitudes. More difficult stimuli elicited a wider distribution of saccade times and greater number of corrective saccades before final judgment, implying perceptual uncertainty or competition. Cue levels above threshold elicited some wrong-way saccades that were quickly corrected. Saccades to ITDs were earlier and more reliable for low-frequency noises. The AEM paradigm reveals the time course of uncertainty and changes in perceptual decision-making for supra-threshold binaural stimuli even when behavioral responses are consistently correct. |
Samantha Withnell; Christopher R. Sears; Kristin M. Ranson In: Journal of Experimental Psychopathology, vol. 10, no. 2, pp. 1–16, 2019. @article{Withnell2019, Understanding attentional biases associated with body dissatisfaction can aid in devising and refining programs to reduce body dissatisfaction. This study compared attention to images of women's bodies before and after a body satisfaction or body dissatisfaction priming task. Attention was assessed using eye-gaze tracking, by measuring participants' fixations to images of “thin” models, “fat” models, and images of average women over an 8-s presentation. Women with high (n = 65) and low (n = 43) levels of trait body dissatisfaction, as measured by the Body Shape Questionnaire, were randomly assigned to a body satisfaction or body dissatisfaction priming task. Results indicated that both priming tasks were effective at modifying participants' state body satisfaction. Women with high body dissatisfaction exhibited an attentional bias to thin and fat model images prior to the priming procedure, replicating previous findings. Contrary to predictions, body dissatisfaction priming increased attention to body images for women with both high and low body dissatisfaction, whereas body satisfaction priming had no effect on attention for either group. These findings show that women with high and low body dissatisfaction are vulnerable to the effects of body dissatisfaction priming. |
Christian Wolf; Alexander C. Schütz Choice-induced inter-trial inhibition is modulated by idiosyncratic choice-consistency Journal Article In: PLoS ONE, vol. 14, no. 12, pp. e0226982, 2019. @article{Wolf2019, Humans constantly decide among multiple action plans. Carrying out one action usually implies that other plans are suppressed. Here we make use of inter-trial effects to determine whether suppression of non-chosen action plans is due to proactively preparing for upcoming decisions or due to retroactive influences from previous decisions. Participants received rewards for timely and accurate saccades to targets appearing left or right from fixation. Each block interleaved trials with one (single-trial) or two targets (choice-trial). Whereas single-trial rewards were always identical, rewards for the two targets in choice-trials could either be identical (unbiased) or differ (biased) within one block. We analyzed single-trial latencies as a function of idiosyncratic choice-consistency or reward-bias, the previous trial type and whether the same or the other target was selected in the preceding trial. After choice-trials, single-trial responses to the previously non-chosen target were delayed. For biased choices, inter-trial effects were strongest when choices were followed by a single-trial to the non-chosen target. In the unbiased condition, inter-trial effects increased with increasing individual consistency of choice behavior. These findings suggest that the suppression of alternative action plans is not coupled to target selection and motor execution but instead depends on top-down signals like the overall preference of one target over another. |
Luca Wollenberg; Heiner Deubel; Martin Szinte Investigating the deployment of visual attention before accurate and averaging saccades via eye tracking and assessment of visual sensitivity Journal Article In: Journal of Visualized Experiments, no. 145, pp. 1–9, 2019. @article{Wollenberg2019, This experimental protocol was designed to investigate whether visual attention is obligatorily deployed at the endpoint of saccades. To this end, we recorded the eye position of human participants engaged in a saccade task via eye tracking and assessed visual orientation discrimination performance at various locations during saccade preparation. Importantly, instead of using a single saccade target paradigm for which the saccade endpoint typically coincides roughly with the target, this protocol comprised the presentation of two nearby saccade targets, leading to a distinct spatial dissociation between target locations and saccade endpoint on a substantial number of trials. The paradigm allowed us to compare presaccadic visual discrimination performance at the endpoint of accurate saccades (landing at one of the saccade targets) and of averaging saccades (landing at an intermediate location in between the two targets). We observed a selective enhancement of visual sensitivity at the endpoint of accurate saccades but not at the endpoint of averaging saccades. Rather, before the execution of averaging saccades, visual sensitivity was equally enhanced at both targets, suggesting that saccade averaging follows from unresolved attentional selection among the saccade targets. These results argue against a mandatory coupling between visual attention and saccade programming based on a direct measure of presaccadic visual sensitivity rather than saccadic reaction times, which have been used in other protocols to draw similar conclusions. 
While our protocol provides a useful framework to investigate the relationship between visual attention and saccadic eye movements at the behavioral level, it can also be combined with electrophysiological measures to extend insights at the neuronal level. |
Bo Yeong Won; Mary Kosoyan; Joy J. Geng Evidence for second-order singleton suppression based on probabilistic expectations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 1, pp. 125–138, 2019. @article{Won2019, Decades of research in attention have shown that salient distractors (e.g., a color singleton) tend to capture attention. However, in most studies, singleton distractors are just as likely to be present as absent. We therefore have little knowledge of how probabilistic expectations of the salient distractor's occurrence and features affect suppression. In three experiments, we explored this question by manipulating the frequency of a singleton distractor and the variability of its color within a search display. We found that increased expectations regarding the occurrence of the singleton distractor eliminated the singleton response time cost and reduced the number of first saccades to the singleton. In contrast, expectations regarding variability in the singleton color did not affect singleton capture. This was surprising and suggests the ability to suppress second-order salience over and above that of first-order features. We next inserted a probe display, containing a to-be-reported letter inside each shape, between search trials to measure whether attention went to multiple objects. The letter in the singleton location was reported less often in the high-frequency condition, suggesting proactive suppression of the expected singleton. Additionally, we found that trial-to-trial repetitions of a singleton (irrespective of its color and location) facilitated performance (i.e., singleton repetition priming), but repetitions of its specific color or location did not. Together our findings demonstrate that attentional capture by a color singleton distractor is attenuated by probabilistic expectations of its occurrence, but not of its color and location. |
Yuan-hao Wu; Lisa A. Velenosi; Pia Schröder; Simon Ludwig; Felix Blankenburg Decoding vibrotactile choice independent of stimulus order and saccade selection during sequential comparisons Journal Article In: Human Brain Mapping, vol. 40, no. 6, pp. 1898–1907, 2019. @article{Wu2019, Decision-making in the somatosensory domain has been intensively studied using vibrotactile frequency discrimination tasks. Results from human and monkey electrophysiological studies from this line of research suggest that perceptual choices are encoded within a sensorimotor network. These findings, however, rely on experimental settings in which perceptual choices are inextricably linked to sensory and motor components of the task. Here, we devised a novel version of the vibrotactile frequency discrimination task with saccade responses which has the crucial advantage of decoupling perceptual choices from sensory and motor processes. We recorded human fMRI data from 32 participants while they performed the task. Using a whole-brain searchlight multivariate classification technique, we identify the left lateral prefrontal cortex and the oculomotor system, including the bilateral frontal eye fields (FEF) and intraparietal sulci, as representing vibrotactile choices. Moreover, we show that the decoding accuracy of choice information in the right FEF correlates with behavioral performance. Not only are these findings in remarkable agreement with previous work, they also provide novel fMRI evidence for choice coding in human oculomotor regions, which is not limited to saccadic decisions, but pertains to contexts where choices are made in a more abstract form. |
Jiang Yushi Research on the best visual search effect of logo elements in internet advertising layout Journal Article In: Journal of Contemporary Marketing Science, vol. 2, no. 1, pp. 23–33, 2019. @article{Yushi2019, Purpose: The purpose of this paper is to investigate the optimal visual search placement of logo elements in online advertising layouts. Holding advertisement size, background color, and content complexity constant, a single-factor experimental design was used, with the eight matching arrangements of logo and commodity-picture elements as the independent variable. The results show that when the picture element is fixed in the center of the advertisement, the logo element should be placed in a position parallel to the picture element (middle left and upper left); placing the logo element at the bottom of the picture element, especially at the bottom left, should be avoided. Designers can determine the best online advertising layout based on the visual search effect of the logo element and the actual marketing purpose. Design/methodology/approach: The experiment used a repeated-measures, single-factor design. Based on the criteria of different commodity types and the eight matching arrangements, 20 advertisements were randomly selected from 50 original advertisements as experimental stimuli, as shown in Section 2.3. Processing each advertisement with the eight matching arrangements yielded a total of 20×8=160 experimental stimuli. To minimize memory effects from repeated appearances of the same product, all pictures were presented in random order. In addition, to prevent participants from guessing the purpose of the experiment, 80 filler online advertisements were added.
Therefore, each participant viewed 160+80=240 stimuli. Findings: On one hand, when the image element is fixed in an advertisement, the advertiser should first try to place the logo element in the middle-right position parallel to the picture element, because the commodity logo in this arrangement receives the longest average fixation time and the most attention from consumers. Danaher and Mullarkey (2003) pointed out that as consumers' fixation time on an online advertisement increases, memory for the advertisement improves accordingly. Second, the logo element can be placed to the left or upper left of the picture element. In contrast, advertisers should avoid placing the logo element at the bottom of the picture element (lower left and lower right), especially at the lower left, because in this area the logo attracts less attention, resulting in the shortest duration of consumer attention, less than a quarter of consumers' total attention. This conclusion is consistent with related research results. |
Andrew D. Zaharia; Robbe L. T. Goris; J. Anthony Movshon; Eero P. Simoncelli Compound stimuli reveal the structure of visual motion selectivity in macaque MT neurons Journal Article In: eNeuro, vol. 6, no. 6, pp. 1–19, 2019. @article{Zaharia2019, Motion selectivity in primary visual cortex (V1) is approximately separable in orientation, spatial frequency, and temporal frequency (“frequency-separable”). Models for area MT neurons posit that their selectivity arises by combining direction-selective V1 afferents whose tuning is organized around a tilted plane in the frequency domain, specifying a particular direction and speed (“velocity-separable”). This construction explains “pattern direction-selective” MT neurons, which are velocity-selective but relatively invariant to spatial structure, including spatial frequency, texture and shape. We designed a set of experiments to distinguish frequency-separable and velocity-separable models and executed them with single-unit recordings in macaque V1 and MT. Surprisingly, when tested with single drifting gratings, most MT neurons' responses are fit equally well by models with either form of separability. However, responses to plaids (sums of two moving gratings) tend to be better described as velocity-separable, especially for pattern neurons. We conclude that direction selectivity in MT is primarily computed by summing V1 afferents, but pattern-invariant velocity tuning for complex stimuli may arise from local, recurrent interactions. |
Safa R. Zaki; Isabella L. Salmi Sequence as context in category learning: An eyetracking study Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 11, pp. 1942–1954, 2019. @article{Zaki2019, In the current research, we tested the idea that the proximity of contrasting categories in a learning sequence would determine the features to which participants attend in a categorization task. For the first experiment, we designed a 4-category structure in which pairs of categories could be perfectly distinguished using 1 feature. Two of the categories were paired together in the first part of the learning phase, followed by the other 2 categories in the second part of this phase. In a transfer test in which all 4 categories were shown, participants attended more to the features that differentiated the paired categories than to the other, equally diagnostic features. In the second experiment, we extended this finding to a task that involved all 4 categories but in which pairs of categories were more likely to be interleaved. Once again, participants were more likely to pay attention to the dimensions that separated the 2 categories in proximity in the sequence. These findings suggest that the local learning context influences the representation of a category. |
Andrea M. Zawoyski; Scott P. Ardoin Using eye-tracking technology to examine the impact of question format on reading behavior in elementary students Journal Article In: School Psychology Review, vol. 48, no. 4, pp. 320–332, 2019. @article{Zawoyski2019, Reading comprehension assessments often include multiple-choice (MC) questions, but some researchers doubt their validity in measuring comprehension. Consequently, new assessments may include more short-answer (SA) questions. The current study contributes to the research comparing MC and SA questions by evaluating the effects of anticipated question format on elementary students' reading behavior. Third- and fourth-grade participants were divided into the MC (n = 43) or SA condition (n = 44) and expected to answer questions consistent with their group assignment. Eye movements (EMs) were analyzed across the passage and on areas significant to its meaning. Correlational analyses between EMs and reading measures were conducted. Findings support modification of question format in reading assessments. Implications for school psychologists, teachers, and EM researchers are addressed. |
Sandra A. Zerkle; Jennifer E. Arnold Does planning explain why predictability affects reference production? Journal Article In: Dialogue and Discourse, vol. 10, no. 2, pp. 34–55, 2019. @article{Zerkle2019, How does thematic role predictability affect reference production? This study tests a planning facilitation hypothesis: that the predictability effect on reference form can be explained in terms of the time course of utterance planning. In a discourse production task, participants viewed two sequential event pictures, listened to a description of the first picture (depicting a transfer event between two characters), and then provided a description of the second picture (continuing with one thematic role character, either goal or source). We replicated previous findings that goal continuations lead to more reduced forms of reference and shorter latency to begin speaking than source continuations. Additionally, we tracked speakers' eye movements in two periods of utterance planning, early vs. late. We found that 1) early planning supports the use of reduced forms but is not affected by thematic role; 2) thematic role only affects late planning; and 3) in contrast with our hypothesis, planning does not account for predictability effects on reduced forms. We then speculate that discourse connectedness drives the thematic role predictability effect on reference form choice. |
Felicia Zhang; Lauren L. Emberson Opposing timing constraints severely limit the use of pupillometry to investigate visual statistical learning Journal Article In: Frontiers in Psychology, vol. 10, pp. 1792, 2019. @article{Zhang2019c, The majority of visual statistical learning (VSL) research uses only offline measures, collected after the familiarization phase (i.e., learning) has occurred. Offline measures have revealed a lot about the extent of statistical learning (SL), but less is known about the learning mechanisms that support VSL. Studies have shown that prediction can be a potential learning mechanism for VSL, but it is difficult to examine the role of prediction in VSL using offline measures alone. Pupil diameter is a promising online measure to index prediction in VSL because it can be collected during learning, requires no overt action or task, and can be used in a wide range of populations (e.g., infants and adults). Furthermore, pupil diameter has already been used to investigate processes that are part of prediction, such as prediction error and updating. While the properties of pupil diameter have the potential to powerfully expand studies of VSL, through a series of three experiments we find that the two are not compatible with each other. Our results revealed that pupil diameter, used to index prediction, is not related to offline measures of learning. We also found that pupil differences that appear to be a result of prediction are actually a result of where we chose to baseline. Ultimately, we conclude that the fast-paced nature of VSL paradigms makes them incompatible with the slow nature of pupil change. Therefore, our findings suggest pupillometry should not be used to investigate learning mechanisms in fast-paced VSL tasks. |
Jinxiao Zhang; Antoni B. Chan; Esther Y. Y. Lau; Janet H. Hsiao Individuals with insomnia misrecognize angry faces as fearful faces while missing the eyes: An eye-tracking study Journal Article In: Sleep, vol. 42, no. 2, pp. zsy220, 2019. @article{Zhang2019d, Individuals with insomnia have been found to have disturbed perception of facial expressions. Through eye movement examinations, here we test the hypothesis that this effect is due to impaired visual attention functions for retrieving diagnostic features in facial expression judgments. Twenty-three individuals with insomnia symptoms and 23 controls without insomnia completed a task to categorize happy, sad, fearful, and angry facial expressions. The participants with insomnia were less accurate in recognizing angry faces and misidentified them as fearful faces more often than the controls. A hidden Markov modeling approach for eye movement data analysis revealed that when viewing facial expressions, more individuals with insomnia adopted a nose-mouth eye movement pattern focusing on the vertical face midline while more controls adopted an eyes-mouth pattern preferentially attending to lateral features, particularly the two eyes. As previous studies found that the primary diagnostic feature for recognizing angry faces is the eyes while the diagnostic features for other facial expressions involve the mouth region, missing the eye region may contribute to specific difficulties in recognizing angry facial expressions, consistent with our behavioral finding in participants with insomnia symptoms. Taken together, the findings suggest that impaired information selection through visual attention control may be related to the compromised emotion perception in individuals with insomnia. |
Kaining Zhang; Charles D. Chen; Ilya E. Monosov Novelty, salience, and surprise timing are signaled by neurons in the basal forebrain Journal Article In: Current Biology, vol. 29, no. 1, pp. 134–142, 2019. @article{Zhang2019g, The basal forebrain (BF) is a principal source of modulation of the neocortex [1–6] and is thought to regulate cognitive functions such as attention, motivation, and learning by broadcasting information about salience [2, 3, 5, 7–19]. However, events can be salient for multiple reasons—such as novelty, surprise, or reward prediction errors [20–24]—and to date, precisely which salience-related information the BF broadcasts is unclear. Here, we report that the primate BF contains at least two types of neurons that often process salient events in distinct manners: one with phasic burst responses to cues predicting salient events and one with ramping activity anticipating such events. Bursting neurons respond to cues that convey predictions about the magnitude, probability, and timing of primary reinforcements. They also burst to the reinforcement itself, particularly when it is unexpected. However, they do not have a selective response to reinforcement omission (the unexpected absence of an event). Thus, bursting neurons do not convey value-prediction errors but do signal surprise associated with external events. Indeed, they are not limited to processing primary reinforcement: they discriminate fully expected novel visual objects from familiar objects and respond to object-sequence violations. In contrast, ramping neurons predict the timing of many salient, novel, and surprising events. Their ramping activity is highly sensitive to the subjects' confidence in event timing and on average encodes the subjects' surprise after unexpected events occur. 
These data suggest that the primate BF contains mechanisms to anticipate the timing of a diverse set of important external events (via ramping activity) and to rapidly deploy cognitive resources when these events occur (via short latency bursting). |
Xiaoxian Zhang; Wanlu Fu; Licheng Xue; Jing Zhao; Zhiguo Wang Children with mathematical learning difficulties are sluggish in disengaging attention Journal Article In: Frontiers in Psychology, vol. 10, pp. 932, 2019. @article{Zhang2019f, Mathematical learning difficulties (MLD) refer to a variety of deficits in math skills, typically pertaining to the domains of arithmetic and problem solving. The present study examined the time course of attentional orienting in MLD children with a spatial cueing task, by parametrically manipulating the cue-target onset asynchrony (CTOA). The results of Experiment 1 revealed that, in contrast to typical developing children, the inhibitory aftereffect of attentional orienting, frequently referred to as inhibition of return (IOR), was not observed in the MLD children, even at the longest CTOA tested (800 ms). However, robust early facilitation effects were observed in the MLD children, suggesting that they have difficulties in attentional disengagement rather than attentional engagement. In a second experiment, a secondary cue was introduced to the cueing task to encourage attentional disengagement, and IOR effects were observed in the MLD children. Taken together, the present experiments indicate that MLD children are sluggish in disengaging spatial attention. |
Xuemeng Zhang; Yijun Luo; Yong Liu; Chao Yang; Hong Chen Lack of conflict during food choice is associated with the failure of restrained eating Journal Article In: Eating Behaviors, vol. 34, pp. 1–8, 2019. @article{Zhang2019, Restrained eaters tend to sustain a restriction in caloric intake to lose or maintain body weight; however, only a few restrained eaters can achieve the goal of restricting their caloric intake to lose or maintain body weight. Those who are effective restrained eaters habitually adhere to their intentions to avoid eating certain palatable foods, whereas those who are ineffective restrained eaters are generally unable to translate their intentions into behavior. To restrain eating regardless of temptation, an individual must first identify potential conflicts between achieving restrained eating and temptation to eat. Regarding food selections, the association between a lack of conflict between temptation, eating enjoyment, and weight loss or maintenance goals and the failure of restriction of caloric intake remains unknown. The present study used an eye-tracking technique to assess the degree of conflict experienced by effective and ineffective restrained eaters during food choice. Participants were required to choose between pairs of high- and low-calorie foods. The results showed that choosing the low-calorie food was associated with the experience of more conflict, measured by longer response times and more gaze switches, than choosing the high-calorie food. Ineffective restrained eaters experienced less conflict, exhibiting shorter response times and fewer gaze switches, than did effective restrained eaters, which suggests that a failure to restrain eating might be associated with a lack of experience of conflict. |
Sijia Zhao; Gabriela Bury; Alice Milne; Maria Chait Pupillometry as an objective measure of sustained attention in young and older listeners Journal Article In: Trends in Hearing, vol. 23, pp. 1–21, 2019. @article{Zhao2019a, The ability to sustain attention on a task-relevant sound-source whilst avoiding distraction from other concurrent sounds is fundamental to listening in crowded environments. To isolate this aspect of hearing we designed a paradigm that continuously measured behavioural and pupillometry responses during 25-second-long trials in young (18-35 yo) and older (63-79 yo) participants. The auditory stimuli consisted of a number (1, 2 or 3) of concurrent, spectrally distinct tone streams. On each trial, participants detected brief silent gaps in one of the streams whilst resisting distraction from the others. Behavioural performance demonstrated increasing difficulty with time-on-task and with number/proximity of distractor streams. In young listeners (N=20), pupillometry revealed that pupil diameter (on the group and individual level) was dynamically modulated by instantaneous task difficulty such that periods where behavioural performance revealed a strain on sustained attention, were also accompanied by increased pupil diameter. Only trials on which participants performed successfully were included in the pupillometry analysis. Therefore, the observed effects reflect consequences of task demands as opposed to failure to attend. In line with existing reports, we observed global changes to pupil dynamics in the older group, including decreased pupil diameter, a limited dilation range, and reduced temporal variability. However, despite these changes, the older group showed similar effects of attentive tracking to those observed in the younger listeners. 
Overall, our results demonstrate that pupillometry can be a reliable and time-sensitive measure of the effort associated with attentive tracking over long durations in both young and (with some caveats) older listeners. |
Sijia Zhao; Nga Wai Yum; Lucas Benjamin; Elia Benhamou; Makoto Yoneya; Shigeto Furukawa; Frederic Dick; Malcolm Slaney; Maria Chait Rapid ocular responses are modulated by bottom-up-driven auditory salience Journal Article In: Journal of Neuroscience, vol. 39, no. 39, pp. 7703–7714, 2019. @article{Zhao2019c, Despite the prevalent use of alerting sounds in alarms and human–machine interface systems and the long-hypothesized role of the auditory system as the brain's “early warning system,” we have only a rudimentary understanding of what determines auditory salience — the automatic attraction of attention by sound — and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality. |
Matthew S. Peterson; Shane P. Kelly; Eric J. Blumberg Saccadic eye movements smear spatial working memory Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 2, pp. 255–263, 2019. @article{Peterson2019a, Why do saccades interfere with spatial working memory? One possibility is that attention and saccades are tightly coupled, and performing a saccade momentarily removes attention from spatial working memory, degrading the memory representation. This cannot be the entire explanation, because saccades cause greater interference than do covert attentional shifts (Lawrence, Myerson, & Abrams, 2004). In addition, this saccadic degradation is limited to spatial but not object, configural, or verbal representations. We propose that saccadic remapping is partially responsible for this increased interference. To test this, we used a spatial change detection task, and during the retention interval, participants either performed a central task, a peripheral task without an eye movement, or a peripheral task that required a saccade. Using the method of constant stimuli allowed us to fit psychophysical functions from which we derived measures of spatial memory precision, guessing, and response bias. Importantly, we found a directionally specific loss of memory precision, such that memory representations were less precise along the axis of the saccade. This was beyond the general loss of precision we found for covert shifts, suggesting that part of the effect is because of remapping. Saccades also increased guessing, but unlike the loss of precision, the effect was nondirectional. |
Nathan M. Petro; Nina N. Thigpen; Steven Garcia; Maeve R. Boylan; Andreas Keil Pre-target alpha power predicts the speed of cued target discrimination Journal Article In: NeuroImage, vol. 189, pp. 878–885, 2019. @article{Petro2019, The human visual system selects information from dense and complex streams of spatiotemporal input. This selection process is aided by prior knowledge of the features, location, and temporal proximity of an upcoming stimulus. In the laboratory, this knowledge is often conveyed by cues, preceding a task-relevant target stimulus. Response speed in cued selection tasks varies within and across participants and is often thought to index efficient selection of a cued feature, location, or moment in time. The present study used a reverse correlation approach to identify neural predictors of efficient target discrimination: Participants identified the orientation of a sinusoidal grating, which was presented in one hemifield following the presentation of bilateral visual cues that carried temporal but not spatial information about the target. Across different analytic approaches, faster target responses were predicted by larger alpha power preceding the target. These results suggest that heightened pre-target alpha power during a cue period may index a state that is beneficial for subsequent target processing. Our findings are broadly consistent with models that emphasize capacity sharing across time, as well as models that link alpha oscillations to temporal predictions regarding upcoming events. |
Zhongling Pi; Yi Zhang; Jiumin Yang; Weiping Hu; Harrison Hao Yang In: Journal of Nonverbal Behavior, pp. 1–11, 2019. @article{Pi2019, This study focused on how an instructor's pointing gestures and depictive gestures differentially affected learners' retention, transfer, and visual attention allocation. Eighty-five Chinese undergraduates were randomly assigned to view one of three video lectures in a laboratory. The videos varied in terms of the instructor's use of gesture: pointing gestures, depictive gestures, or no gestures. As hypothesized, the results showed better learning performance after the videos that included either pointing gestures or depictive gestures relative to the no gestures video; interestingly, the effect of gestures in video lectures was greater for participants with low and medium prior knowledge. In addition, the type of gesture differentially affected learners' visual attention allocation: pointing gestures directed attention to the relevant learning content of the PowerPoint slides, and depictive gestures drew learners' attention to the instructor. The findings have practical implications: instructors are encouraged to use pointing gestures and depictive gestures in video lectures. |
Zhongling Pi; Yi Zhang; Fangfang Zhu; Ke Xu; Jiumin Yang; Weiping Hu Instructors' pointing gestures improve learning regardless of their use of directed gaze in video lectures Journal Article In: Computers and Education, vol. 128, pp. 345–352, 2019. @article{Pi2019a, Empirical research to date has not distinguished the effects of the instructor's pointing gestures from directed gaze in video lectures. This study tested the hypothesis that the pointing gesture is superior to directed gaze in enhancing video lecture learning. Participants (n = 120) watched one of four types of video lecture in which the instructor either (a) looked straight into the camera with no gaze shift and without pointing gesture; (b) made occasional gaze shifts and without pointing gesture; (c) looked straight into the camera with no gaze shift and pointed to the relevant areas of the slide; or (d) made occasional gaze shifts accompanied by pointing gestures towards the slides. ANOVAs found that students in the conditions that included the instructor's pointing gesture showed better learning performance, more efficient visual search, and greater attention to the learning content that the instructor was referring to, regardless of her use of directed gaze. The implication for education is that instructors should use pointing gestures, with or without directed gaze, to guide students' attention and improve their learning in video lectures. |
Aleks Pieczykolan; Lynn Huestegge Action scheduling in multitasking: A multi-phase framework of response-order control Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 5, pp. 1464–1487, 2019. @article{Pieczykolan2019, Temporal organization of human behavior is particularly important when several action requirements must be processed around the same time. A crucial challenge in such multitasking situations is to control the temporal response order. However, multitasking studies usually focus on temporal processing dynamics after a specific response order – which is usually triggered by stimulus sequence and instructions – has been determined, whereas a comprehensive study of response-order scheduling mechanisms is still lacking. Across three psychological refractory period (PRP) experiments, we examined the impact of stimulus order, response characteristics, and several other factors on response order. Crucially, we utilized a combination of effector systems (oculomotor and manual) that are known to ensure reasonable response order variability in the first place. The results suggest that – contrary to previous assumptions – bottom-up factors (e.g., stimulus order) are not the primary determinant of temporal action scheduling. Instead, we found a major influence of effector-based characteristics (i.e., oculomotor task prioritization) that could be attenuated by both instructions and changes in the task environment (providing temporally predictable input). Effects of between-task compatibility suggest that a dedicated stimulus-code comparison process precedes and affects response-order scheduling. Based on the present results and previous findings, we propose a multi-phase framework of temporal response-order control that emphasizes the extent to which cognitive control of action scheduling is dynamically adaptive to particular task characteristics. |
Joanna Pilarczyk; Emilia Schwertner; Kinga Wołoszyn; Michał Kuniecki Phase of the menstrual cycle affects engagement of attention with emotional images Journal Article In: Psychoneuroendocrinology, vol. 104, pp. 25–32, 2019. @article{Pilarczyk2019, Changes that occur during the menstrual cycle affect various aspects of behavior, cognition, and emotion. Here, we focused on potential differences between early follicular and midluteal phases in the way women process images of behaviorally relevant content categories: children, threat, disgust, erotic scenes, low- and high-calorie food. Using eye-tracking, we examined women's engagement of attention in the key region of each image in a free-viewing condition. Specifically, we tested how quickly attention was attracted to these regions and for how long it was held there. Participants took part in two experimental sessions, one in the early follicular and one in the midluteal phase. The results showed that in the midluteal phase attention was attracted to the key region earlier than in the early follicular phase: the first fixation more often fell within the key region and there were fewer fixations preceding it. While the effect of the phase in terms of the capture of attention did not depend on the image category, the effect regarding the hold of attention was category-specific, concerning the disgust category only. Specifically, in the midluteal phase the duration of the exploration of the key region between reaching it for the first time and first exiting it was shorter, which might be due to heightened sensitivity to disgusting stimuli in this period.
Overall, our results indicate the occurrence of changes in attentional processing of emotional scenes related to the menstrual cycle, which seem to differ depending on the aspect of attention deployment: in the midluteal phase the effect of enhancing orienting was general and concerned any important visual information, whereas the effect of the shortened hold of attention appeared to be limited to specific content. |
Ella Podvalny; Matthew W. Flounders; Leana E. King; Tom Holroyd; Biyu J. He A dual role of prestimulus spontaneous neural activity in visual object recognition Journal Article In: Nature Communications, vol. 10, pp. 3910, 2019. @article{Podvalny2019, Vision relies on both specific knowledge of visual attributes, such as object categories, and general brain states, such as those reflecting arousal. We hypothesized that these phenomena independently influence recognition of forthcoming stimuli through distinct processes reflected in spontaneous neural activity. Here, we recorded magnetoencephalographic (MEG) activity in participants (N = 24) who viewed images of objects presented at recognition threshold. Using multivariate analysis applied to sensor-level activity patterns recorded before stimulus presentation, we identified two neural processes influencing subsequent subjective recognition: a general process, which disregards stimulus category and correlates with pupil size, and a specific process, which facilitates category-specific recognition. The two processes are doubly-dissociable: the general process correlates with changes in criterion but not in sensitivity, whereas the specific process correlates with changes in sensitivity but not in criterion. Our findings reveal distinct mechanisms of how spontaneous neural activity influences perception and provide a framework to integrate previous findings. |
Mikhail Y. Pokhoday; Yury Y. Shtyrov; Andriy Myachykov Effects of visual priming and event orientation on word order choice in Russian sentence production Journal Article In: Frontiers in Psychology, vol. 10, pp. 1661, 2019. @article{Pokhoday2019, Existing research shows that the distribution of the speaker's attention among an event's protagonists affects syntactic choice during sentence production. One of the debated issues concerns the extent of the attentional contribution to syntactic choice in languages that put stronger emphasis on word order arrangement rather than the choice of the overall syntactic frame. To address this, the current study used a sentence production task, in which Russian native speakers were asked to verbally describe visually perceived transitive events. Prior to describing the target event, a visual cue directed the participants' attention to the location of either the agent or the patient of the subsequently presented visual event. In addition, we also manipulated event orientation (agent-left vs agent-right) as another potential contributor to syntactic choice. The number of patient-initial sentences was the dependent variable compared between conditions. First, the obtained results replicated the effect of visual cueing on word order in Russian: more patient-initial sentences in the patient-cued condition. Second, we registered a novel effect of event orientation: Russian native speakers produced more patient-initial sentences after seeing events developing from right to left as opposed to left-to-right events. Our study provides new evidence about the role of the speaker's attention and event orientation in syntactic choice in a language with flexible word order. |
Martyn Poliakoff; Alexis D. J. Makin; Samantha L. Y. Tang; Ellen Poliakoff Turning the periodic table upside down Journal Article In: Nature Chemistry, vol. 11, pp. 391–393, 2019. @article{Poliakoff2019, The periodic table is immensely powerful for rationalizing many different properties of the chemical elements, but would turning it on its head make some important aspects easier to understand and give everyone a new perspective on chemistry? |
Ulrich Pomper; Thomas Ditye; Ulrich Ansorge Contralateral delay activity during temporal order memory Journal Article In: Neuropsychologia, vol. 129, pp. 104–116, 2019. @article{Pomper2019, In everyday life, we constantly need to remember the temporal sequence of visual events over short periods of time, for example, when making sense of others' actions or watching a movie. While there is increasing knowledge available on neural mechanisms underlying visual working memory (VWM) regarding the identity and spatial location of objects, less is known about how the brain encodes and retains information on temporal sequences. Here, we investigate whether the contralateral-delay activity (CDA), a well-studied electroencephalographic (EEG) component associated with VWM of object identity, also reflects the encoding and retention of temporal order. In two independent experiments, we presented participants with a sequence of four or six images, followed by a 1 s retention period. Participants judged temporal order by indicating whether a subsequently presented probe image was originally displayed during the first or the second half of the sequence. As a main novel result, we report the emergence of a contralateral negativity already following the presentation of the first item of the sequence, which increases over the course of a trial with every presented item, up to a limit of four items. We further observed no differences in the CDA during the temporal-order task compared to one obtained during a task concerning the spatial location of the presented items. Since the characteristics of the CDA appear to be highly similar between different encoded feature dimensions and increases as additional items are being encoded, we suggest this component might be a general characteristic of various types of VWM. |
Eva R. Pool; Wolfgang M. Pauli; Carolina S. Kress; John P. O'Doherty Behavioural evidence for parallel outcome-sensitive and outcome-insensitive Pavlovian learning systems in humans Journal Article In: Nature Human Behaviour, vol. 3, pp. 284–296, 2019. @article{Pool2019, There is a dichotomy in instrumental conditioning between goal-directed actions and habits that are distinguishable on the basis of their relative sensitivity to changes in outcome value. It is less clear whether a similar distinction applies in Pavlovian conditioning, where responses have been found to be predominantly outcome-sensitive. To test for both devaluation-insensitive and devaluation-sensitive Pavlovian conditioning in humans, we conducted four experiments combining Pavlovian conditioning and outcome-devaluation procedures while measuring multiple conditioned responses. Our results suggest that Pavlovian conditioning involves two distinct types of learning: one that learns the current value of the outcome, which is sensitive to devaluation, and one that learns about the spatial localization of the outcome, which is insensitive to devaluation. Our findings have implications for the mechanistic understanding of Pavlovian conditioning and provide a more nuanced understanding of Pavlovian mechanisms that might contribute to a number of psychiatric disorders. |
Tzvetan Popov; Bart Gips; Sabine Kastner; Ole Jensen Spatial specificity of alpha oscillations in the human visual system Journal Article In: Human Brain Mapping, vol. 40, no. 15, pp. 4432–4440, 2019. @article{Popov2019, Alpha oscillations are strongly modulated by spatial attention. To what extent the generators of cortical alpha oscillations are spatially distributed and have selectivity that can be related to retinotopic organization is a matter of ongoing scientific debate. In the present report, neuromagnetic activity was quantified by means of spatial location tuning functions from 30 participants engaged in a visuospatial attention task. A cue presented briefly in one of 16 locations directing covert spatial attention resulted in a robust modulation of posterior alpha oscillations. The distribution of the alpha sources approximated the retinotopic organization of the human visual system known from hemodynamic studies. Better performance in terms of target identification was associated with a more spatially constrained alpha modulation. The present findings demonstrate that the generators of posterior alpha oscillations are retinotopically organized when modulated by spatial attention. |
Dina V. Popovkina; Wyeth Bair; Anitha Pasupathy Modeling diverse responses to filled and outline shapes in macaque V4 Journal Article In: Journal of Neurophysiology, vol. 121, no. 3, pp. 1059–1077, 2019. @article{Popovkina2019, Visual area V4 is an important midlevel cortical processing stage that subserves object recognition in primates. Studies investigating shape coding in V4 have largely probed neuronal responses with filled shapes, i.e., shapes defined by both a boundary and an interior fill. As a result, we do not know whether form-selective V4 responses are dictated by boundary features alone or if interior fill is also important. We studied 43 V4 neurons in two male macaque monkeys (Macaca mulatta) with a set of 362 filled shapes and their corresponding outlines to determine how interior fill modulates neuronal responses in shape-selective neurons. Only a minority of neurons exhibited similar response strength and shape preferences for filled and outline stimuli. A majority responded preferentially to one stimulus category (either filled or outline shapes) and poorly to the other. Our findings are inconsistent with predictions of the hierarchical-max (HMax) V4 model that builds form selectivity from oriented boundary features and takes little account of attributes related to object surface, such as the phase of the boundary edge. We modified the V4 HMax model to include sensitivity to interior fill by either removing phase-pooling or introducing unoriented units at the V1 level; both modifications better explained our data without increasing the number of free parameters. Overall, our results suggest that boundary orientation and interior surface information are both maintained until at least the midlevel visual representation, consistent with the idea that object fill is important for recognition and perception in natural vision. |
Matthew F. Panichello; Brian DePasquale; Jonathan W. Pillow; Timothy J. Buschman Error-correcting dynamics in visual working memory Journal Article In: Nature Communications, vol. 10, pp. 3366, 2019. @article{Panichello2019, Working memory is critical to cognition, decoupling behavior from the immediate world. Yet, it is imperfect; internal noise introduces errors into memory representations. Such errors have been shown to accumulate over time and increase with the number of items simultaneously held in working memory. Here, we show that discrete attractor dynamics mitigate the impact of noise on working memory. These dynamics pull memories towards a few stable representations in mnemonic space, inducing a bias in memory representations but reducing the effect of random diffusion. Model-based and model-free analyses of human and monkey behavior show that discrete attractor dynamics account for the distribution, bias, and precision of working memory reports. Furthermore, attractor dynamics are adaptive. They increase in strength as noise increases with memory load and experiments in humans show these dynamics adapt to the statistics of the environment, such that memories drift towards contextually-predicted values. Together, our results suggest attractor dynamics mitigate errors in working memory by counteracting noise and integrating contextual information into memories. |
Adela S. Y. Park; Phillip A. Bedggood; Andrew B. Metha; Andrew J. Anderson The influence of perceptual stabilisation on perceptual grouping of temporally asynchronous stimuli Journal Article In: Vision Research, vol. 160, pp. 1–9, 2019. @article{Park2019, Even during fixation, our eyes constantly make small, involuntary eye movements that cause the retinal image to be swept across our retinae. Despite this, our world appears completely stable, due to powerful perceptual stabilisation mechanisms. Whether these mechanisms are of functional consequence for visual performance remains largely unexplored, however. We directly tested this by using a perceptual grouping task, where physically aligned alternate grid elements were presented with an imperceptible temporal offset. Observers' abilities to reliably group the grid into rows (or columns) is posited to arise from the failure in compensation of retinal slip arising from the small eye movements that occur during the temporal offset, effectively introducing a spatial shift in the arrangement of grid elements. We incorporated this perceptual grouping task within the on-line jitter illusion, which temporarily disables perceptual stabilisation mechanisms through a 10 Hz flickering annulus of random noise (Vision Res 43 (2003) 957–969). Observers' abilities to correctly group the grid stimulus were measured with and without perceptual stabilisation mechanisms engaged (i.e. non-flickering vs. flickering annulus). Grouping performance was better when eye movements were perceived, suggesting that the influence of retinal slip is increased when perceptual stabilisation mechanisms are disabled. We therefore find that perceptual stabilisation can measurably influence visual function, in addition to its perceptual effects. |
Minsun Park; Randolph Blake; Yeseul Kim; Chai-Youn Kim Congruent audio-visual stimulation during adaptation modulates the subsequently experienced visual motion aftereffect Journal Article In: Scientific Reports, vol. 9, pp. 19391, 2019. @article{Park2019a, Sensory information registered in one modality can influence perception associated with sensory information registered in another modality. The current work focuses on one particularly salient form of such multisensory interaction: audio-visual motion perception. Previous studies have shown that watching visual motion and listening to auditory motion influence each other, but results from those studies are mixed with regard to the nature of the interactions promoting that influence and where within the sequence of information processing those interactions transpire. To address these issues, we investigated whether (i) concurrent audio-visual motion stimulation during an adaptation phase impacts the strength of the visual motion aftereffect (MAE) during a subsequent test phase, and (ii) whether the magnitude of that impact was dependent on the congruence between auditory and visual motion experienced during adaptation. Results show that congruent direction of audio-visual motion during adaptation induced a stronger initial impression and a slower decay of the MAE than did the incongruent direction, which is not attributable to differential patterns of eye movements during adaptation. The audio-visual congruency effects measured here imply that visual motion perception emerges from integration of audio-visual motion information at a sensory neural stage of processing. |
Karisa B. Parkington; Roxane J. Itier From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces Journal Article In: Brain Research, vol. 1722, pp. 1–14, 2019. @article{Parkington2019, The LIFTED model of early face perception postulates that the face-sensitive N170 event-related potential may reflect underlying neural inhibition mechanisms which serve to regulate holistic and featural processing. It remains unclear, however, what specific factors impact these neural inhibition processes. Here, N170 peak responses were recorded whilst adults maintained fixation on a single eye using a gaze-contingent paradigm, and the presence/absence of a face outline, as well as the number and type of parafoveal features within the outline, were manipulated. N170 amplitudes and latencies were reduced when a single eye was fixated within a face outline compared to fixation on the same eye in isolation, demonstrating that the simple presence of a face outline is sufficient to elicit a shift towards a more face-like neural response. A monotonic decrease in the N170 amplitude and latency was observed with increasing numbers of parafoveal features, and the type of feature(s) present in parafovea further modulated this early face response. These results support the idea of neural inhibition exerted by parafoveal features onto the foveated feature as a function of the number, and possibly the nature, of parafoveal features. Specifically, the results suggest the use of a feature saliency framework (eyes > mouth > nose) at the neural level, such that the parafoveal eye may play a role in down-regulating the response to the other eye (in fovea) more so than the nose or the mouth. These results confirm the importance of parafoveal features and the face outline in the neural inhibition mechanism, and provide further support for a feature saliency mechanism guiding early face perception. |
Thomas Parr; M. Berk Mirza; Hayriye Cagnan; Karl J. Friston Dynamic causal modelling of active vision Journal Article In: Journal of Neuroscience, vol. 39, no. 32, pp. 6265–6275, 2019. @article{Parr2019, In this paper, we draw from recent theoretical work on active perception, which suggests that the brain makes use of an internal (i.e., generative) model to make inferences about the causes of sensations. This view treats visual sensations as consequent on action (i.e., saccades) and implies that visual percepts must be actively constructed via a sequence of eye movements. Oculomotor control calls on a distributed set of brain sources that includes the dorsal and ventral frontoparietal (attention) networks. We argue that connections from the frontal eye fields to ventral parietal sources represent the mapping from “where” (fixation location) to information derived from “what” representations in the ventral visual stream. During scene construction, this mapping must be learned, putatively through changes in the effective connectivity of these synapses. Here, we test the hypothesis that the coupling between the dorsal frontal cortex and the right temporoparietal cortex is modulated during saccadic interrogation of a simple visual scene. Using dynamic causal modeling for magnetoencephalography with (male and female) human participants, we assess the evidence for changes in effective connectivity by comparing models that allow for this modulation with models that do not. We find strong evidence for modulation of connections between the two attention networks; namely, a disinhibition of the ventral network by its dorsal counterpart. |
Aishwarya Parthasarathy; Cheng Tang; Roger Herikstad; Loong Fah Cheong; Shih-Cheng Yen; Camilo Libedinsky Time-invariant working memory representations in the presence of code-morphing in the lateral prefrontal cortex Journal Article In: Nature Communications, vol. 10, pp. 4995, 2019. @article{Parthasarathy2019, Maintenance of working memory is thought to involve the activity of prefrontal neuronal populations with strong recurrent connections. However, it was recently shown that distractors evoke a morphing of the prefrontal population code, even when memories are maintained throughout the delay. How can a morphing code maintain time-invariant memory information? We hypothesized that dynamic prefrontal activity contains time-invariant memory information within a subspace of neural activity. Using an optimization algorithm, we found a low-dimensional subspace that contains time-invariant memory information. This information was reduced in trials where the animals made errors in the task, and was also found in periods of the trial not used to find the subspace. A bump attractor model replicated these properties, and provided predictions that were confirmed in the neural data. Our results suggest that the high-dimensional responses of prefrontal cortex contain subspaces where different types of information can be simultaneously encoded with minimal interference. |
J. A. Patrick; Neil W. Roach; Paul V. McGraw Temporal modulation improves dynamic peripheral acuity Journal Article In: Journal of Vision, vol. 19, no. 13, pp. 1–19, 2019. @article{Patrick2019, Macular degeneration and related visual disorders greatly limit foveal function, resulting in reliance on the peripheral retina for tasks requiring fine spatial vision. Here we investigate stimulus manipulations intended to maximize peripheral acuity for dynamic targets. Acuity was measured using a single interval orientation discrimination task at 10° eccentricity. Two types of image motion were investigated along with two different forms of temporal manipulation. Smooth object motion was generated by translating targets along an isoeccentric path at a constant speed (0–20°/s). Ocular motion was simulated by jittering target location using previously recorded fixational eye movement data, amplified by a variable gain factor (0–8). In one stimulus manipulation, the sequence was temporally subsampled by displaying the target on an evenly spaced subset of video frames. In the other, the contrast polarity of the stimulus was reversed at a variable rate. We found that thresholds under object motion were improved at all speeds by reversing contrast polarity, while temporal subsampling improved resolution at high speeds but impaired performance at low speeds. With simulated ocular motion, thresholds were consistently improved by contrast polarity reversal, but impaired by temporal subsampling. We find that contrast polarity reversal and temporal subsampling produce differential effects on peripheral acuity. Applying contrast polarity reversal may offer a relatively simple image manipulation that could enhance visual performance in individuals with central vision loss. |
Jerneja Pavlin; Saša A. Glažar; Miha Slapničar; Iztok Devetak In: Chemistry Education Research and Practice, vol. 20, no. 3, pp. 633–649, 2019. @article{Pavlin2019, The purpose of this paper is to explore and explain students' achievements in solving context-based gas exercises comprising the macroscopic and submicroscopic levels of chemical concepts. The influence of specific variables, such as interest in learning, formal-reasoning abilities, and visualisation abilities, is a significant factor that should be considered when explaining students' achievements with context-based exercises. Seventy-nine students of three age groups (12, 16, and 23) participated in the study. Questionnaires, tests, and a semi-structured interview including computer-displayed context-based exercises were used to collect data. In addition, an eye-tracker was used to determine the exact location of the participants' points of gaze. The results show that students on average answered correctly from 40 to 79% of all questions in the context-based exercises. The context-based exercise related to air compression is indicated as being difficult for students. In students' explanations of different levels of chemical concepts, representation difficulties are detected in all three age groups of students. Students' achievements in solving context-based gas exercises do not depend on interest in learning chemistry and visualisation abilities. However, statistically significant differences exist in total fixation duration on the correct submicrorepresentation animation between students with different formal-reasoning abilities. The results serve as a starting point for the planning of different context-based exercises and problems comprising the chemistry triplet with 3D dynamic submicrorepresentations. |
Candace E. Peacock; Taylor R. Hayes; John M. Henderson The role of meaning in attentional guidance during free viewing of real-world scenes Journal Article In: Acta Psychologica, vol. 198, pp. 1–8, 2019. @article{Peacock2019a, In real-world vision, humans prioritize the most relevant visual information at the expense of other information via attentional selection. The current study sought to understand the role of semantic features and image features on attentional selection during free viewing of real-world scenes. We compared the ability of meaning maps generated from ratings of isolated, context-free image patches and saliency maps generated from the Graph-Based Visual Saliency model to predict the spatial distribution of attention in scenes as measured by eye movements. Additionally, we introduce new contextualized meaning maps in which scene patches were rated based upon how informative or recognizable they were in the context of the scene from which they derived. We found that both context-free and contextualized meaning explained significantly more of the overall variance in the spatial distribution of attention than image salience. Furthermore, meaning explained early attention to a significantly greater extent than image salience, contrary to predictions of the 'saliency first' hypothesis. Finally, both context-free and contextualized meaning predicted attention equivalently. These results support theories in which meaning plays a dominant role in attentional guidance during free viewing of real-world scenes. |
Candace E. Peacock; Taylor R. Hayes; John M. Henderson Meaning guides attention during scene viewing, even when it is irrelevant Journal Article In: Attention, Perception, and Psychophysics, vol. 81, pp. 20–34, 2019. @article{Peacock2019, During real-world scene viewing, humans must prioritize scene regions for attention. What are the roles of low-level image salience and high-level semantic meaning in attentional prioritization? A previous study suggested that when salience and meaning are directly contrasted in scene memorization and preference tasks, attentional priority is assigned by meaning (Henderson & Hayes in Nature Human Behavior, 1, 743-747, 2017). Here we examined the role of meaning in attentional guidance using two tasks in which meaning was irrelevant and salience was relevant: a brightness rating task and a brightness search task. Meaning was represented by meaning maps that captured the spatial distribution of semantic features. Meaning was contrasted with image salience, represented by saliency maps. Critically, both maps were represented similarly, allowing us to directly compare how meaning and salience influenced the spatial distribution of attention, as measured by fixation density maps. Our findings suggest that even in tasks for which meaning is irrelevant and salience is relevant, meaningful scene regions are prioritized for attention over salient scene regions. These results support theories in which scene semantics play a dominant role in attentional guidance in scenes. |
Charlotte R. Pennington; Damien Litchfield; Neil McLatchie; Derek Heim Stereotype threat may not impact women's inhibitory control or mathematical performance: Providing support for the null hypothesis Journal Article In: European Journal of Social Psychology, vol. 49, pp. 717–734, 2019. @article{Pennington2019a, Underpinned by the findings of Jamieson and Harkins (2007; Experiment 3), the current study pits the mere effort motivational account of stereotype threat against a working memory interference account. In Experiment 1, females were primed with a negative self- or group stereotype pertaining to their visuospatial ability and completed an anti-saccade eye-tracking task. In Experiment 2 they were primed with a negative or positive group stereotype and completed an anti-saccade and mental arithmetic task. Findings indicate that stereotype threat did not significantly impair women's inhibitory control (Experiments 1 and 2) or mathematical performance (Experiment 2), with Bayesian analyses providing support for the null hypothesis. These findings are discussed in relation to potential moderating factors of stereotype threat, such as task difficulty and stereotype endorsement, as well as the possibility that effect sizes reported in the stereotype threat literature are inflated due to publication bias. |
Effie J. Pereira; Monica S. Castelhano Attentional capture is contingent on scene region: Using surface guidance framework to explore attentional mechanisms during search Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 4, pp. 1273–1281, 2019. @article{Pereira2019b, Studies have established that scene context guides attention during visual search, but it is not yet clear how. In this study, we examined how attention is deployed across scenes using an attentional capture paradigm. Using the Surface Guidance Framework (Castelhano & Pereira, 2019), we defined target-relevant and target-irrelevant scene regions for each target object and compared how attentional capture of a suddenly onsetting distractor differs for object and letter searches. We found an enhancement of capture effects when distractors appeared within target-relevant regions, with greater proportions of distractors fixated and greater proportions of saccades made toward the distractor for object searches, but not for letter searches. Thus, attention in the real world can be flexibly and spatially distributed on the basis of contextual information, with the Surface Guidance Framework presenting a powerful tool for exploring attentional guidance in real-world scenes. |
Hassan Mansour; Gustav Kuhn In: Quarterly Journal of Experimental Psychology, vol. 72, no. 8, pp. 1913–1925, 2019. @article{Mansour2019, Experimental psychologists frequently present participants with social stimuli (videos or pictures) and measure behavioural responses. Such designs are problematic in that they remove the potential for social interaction and inadvertently restrict our eyes' multifaceted nature as a tool to both perceive and communicate with others. The aim of this study was to develop a new paradigm within which we can easily and reliably measure the influence of top-down processes (belief), social activity (talking and listening), and possible clinical traits (gaze anxiety and social interaction difficulties) on gaze behaviours. Participants were engaged in a "real" or pre-recorded Skype conversation. Findings suggest that participants who believed they were engaging in a real conversation spent less time looking at the speaker's eyes, but no differences were found for dwell time on the whole face. Within our non-clinical sample, higher levels of gaze anxiety resulted in reduced dwell time on the whole face but not the eyes, whereas social interaction difficulties produced reduced dwell time on the eyes only. Finally, talking consistently produced reduced dwell time on the whole face and eyes regardless of any other conditions. |
Juan J. Mariman; Pablo Burgos; Pedro E. Maldonado Parallel learning processes of a visuomotor adaptation task in a changing environment Journal Article In: European Journal of Neuroscience, vol. 49, no. 1, pp. 106–119, 2019. @article{Mariman2019, During the control of reaching movements, a key contribution of the visual system is the localization of relevant environmental targets. In motor adaptation processes, the visual evaluation of effector motor behavior enables learning from errors, which demands continuous visual attentional focus. However, most current adaptation paradigms include static targets; therefore, when a learning situation develops in a highly variable environment and there is a double demand for visual resources (environment and motor performance), the evolution of learning processes is unknown. In order to understand how learning processes evolve in a variable environment, a video game task was designed in which subjects were asked to manage a 60° counterclockwise-rotated cursor to capture descending targets with initially unpredictable trajectories. During the task, the cursor and eye movements were recorded to dissect visuomotor coordination. We observed that the pursuit of the targets conditioned a predominant and continuous visual inspection of the environment instead of the rotated cursor. As learning progressed, subjects exhibited a linear reduction in directional error and selected a motor strategy based on the degree of reward, which improved the performance. These results suggest that when the environment demands high visual attention, error-based and reinforced motor learning processes are implemented simultaneously, thus enabling efficient predictive behavior. |
Charles R. Marshall; Christopher J. D. Hardy; Lucy L. Russell; Rebecca L. Bond; Harri Sivasathiaseelan; Caroline Greaves; Katrina M. Moore; Jennifer L. Agustus; Janneke E. P. Leeuwen; Stephen J. Wastling; Jonathan D. Rohrer; James M. Kilner; Jason D. Warren The functional neuroanatomy of emotion processing in frontotemporal dementias Journal Article In: Brain, vol. 142, no. 9, pp. 2873–2887, 2019. @article{Marshall2019, Impaired processing of emotional signals is a core feature of frontotemporal dementia syndromes, but the underlying neural mechanisms have proved challenging to characterize and measure. Progress in this field may depend on detecting functional changes in the working brain, and disentangling components of emotion processing that include sensory decoding, emotion categorization and emotional contagion. We addressed this using functional MRI of naturalistic, dynamic facial emotion processing with concurrent indices of autonomic arousal, in a cohort of patients representing all major frontotemporal dementia syndromes relative to healthy age-matched individuals. Seventeen patients with behavioural variant frontotemporal dementia [four female; mean (standard deviation) age 64.8 (6.8) years], 12 with semantic variant primary progressive aphasia [four female; 66.9 (7.0) years], nine with non-fluent variant primary progressive aphasia [five female; 67.4 (8.1) years] and 22 healthy controls [12 female; 68.6 (6.8) years] passively viewed videos of universal facial expressions during functional MRI acquisition, with simultaneous heart rate and pupillometric recordings; emotion identification accuracy was assessed in a post-scan behavioural task. Relative to healthy controls, patient groups showed significant impairments (analysis of variance models, all P < 0.05) of facial emotion identification (all syndromes) and cardiac (all syndromes) and pupillary (non-fluent variant only) reactivity. 
Group-level functional neuroanatomical changes were assessed using statistical parametric mapping, thresholded at P < 0.05 after correction for multiple comparisons over the whole brain or within pre-specified regions of interest. In response to viewing facial expressions, all participant groups showed comparable activation of primary visual cortex while patient groups showed differential hypo-activation of fusiform and posterior temporo-occipital junctional cortices. Bi-hemispheric, syndrome-specific activations predicting facial emotion identification performance were identified (behavioural variant, anterior insula and caudate; semantic variant, anterior temporal cortex; non-fluent variant, frontal operculum). The semantic and non-fluent variant groups additionally showed complex profiles of central parasympathetic and sympathetic autonomic involvement that overlapped signatures of emotional visual and categorization processing and extended (in the non-fluent group) to brainstem effector pathways. These findings open a window on the functional cerebral mechanisms underpinning complex socio-emotional phenotypes of frontotemporal dementia, with implications for novel physiological biomarker development. |
James Mathew; J. Randall Flanagan; Frederic R. Danion Gaze behavior during visuomotor tracking with complex hand-cursor dynamics Journal Article In: Journal of Vision, vol. 19, no. 14, pp. 1–13, 2019. @article{Mathew2019, The ability to track a moving target with the hand has been extensively studied, but few studies have characterized gaze behavior during this task. Here we investigate gaze behavior when participants learn a new mapping between hand and cursor motion, such that the cursor represented the position of a virtual mass attached to the grasped handle via a virtual spring. Depending on the experimental condition, haptic feedback consistent with mass-spring dynamics could also be provided. For comparison, a simple one-to-one hand-cursor mapping was also tested. We hypothesized that gaze would be drawn, at times, to the cursor in the mass-spring conditions, especially in the absence of haptic feedback. As expected, hand tracking performance was less accurate under the spring mapping, but gaze behavior was virtually unaffected by the spring mapping, regardless of whether haptic feedback was provided. Specifically, relative gaze position between target and cursor, rate of saccades, and gain of smooth pursuit were similar under both mappings and both haptic feedback conditions. We conclude that even when participants are exposed to a challenging hand-cursor mapping, gaze is primarily concerned with ongoing target motion, suggesting that peripheral vision is sufficient to monitor cursor position and to update hand movement control. |
Seema Gorur Prasad; Shiji Viswambharan; Ramesh Kumar Mishra Visual working memory load constrains language non-selective activation under task-demands Journal Article In: Linguistic Approaches to Bilingualism, vol. 6, pp. 805–846, 2019. @article{Prasad2019, Visual world studies with bilinguals have demonstrated spontaneous cross-linguistic activations. In two experiments, we examined whether concurrent visual working memory (VWM) load constrains bilingual parallel activation during spoken word comprehension. Hindi-English bilinguals heard a spoken word in Hindi (L1) or English (L2) and saw a display containing the spoken word-referent, a phonological cohort of the spoken word's translation and two unrelated objects. Participants completed a concurrent WM task of remembering an array of five coloured squares and judging its similarity with a test array. Participants were asked to click on the spoken word-referent in Experiment 1 but not in Experiment 2. Reduced parallel activation and enhanced target activation was observed under the load for L2 spoken words in Experiment 1 (where the task-demands were high). The findings suggest that a VWM load can constrain the spontaneous activation of an irrelevant lexicon, under certain conditions. |
Alexandra Pressigout; Agnès Charvillat; Karima Mersad; Karine Doré-Mazars Time dependency of the SNARC effect for different number formats: evidence from saccadic responses Journal Article In: Psychological Research, vol. 83, no. 7, pp. 1485–1495, 2019. @article{Pressigout2019, In line with the suggestion that the strength of the spatial numerical association of response codes (SNARC) effect was time dependent, the aim of the present study was to assess whether the association strength depends on the processing time of numerical quantity and/or of the time to initiate responses. More specifically, we examined whether and how the SNARC effect could be modulated by number format and effector type. Experiment 1 compared the effect induced by Arabic numbers and number words on the basis of saccadic responses in a parity judgment task. Indeed, previous studies have shown that Arabic numbers lead to faster processing than number words. The results replicated the SNARC effect with Arabic numbers, but not with number words. Experiment 2 was similar to Experiment 1, but this time manual responses (i.e., responses far slower than saccadic ones) were recorded. A strong SNARC effect was observed for both number formats. Further analyses revealed a correlation between mean individual response times and the strength of the SNARC effect. We proposed that the initiation times for saccadic responses may be too short for the SNARC effect to appear, in particular with the written number format for which activation of magnitude takes time. We conclude in terms of time variations resulting from processing specificities related to number format, effector type and also individual reaction and processing speed. |
Silvan C. Quax; Nadine Dijkstra; Mariel J. Staveren; Sander E. Bosch; Marcel A. J. Gerven Eye movements explain decodability during perception and cued attention in MEG Journal Article In: NeuroImage, vol. 195, pp. 444–453, 2019. @article{Quax2019, Eye movements are an integral part of human perception, but can induce artifacts in many magneto-encephalography (MEG) and electroencephalography (EEG) studies. For this reason, investigators try to minimize eye movements and remove these artifacts from their data using different techniques. When these artifacts are not purely random, but consistent regarding certain stimuli or conditions, the possibility arises that eye movements are actually inducing effects in the MEG signal. It remains unclear how much of an influence eye movements can have on observed effects in MEG, since most MEG studies lack a control analysis to verify whether an effect found in the MEG signal is induced by eye movements. Here, we find that we can decode stimulus location from eye movements in two different stages of a working memory match-to-sample task that encompass different areas of research typically done with MEG. This means that the observed MEG effect might be (partly) due to eye movements instead of any true neural correlate. We suggest how to check for eye movement effects in the data and make suggestions on how to minimize eye movement artifacts from occurring in the first place. |
Romain Quentin; Jean Rémi King; Etienne Sallard; Nathan Fishman; Ryan Thompson; Ethan R. Buch; Leonardo G. Cohen Differential brain mechanisms of selection and maintenance of information during working memory Journal Article In: Journal of Neuroscience, vol. 39, no. 19, pp. 3728–3740, 2019. @article{Quentin2019, Working memory is our ability to select and temporarily hold information as needed for complex cognitive operations. The temporal dynamics of sustained and transient neural activity supporting the selection and holding of memory content is not known. To address this problem, we recorded magnetoencephalography in healthy participants performing a retro-cue working memory task in which the selection rule and the memory content varied independently. Multivariate decoding and source analyses showed that selecting the memory content relies on prefrontal and parieto-occipital persistent oscillatory neural activity. By contrast, the memory content was reactivated in a distributed occipitotemporal posterior network, preceding the working memory decision and in a different format than during the visual encoding. These results identify a neural signature of content selection and characterize differentiated spatiotemporal constraints for subprocesses of working memory. |
Mar Quiroga; Adam P. Morris; Bart Krekelberg Short-term attractive tilt aftereffects predicted by a recurrent network model of primary visual cortex Journal Article In: Frontiers in Systems Neuroscience, vol. 13, pp. 67, 2019. @article{Quiroga2019, Adaptation is a multi-faceted phenomenon that is of interest in terms of both its function and its potential to reveal underlying neural processing. Many behavioral studies have shown that after exposure to an oriented adapter the perceived orientation of a subsequent test is repulsed away from the orientation of the adapter. This is the well-known Tilt Aftereffect (TAE). Recently, we showed that the dynamics of recurrently connected networks may contribute substantially to the neural changes induced by adaptation, especially on short time scales. Here we extended the network model and made the novel behavioral prediction that the TAE should be attractive, not repulsive, on a time scale of a few hundred milliseconds. Our experiments, using a novel adaptation protocol that specifically targeted adaptation on a short time scale, confirmed this prediction. These results support our hypothesis that recurrent network dynamics may contribute to short-term adaptation. More broadly, they show that understanding the neural processing of visual inputs that change on the time scale of a typical fixation requires a detailed analysis of not only the intrinsic properties of neurons, but also the slow and complex dynamics that emerge from their recurrent connectivity. We argue that this is but one example of how even simple recurrent networks can underlie surprisingly complex information processing, and are involved in rudimentary forms of memory, spatio-temporal integration, and signal amplification. |
Adam W. Qureshi; Rebecca L. Monk; Charlotte R. Pennington; Thomas D. W. Wilcockson; Derek Heim Alcohol-related attentional bias in a gaze contingency task: Comparing appetitive and non-appetitive cues Journal Article In: Addictive Behaviors, vol. 90, pp. 312–317, 2019. @article{Qureshi2019, Background: Non-problem drinkers attend automatically to alcohol-related cues compared to non-alcohol related cues on tests of inhibitory control. Moreover, attentional bias for alcohol-related cues varies between problem and non-problem drinkers. Aim: To examine attentional bias towards alcoholic and non-alcoholic appetitive cues between problem and non-problem drinkers. Method: Forty-one university students (9 male, 32 female; mean age = 21.50) completed an eye-tracking gaze contingency paradigm, measuring the number of times participants looked at peripherally and centrally located stimuli (break frequency) when instructed to maintain focus on a target object. Stimuli consisted of appetitive alcohol-related (e.g., wine), appetitive non-alcohol-related (e.g., cola) and non-appetitive (e.g., fabric softener) stimuli. Participants were split using the Alcohol Use Disorders Identification Test (AUDIT) into non-problem (M AUDIT = 3.86) and problematic drinkers (M AUDIT = 11.59). Results: Problematic drinkers had higher break frequencies towards peripheral appetitive stimuli than towards non-appetitive stimuli, while break frequency was equivalent between appetitive cues presented centrally (alcohol and non-alcohol-related). In contrast, there were no differences in break frequency across stimuli type or cue presentation location (central or peripheral) for non-problem drinkers. Conclusion: In contrast to non-problem drinkers, people displaying more problematic consumption practices may find it more difficult to inhibit eye movements towards appetitive stimuli, particularly when in peripheral vision. 
This may suggest that attentional biases, as measured in terms of overt eye movements, in problem drinkers may be most powerful when the alcoholic and appetitive stimuli are not directly in the field of view. An uncertainty reduction process in the allocation of attention to appetitive cues may help explain the patterns of results observed. |
Lina H. Raffa; Heather Fennell-Al Sayed; Robert LaRoche Measuring attention bias in observers of strabismus subjects Journal Article In: Journal of AAPOS, vol. 23, no. 3, pp. 1–5, 2019. @article{Raffa2019, Background: Despite the known negative psychosocial impact and the importance of facial aesthetics for individuals with strabismus, the gaze pattern of the presumed attention bias has not been documented previously. Methods: Thirty images (15 digitally reconstructed color photographs to show strabismus and 15 photographs of volunteers without strabismus) were viewed in random order by 25 naïve participants (age range, 23-63 years; 15 females). Visual scan paths of participants were recorded using an infrared corneal image eye movement recorder, and the individual parameters of saccades, fixations, and dwell time were assessed using DataViewer software. Results: Viewers primarily tended to fixate on the eyes, with the nose the next most prominent point of focus (both P < 0.001). Time to first fixation and the presence of strabismus in the images presented were significantly associated (P < 0.001). When the eyes were viewed, there was more time spent looking at the strabismic eye (P < 0.001), although the number of fixations toward the eyes did not differ significantly between normal and strabismic faces (P = 0.2). Conclusions: Our results confirm that the presence of strabismus in the features of the human face draws longer attention from the average viewer to the eye region, and particularly to the strabismic eye. |
Rishi Rajalingham; James J. DiCarlo Reversible inactivation of different millimeter-scale regions of primate IT results in different patterns of core object recognition deficits Journal Article In: Neuron, vol. 102, no. 2, pp. 493–505.e5, 2019. @article{Rajalingham2019, Rajalingham and DiCarlo show that inactivating millimeter-scale IT subregions results in selective object recognition deficits, providing direct evidence for a causal role of IT in this behavior. Inactivating different subregions resulted in different deficit patterns, revealing an underlying topographical organization. |
Abhijit Rajan; Sreenivasan Meyyappan; Harrison Walker; Immanuel Babu; Henry Samuel; Zhenhong Hu; Mingzhou Ding Neural mechanisms of internal distraction suppression in visual attention Journal Article In: Cortex, vol. 117, pp. 77–88, 2019. @article{Rajan2019, When performing a demanding cognitive task, internal distraction in the form of task-irrelevant thoughts and mind wandering can shift our attention away from the task, negatively affecting task performance. Behaviorally, individuals with higher executive function indexed by higher working memory capacity (WMC) exhibit less mind wandering during cognitive tasks, but the underlying neural mechanisms are unknown. To address this problem, we recorded functional magnetic resonance imaging (fMRI) data from subjects performing a cued visual attention task, and assessed their WMC in a separate experiment. Applying machine learning and time-series analysis techniques, we showed that (1) higher WMC individuals experienced lower internal distraction through stronger suppression of posterior cingulate cortex (PCC) activity, (2) higher WMC individuals had better neural representations of attended information as evidenced by higher multivoxel decoding accuracy of cue-related activities in the dorsal attention network (DAN), (3) the positive relationship between WMC and DAN decoding accuracy was mediated by suppression of PCC activity, (4) the dorsal anterior cingulate (dACC) was a source of top-down signals that regulate PCC activity, as evidenced by the negative association between dACC/PCC Granger-causal influence and PCC activity levels, and (5) higher WMC individuals exhibited stronger dACC/PCC Granger-causal influence. These results shed light on the neural mechanisms underlying the executive suppression of internal distraction in tasks requiring externally oriented attention and provide an explanation of the individual differences in such suppression. |
Arjun Ramakrishnan; Benjamin Y. Hayden; Michael L. Platt Local field potentials in dorsal anterior cingulate sulcus reflect rewards but not travel time costs during foraging Journal Article In: Brain and Neuroscience Advances, vol. 3, pp. 1–12, 2019. @article{Ramakrishnan2019, To maximise long-term reward rates, foragers deciding when to leave a patch must compute a decision variable that reflects both the immediately available reward and the time costs associated with travelling to the next patch. Identifying the mechanisms that mediate this computation is central to understanding how brains implement foraging decisions. We previously showed that firing rates of dorsal anterior cingulate sulcus neurons incorporate both variables. This result does not provide information about whether integration of information reflected in dorsal anterior cingulate sulcus spiking activity arises locally or whether it is inherited from upstream structures. Here, we examined local field potentials gathered simultaneously with our earlier recordings. In the majority of recording sites, local field potential spectral bands – specifically theta, beta, and gamma frequency ranges – encoded immediately available rewards but not time costs. The disjunction between information contained in spiking and local field potentials can constrain models of foraging-related processing. In particular, given the proposed link between local field potentials and inputs to a brain area, it raises the possibility that local processing within dorsal anterior cingulate sulcus serves to more fully bind immediate reward and time costs into a single decision variable. |
Michelle M. Ramey; Andrew P. Yonelinas; John M. Henderson Conscious and unconscious memory differentially impact attention: Eye movements, visual search, and recognition processes Journal Article In: Cognition, vol. 185, pp. 71–82, 2019. @article{Ramey2019, A hotly debated question is whether memory influences attention through conscious or unconscious processes. To address this controversy, we measured eye movements while participants searched repeated real-world scenes for embedded targets, and we assessed memory for each scene using confidence-based methods to isolate different states of subjective memory awareness. We found that memory-informed eye movements during visual search were predicted both by conscious recollection, which led to a highly precise first eye movement toward the remembered location, and by unconscious memory, which increased search efficiency by gradually directing the eyes toward the target throughout the search trial. In contrast, these eye movement measures were not influenced by familiarity-based memory (i.e., changes in subjective reports of memory strength). The results indicate that conscious recollection and unconscious memory can each play distinct and complementary roles in guiding attention to facilitate efficient extraction of visual information. |
Birgit Rauchbauer; Bruno Nazarian; Morgane Bourhis; Magalie Ochs; Laurent Prévot; Thierry Chaminade Brain activity during reciprocal social interaction investigated using conversational robots as control condition Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 374, pp. 1–8, 2019. @article{Rauchbauer2019, We present a novel functional magnetic resonance imaging paradigm for second-person neuroscience. The paradigm compares a human social interaction (human-human interaction, HHI) to an interaction with a conversational robot (human-robot interaction, HRI). The social interaction consists of 1 min blocks of live bidirectional discussion between the scanned participant and the human or robot agent. A final sample of 21 participants is included in the corpus comprising physiological (blood oxygen level-dependent, respiration and peripheral blood flow) and behavioural (recorded speech from all interlocutors, eye tracking from the scanned participant, face recording of the human and robot agents) data. Here, we present the first analysis of this corpus, contrasting neural activity between HHI and HRI. We hypothesized that independently of differences in behaviour between interactions with the human and robot agent, neural markers of mentalizing (temporoparietal junction (TPJ) and medial prefrontal cortex) and social motivation (hypothalamus and amygdala) would only be active in HHI. Results confirmed significantly increased response associated with HHI in the TPJ, hypothalamus and amygdala, but not in the medial prefrontal cortex. Future analysis of this corpus will include fine-grained characterization of verbal and non-verbal behaviours recorded during the interaction to investigate their neural correlates. |
Anirban Ray; Aditi Subramanian; Harleen Chhabra; John Vijay Sagar Kommu; Ganesan Venkatsubramanian; Shoba Srinath; Satish Girimaji; Shekhar P. Sheshadri; Mariamma Philip Eye movement tracking in pediatric obsessive compulsive disorder Journal Article In: Asian Journal of Psychiatry, vol. 43, pp. 9–16, 2019. @article{Ray2019, To date, researchers have elucidated the neurobiological substrates in OCD using methods like neuroimaging. However, a potential biomarker is still elusive. The present study is an attempt to identify a potential biomarker in pediatric OCD using eye tracking. The present study measured pro-saccade and anti-saccade parameters in 36 cases of pediatric OCD and 31 healthy controls. There was no significant difference between cases and controls in the error rate, peak velocity, position gain and latency measures in both pro-saccade and anti-saccade eye tracking tasks. With age, anti-saccades become slower in velocity, faster in response and more accurate irrespective of disorder status of the child. Pro-saccades also show a similar effect that is less prominent than anti-saccades. Gain measures more significantly vary with age in children with OCD than the controls, whereas latency measures positively correlated with age in children with OCD as opposed to being negatively correlated in the controls. Findings of this study do not support any of the eye tracking measures as putative diagnostic biomarkers in OCD. However, latency and gain parameters across different age groups in anti-saccade tasks need to be explored in future studies. |
Sara T. K. Li; Susana T. L. Chung; Janet H. Hsiao Music-reading expertise modulates the visual span for English letters but not Chinese characters Journal Article In: Journal of Vision, vol. 19, no. 4, pp. 1–16, 2019. @article{Li2019f, Recent research has suggested that the visual span in stimulus identification can be enlarged through perceptual learning. Since both English and music reading involve left-to-right sequential symbol processing, music-reading experience may enhance symbol identification through perceptual learning particularly in the right visual field (RVF). In contrast, as Chinese can be read in all directions, and components of Chinese characters do not consistently form a left-right structure, this hypothesized RVF enhancement effect may be limited in Chinese character identification. To test these hypotheses, here we recruited musicians and nonmusicians who read Chinese as their first language (L1) and English as their second language (L2) to identify music notes, English letters, Chinese characters, and novel symbols (Tibetan letters) presented at different eccentricities and visual field locations on the screen while maintaining central fixation. We found that in English letter identification, significantly more musicians achieved above-chance performance in the center-RVF locations than nonmusicians. This effect was not observed in Chinese character or novel symbol identification. We also found that in music note identification, musicians outperformed nonmusicians in accuracy in the center-RVF condition, consistent with the RVF enhancement effect in the visual span observed in English-letter identification. These results suggest that the modulation of music-reading experience on the visual span for stimulus identification depends on the similarities in the perceptual processes involved. |
Ya Li; Yonghui Wang; Sheng Li Recurrent processing of contour integration in the human visual cortex as revealed by fMRI-guided TMS Journal Article In: Cerebral Cortex, vol. 29, no. 1, pp. 17–26, 2019. @article{Li2019i, Contour integration is a critical step in visual perception because it groups discretely local elements into perceptually global contours. Previous investigations have suggested that striate and extrastriate visual areas are involved in this mid-level processing of visual perception. However, the temporal dynamics of these areas in the human brain during contour integration is less understood. The present study used functional magnetic resonance imaging-guided transcranial magnetic stimulation (TMS) to briefly disrupt 1 of 2 visual areas (V1/V2 and V3B) and examined the causal contributions of these areas to contour detection. The results demonstrated that the earliest critical time window at which behavioral detection performance was impaired by TMS pulses differed between V1/V2 and V3B. The first critical window of V3B (90-110 ms after stimulus onset) was earlier than that of V1/V2 (120-140 ms after stimulus onset), thus indicating that feedback connection from higher to lower area was necessary for complete contour integration. These results suggested that the fine processing of contour-related information in V1/V2 follows the generation of a coarse template in the higher visual areas, such as V3B. Our findings provide direct causal evidence that a recurrent mechanism is necessary for the integration of contours from cluttered background in the human brain. |
Zhihan Li; An Yan; Kun Guo; Wu Li Fear-related signals in the primary visual cortex Journal Article In: Current Biology, vol. 29, no. 23, pp. 4078–4083, 2019. @article{Li2019h, Neuronal responses in the primary visual cortex (V1) are driven by simple stimuli, but these stimulus-evoked responses can be markedly modulated by non-sensory factors, such as attention and reward [1], and shaped by perceptual training [2]. In real-life situations, neutral visual stimuli can become emotionally tagged by experience, resulting in altered perceptual abilities to detect and discriminate these stimuli [3–5]. Human imaging [4] and electroencephalography (EEG) studies [6–9] have shown that visual fear learning (the acquisition of aversive emotion associated with a visual stimulus) affects the activities in visual cortical areas as early as in V1. However, it remains elusive as to whether the fear-related activities seen in the early visual cortex have to do with feedback influences from other cortical areas; it is also unclear whether and how the response properties of V1 cells are modified during the fear learning. In the current study, we addressed these issues by recording from V1 of awake monkeys implanted with an array of microelectrodes. We found that responses of V1 neurons were rapidly modified when a given orientation of grating stimulus was repeatedly associated with an aversive stimulus. The output visual signals from V1 cells conveyed, from their response outset, fear-related signals that were specific to the fear-associated grating orientation and visual-field location. The specific fear signals were independent of neurons' orientation preferences and were present even though the fear-associated stimuli were rendered invisible. Our findings suggest a bottom-up mechanism that allows for proactive labeling of visual inputs that are predictive of imminent danger. |
Alfred Lim; Vivian Eng; Caitlyn Osborne; Steve M. J. Janssen; Jason Satel Inhibitory and facilitatory cueing effects: Competition between exogenous and endogenous mechanisms Journal Article In: Vision, vol. 3, pp. 40, 2019. @article{Lim2019, Inhibition of return is characterized by delayed responses to previously attended locations when the cue-target onset asynchrony (CTOA) is long enough. However, when cues are predictive of a target's location, faster reaction times to cued as compared to uncued targets are normally observed. In this series of experiments investigating saccadic reaction times, we manipulated the cue predictability to 25% (counterpredictive), 50% (nonpredictive), and 75% (predictive) to investigate the interaction between predictive endogenous facilitatory (FCEs) and inhibitory cueing effects (ICEs). Overall, larger ICEs were seen in the counterpredictive condition than in the nonpredictive condition, and no ICE was found in the predictive condition. Based on the hypothesized additivity of FCEs and ICEs, we reasoned that the null ICEs observed in the predictive condition are the result of two opposing mechanisms balancing each other out, and the large ICEs observed with counterpredictive cueing can be attributed to the combination of endogenous facilitation at uncued locations with inhibition at cued locations. Our findings suggest that the endogenous activity contributed by cue predictability can reduce the overall inhibition observed when the mechanisms occur at the same location, or enhance behavioral inhibition when the mechanisms occur at opposite locations. |
Hai Lin; Wei-ping Li; Synnöve Carlson A privileged working memory state and potential top-down modulation for faces, not scenes Journal Article In: Frontiers in Human Neuroscience, vol. 13, pp. 2, 2019. @article{Lin2019a, Top-down modulation is engaged during multiple stages of working memory (WM), including expectation, encoding, and maintenance. During the WM maintenance period, an “incidental cue” can bring one of two items into a privileged state and make the privileged item be recalled with higher precision, regardless of which item will be probed as the target. With regard to the different representational states of WM, it is unclear whether there is top-down modulation on early sensory cortical areas. Here, we used this behavioral paradigm of “incidental cue” and event-related fMRI to investigate whether there were a privileged WM state and top-down modulation for complex stimuli including faces and natural scenes. We found that faces, not scenes, could enter into the privileged state with improved accuracy and response time of WM task. Meanwhile, cue-driven baseline activity shifts in fusiform face area (FFA) were identified by univariate analysis in the recognition of privileged faces, compared to that of non-privileged ones. In addition, the functional connectivity between FFA and right inferior frontal junction (IFJ), middle frontal gyrus (MFG), inferior frontal gyrus, right intraparietal sulcus (IPS), right precuneus and supplementary motor area was significantly enhanced, corresponding to the improved WM performance. Moreover, FFA connectivity with IFJ and IPS could predict WM improvements. These findings indicated that privileged WM state and potential top-down modulation existed for faces, but not scenes, during WM maintenance period. |
Ebony R. Lindor; Nicole J. Rinehart; Joanne Fielding Distractor inhibition in Autism Spectrum Disorder: Evidence of a selective impairment for individuals with co-occurring motor difficulties Journal Article In: Journal of Autism and Developmental Disorders, vol. 49, no. 2, pp. 669–682, 2019. @article{Lindor2019a, Although most researchers agree that individuals with Autism Spectrum Disorder (ASD) exhibit atypical attention, there is little consensus on the exact nature of their deficits. We explored whether attentional control in ASD varies as a function of motor proficiency. Nineteen children with ASD and 26 typically-developing controls completed the Movement Assessment Battery for Children and two ocular motor tasks requiring them to generate a saccade toward, and fixate, a visual target in the presence or absence of a distractor. The ASD group demonstrated poorer accuracy than typically-developing controls when distractors were present. Importantly, however, ASD symptomology was only related to poorer accuracy in individuals with motor difficulties. These findings suggest that distractor inhibition may be selectively impaired in this subgroup. |
Ebony R. Lindor; Jeroen J. A. Boxtel; Nicole J. Rinehart; Joanne Fielding Motor difficulties are associated with impaired perception of interactive human movement in autism spectrum disorder: A pilot study Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 41, no. 8, pp. 856–874, 2019. @article{Lindor2019, Introduction: The ability to accurately perceive human movement is fundamental to social functioning and known to be influenced by one's own motor skills. In Autism Spectrum Disorder (ASD), there is ongoing debate about whether human movement perception is impaired. Given that motor skills vary considerably among these individuals, it may be that human movement perception is differentially affected as a function of motor proficiency. The aim of the current study was, thus, to explore whether individuals with ASD with and without motor difficulties differ in the way they visually attend to and perceive human movement. Method: Three groups of children aged 6 to 14 completed the study: an ASD group with motor difficulties (ASDMD), an ASD group without motor difficulties (ASDNMD), and a typically-developing control group (TD). All participants (N = 31) underwent eye-tracking while they viewed communicative interactions performed by two point-light actors. Primary analyses considered group differences in perceptual accuracy and gaze patterns. Results: Results revealed poorer perceptual accuracy in the ASDMD group compared to the ASDNMD and TD groups. Both ASD groups also exhibited gaze anomalies. Unlike the ASDNMD and TD groups who preferentially allocated their gaze to the actor initiating the interaction, the ASDMD group gazed at both actors equally. In contrast, the ASDNMD group shifted their gaze between the actors more frequently than the other groups. 
Conclusions: These preliminary findings suggest that individuals with ASD and co-occurring motor difficulties employ an atypical attentional style that may hinder accurate human movement perception, whereas those without motor difficulties may employ a compensatory attentional style that facilitates typical perception. Improving our understanding of how attention and perception are affected across the ASD spectrum has the potential to provide insight into the mechanisms that underlie the core social deficits that define this disorder. |
Ying Ling; Zhou Yang; Todd Jackson Visual attention to pain cues for impending touch versus impending pain: An eye tracking study Journal Article In: European Journal of Pain, vol. 23, no. 8, pp. 1527–1537, 2019. @article{Ling2019, Background: In this eye tracking study, we evaluated pain-related biases in orienting and maintenance of gaze within impending touch versus impending pain tasks and examined features of pain resilience as individual difference influences on potential biases. Methods: Gaze preferences of healthy adults (25 women and 39 men) were assessed during standardized pain-neutral (P-N) image pair presentations (2,000 ms) of an impending touch task versus an impending pain task whereby image pair offsets were followed by potential non-painful touch and potential pain stimulation, respectively. Results: Within each task, participants were significantly more likely to fixate first upon pain images in P-N pairs and maintain gaze on these images for longer overall durations during trials. Between task comparisons indicated pain-related biases in orienting and maintenance were significantly stronger when image pairs signalled potential pain rather than impending touch. Finally, within the impending pain task, higher scores on the behaviour perseverance dimension of pain resilience were related to shorter first fixation durations and overall gaze durations towards pain images. Conclusions: Supporting specific threat interpretation model premises, comparatively more threatening external pain cues for impending pain were characterized by gaze biases reflecting pronounced early attentional capture and subsequent prolonged vigilance. However, elevations in self-reported behavioural perseverance in spite of pain corresponded to an increased capacity to disengage from pain images that signalled potential pain. Significance: Gaze biases were assessed within a comparatively benign “impending touch” paradigm versus a higher threat, impending pain task. 
Early capture and maintenance of gaze towards pain images were more pronounced on the latter task, although pain resilient participants were able to disengage more easily from pain images signalling possible pain. |
Damien Litchfield; Tim Donovan Expecting the initial glimpse: Prior target knowledge activation or repeated search does not eliminate scene preview search benefits Journal Article In: Journal of Cognitive Psychology, vol. 31, no. 1, pp. 49–63, 2019. @article{Litchfield2019, A brief glimpse of a scene can guide eye movements but it remains unclear how prior target knowledge influences early scene processing. Using the ‘flash-preview moving window' (FPMW) paradigm to restrict peripheral vision during search, we manipulated whether target identity was presented before or after previews. Windowed search was more efficient following 250 ms scene previews, and knowing target identity beforehand further improved how search was initiated and executed. However, in Experiment 2 when targets were removed from scene previews, only the initiation of search continued to be modulated by prior activation of target knowledge. Experiment 3 showed that search benefits from scene previews are maintained even when repeatedly searching through the same type of scene for the same type of target. Experiment 4 replicated Experiment 3 whilst also controlling for differences in integration times. We discuss the flexibility of the FPMW paradigm to measure how the first glimpse affects search. |
Celia Litovsky; Feitong Yang; Jonathan Flombaum; Michael McCloskey Bimanual visually guided movements are more than the sum of their parts: Evidence from optic ataxia Journal Article In: Cognitive Neuropsychology, vol. 36, no. 7-8, pp. 410–420, 2019. @article{Litovsky2019, Many reaching actions involve both hands. An open question is whether two-handed reaching involves two simultaneous, independent unimanual reaches, or recruits additional or different processes than those mediating one-handed reaching. We tested optic ataxic patient MDK on a set of unimanual and bimanual reaching tasks. Although MDK was impaired in both types of reaching task, his bimanual reaching was considerably better than his unimanual reaching. These results imply that bimanual reaching involves different or additional processes relative to unimanual reaching. We suggest that bimanual reaching may involve monitoring of the distance between the two hands relative to the distance between the two targets. |
Xinge Liu; Xingfen Liang; Cong Feng; Guomei Zhou Self-construal priming affects holistic face processing and race categorization, but not face recognition Journal Article In: Frontiers in Psychology, vol. 10, pp. 1973, 2019. @article{Liu2019b, Self-construal priming can affect an individual's cognitive processing. Participants who were primed with interdependent self-construal showed more holistic process bias than those who were primed with independent self-construal. The holistic processing of a face also differs across cultures. As such, the purpose of the present study was to explore whether the cultural differences in holistic face processing can be interpreted from the perspective of self-construal, as well as to investigate the relationship between self-construal and holistic face processing/face recognition/race categorization. In Experiment 1, participants were primed with control, interdependent, or independent self-construal, respectively, and then they completed a feature-space same-different task (Experiment 1A) or a composite face effect task (Experiment 1B). Results showed no priming effect in Experiment 1A, whereas independent self-construal priming resulted in less holistic processing in Experiment 1B. In Experiment 2, participants were primed with control, collective/interdependent, relational, or independent self-construal, respectively, and then they completed a Vanderbilt Holistic Face Processing Test and Cambridge Face Memory Test. Participants who were primed as independent showed greater congruency effect than the relational group. Self-construal priming had no effect on face recognition. In Experiment 3, we manipulated self-construal in the same way as that in Experiment 2 and monitored the eye movement of Chinese participants while they learned, recognized, and categorized their own-/other-race faces. Self-construal priming had no effect on face recognition. 
Compared with other groups, collective-/interdependent-self priming increased the fixation time of eyes and decreased the fixation time of nose in the race categorization task. These results indicated that the cultural differences in self-construal could not mirror the cultural differences in face processing in a simple way. |
Sarah D. McCrackin; Roxane J. Itier Perceived gaze direction differentially affects discrimination of facial emotion, attention, and gender - An ERP study Journal Article In: Frontiers in Neuroscience, vol. 13, pp. 517, 2019. @article{McCrackin2019, The perception of eye-gaze is thought to be a key component of our everyday social interactions. While the neural correlates of direct and averted gaze processing have been investigated, there is little consensus about how these gaze directions may be processed differently as a function of the task being performed. In a within-subject design, we examined how perception of direct and averted gaze affected performance on tasks requiring participants to use directly available facial cues to infer the individuals' emotional state (emotion discrimination), direction of attention (attention discrimination) and gender (gender discrimination). Neural activity was recorded throughout the three tasks using EEG, and ERPs time-locked to face onset were analyzed. Participants were most accurate at discriminating emotions with direct gaze faces, but most accurate at discriminating attention with averted gaze faces, while gender discrimination was not affected by gaze direction. At the neural level, direct and averted gaze elicited different patterns of activation depending on the task over frontal sites, from approximately 220-290 ms. More positive amplitudes were seen for direct than averted gaze in the emotion discrimination task. In contrast, more positive amplitudes were seen for averted gaze than for direct gaze in the gender discrimination task. These findings are among the first direct evidence that perceived gaze direction modulates neural activity differently depending on task demands, and that at the behavioral level, specific gaze directions functionally overlap with emotion and attention discrimination, precursors to more elaborated theory of mind processes. |
Sinè McDougall; Judy Edworthy; Deili Sinimeri; Jamie Goodliffe; Daniel Bradley; James Foster Searching for meaning in sound: Learning and interpreting alarm signals in visual environments Journal Article In: Journal of Experimental Psychology: Applied, vol. 26, no. 1, pp. 1–19, 2019. @article{McDougall2019, Given the ease with which the diverse array of environmental sounds can be understood, the difficulties encountered in using auditory alarm signals on medical devices are surprising. In two experiments, with nonclinical participants, alarm sets which relied on similarities to environmental sounds (concrete alarms, such as a heartbeat sound to indicate "check cardiovascular function") were compared to alarms using abstract tones to represent functions on medical devices. The extent to which alarms were acoustically diverse was also examined: alarm sets were either acoustically different or acoustically similar within each set. In Experiment 1, concrete alarm sets, which were also acoustically different, were learned more quickly than abstract alarms which were acoustically similar. Importantly, the abstract similar alarms were devised using guidelines from the current global medical device standard (International Electrotechnical Commission 60601-1-8, 2012). Experiment 2 replicated these findings. In addition, eye tracking data showed that participants were most likely to fixate first on the correct medical devices in an operating theater scene when presented with concrete acoustically different alarms using real world sounds. A new set of alarms which are related to environmental sounds and differ acoustically have therefore been proposed as a replacement for the current medical device standard. |
Vincent B. McGinty Overt attention toward appetitive cues enhances their subjective value, independent of orbitofrontal cortex activity Journal Article In: eNeuro, vol. 6, no. 6, pp. 1–19, 2019. @article{McGinty2019, Neural representations of value underlie many behaviors that are crucial for survival. Previously, we found that value representations in primate orbitofrontal cortex (OFC) are modulated by attention, specifically, by overt shifts of gaze toward or away from reward-associated visual cues (McGinty et al., 2016). Here, we investigate the influence of overt attention on behavior by asking how gaze shifts correlate with reward anticipatory responses and whether activity in OFC mediates this correlation. Macaque monkeys viewed pavlovian conditioned appetitive cues on a visual display, while the fraction of time they spent looking toward or away from the cues was measured using an eye tracker. Also measured during cue presentation were the reward anticipation, indicated by conditioned licking responses (CRs), and single-neuron activity in OFC. In general, gaze allocation predicted subsequent licking responses: the longer the monkeys spent looking at a cue at a given time point in a trial, the more likely they were to produce an anticipatory CR later in that trial, as if the subjective value of the cue were increased. To address neural mechanisms, mediation analysis measured the extent to which the gaze–CR correlation could be statistically explained by the concurrently recorded firing of OFC neurons. The resulting mediation effects were indistinguishable from chance. Therefore, while overt attention may increase the subjective value of reward-associated cues (as revealed by anticipatory behaviors), the underlying mechanism remains unknown, as does the functional significance of gaze-driven modulation of OFC value signals. |
Radha Nila Meghanathan; Andrey R. Nikolaev; Cees Leeuwen Refixation patterns reveal memory-encoding strategies in free viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 81, no. 7, pp. 2499–2516, 2019. @article{Meghanathan2019, We investigated visual working memory encoding across saccadic eye movements, focusing our analysis on refixation behavior. Over 10-s periods, participants performed a visual search for three, four, or five targets and remembered their orientations for a subsequent change-detection task. In 50% of the trials, one of the targets had its orientation changed. From the visual search period, we scored three types of refixations and applied measures for quantifying eye-fixation recurrence patterns. Repeated fixations on the same regions as well as repeated fixation patterns increased with memory load. Correct change detection was associated with more refixations on targets and less on distractors, with increased frequency of recurrence, and with longer intervals between refixations. The results are in accordance with the view that patterns of eye movement are an integral part of visual working memory representation. |
Priyanka S. Mehta; Jiaxin Cindy Tu; Giuliana A. LoConte; Meghan C. Pesce; Benjamin Y. Hayden Ventromedial prefrontal cortex tracks multiple environmental variables during search Journal Article In: Journal of Neuroscience, vol. 39, no. 27, pp. 5336–5350, 2019. @article{Mehta2019, To make efficient foraging decisions, we must combine information about the values of available options with nonvalue information. Some accounts of ventromedial PFC (vmPFC) suggest that it has a narrow role limited to evaluating immediately available options. We examined responses of neurons in area 14 (a putative macaque homolog of human vmPFC) as 2 male macaques performed a novel foraging search task. Although many neurons encoded the values of immediately available offers, they also independently encoded several other variables that influence choice, but that are conceptually distinct from offer value. These variables include average reward rate, number of offers viewed per trial, previous offer values, previous outcome sizes, and the locations of the currently attended offer. We conclude that, rather than serving as a specialized economic value center, vmPFC plays a broad role in integrating relevant environmental information to drive foraging decisions. |
David Meijer; Sebastijan Veselič; Carmelo Calafiore; Uta Noppeney Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation Journal Article In: Cortex, vol. 119, pp. 74–88, 2019. @article{Meijer2019, Multisensory perception is regarded as one of the most prominent examples where human behaviour conforms to the computational principles of maximum likelihood estimation (MLE). In particular, observers are thought to integrate auditory and visual spatial cues weighted in proportion to their relative sensory reliabilities into the most reliable and unbiased percept consistent with MLE. Yet, evidence to date has been inconsistent. The current pre-registered, large-scale (N = 36) replication study investigated the extent to which human behaviour for audiovisual localization is in line with maximum likelihood estimation. The acquired psychophysics data show that while observers were able to reduce their multisensory variance relative to the unisensory variances in accordance with MLE, they weighed the visual signals significantly stronger than predicted by MLE. Simulations show that this dissociation can be explained by a greater sensitivity of standard estimation procedures to detect deviations from MLE predictions for sensory weights than for audiovisual variances. Our results therefore suggest that observers did not integrate audiovisual spatial signals weighted exactly in proportion to their relative reliabilities for localization. These small deviations from the predictions of maximum likelihood estimation may be explained by observers' uncertainty about the world's causal structure as accounted for by Bayesian causal inference. |
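The reliability-weighted integration rule that Meijer et al. test against is the standard MLE cue-combination formula: each cue is weighted by its reliability (inverse variance), and the fused estimate has lower variance than either cue alone. A minimal sketch of that textbook formula follows (this is an illustration of the general MLE prediction, not the authors' analysis code; the function name and parameters are hypothetical):

```python
def mle_integrate(mu_aud, var_aud, mu_vis, var_vis):
    """Reliability-weighted (MLE) fusion of auditory and visual location cues.

    Reliability is defined as inverse variance; each cue's weight is its
    reliability divided by the summed reliabilities. The fused variance is
    always at or below the smaller unisensory variance.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    mu_av = w_vis * mu_vis + (1.0 - w_vis) * mu_aud
    var_av = (var_vis * var_aud) / (var_vis + var_aud)
    return mu_av, var_av

# Example: a visual cue four times more reliable than the auditory cue
# receives weight 0.8, pulling the fused estimate toward the visual location.
mu, var = mle_integrate(mu_aud=0.0, var_aud=4.0, mu_vis=10.0, var_vis=1.0)
```

The paper's key finding is that observers' empirical visual weights exceeded the `w_vis` predicted by this formula, even though their multisensory variance reduction matched `var_av` reasonably well.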
Erik L. Meijs; Pim Mostert; Heleen A. Slagter; Floris P. de Lange; Simon van Gaal Exploring the role of expectations and stimulus relevance on stimulus-specific neural representations and conscious report Journal Article In: Neuroscience of Consciousness, vol. 5, no. 1, pp. 1–12, 2019. @article{Meijs2019, Subjective experience can be influenced by top-down factors, such as expectations and stimulus relevance. Recently, it has been shown that expectations can enhance the likelihood that a stimulus is consciously reported, but the neural mechanisms supporting this enhancement are still unclear. We manipulated stimulus expectations within the attentional blink (AB) paradigm using letters and combined visual psychophysics with magnetoencephalographic (MEG) recordings to investigate whether prior expectations may enhance conscious access by sharpening stimulus-specific neural representations. We further explored how stimulus-specific neural activity patterns are affected by the factors expectation, stimulus relevance and conscious report. First, we show that valid expectations about the identity of an upcoming stimulus increase the likelihood that it is consciously reported. Second, using a series of multivariate decoding analyses, we show that the identity of letters presented in and out of the AB can be reliably decoded from MEG data. Third, we show that early sensory stimulus-specific neural representations are similar for reported and missed target letters in the AB task (active report required) and an oddball task in which the letter was clearly presented but its identity was task-irrelevant. However, later sustained and stable stimulus-specific representations were uniquely observed when target letters were consciously reported (decision-dependent signal). Fourth, we show that global pre-stimulus neural activity biased perceptual decisions toward a 'seen' response. Fifth and last, no evidence was obtained for the sharpening of sensory representations by top-down expectations. We discuss these findings in light of emerging models of perception and conscious report highlighting the role of expectations and stimulus relevance. |
Tamaryn Menneer; Kyle R. Cave; Elina Kaplan; Michael J. Stroud; Junha Chang; Nick Donnelly The relationship between working memory and the dual-target cost in visual search guidance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 7, pp. 911–935, 2019. @article{Menneer2019, Searching for two targets produces a dual-target cost compared with single-target search, with reduced attentional guidance toward targets (Stroud, Menneer, Cave, & Donnelly, 2012). We explore the effect of holding a color in working memory (WM) on guidance in single-target search. In Experiments 1 and 2, participants searched for a T of a specific color while holding one of the following in WM: a color patch, a letter, a dot pattern, or an oriented bar. Only when holding a color in WM was guidance in single-target search affected as strongly as it is in dual-target search. In Experiment 3, the target changed color from trial to trial. A color in WM reduced guidance, but not to the extent of dual-target search. However, search and WM error rates were high, suggesting interference and incomplete engagement with the combined task. We conclude that the guidance cost in dual-target search is not solely due to attentional capture by the WM color, because the WM color can be effectively separated from the search color, with little confusion between the two. However, WM load does cause substantial interference in guidance when both tasks involve color. These results illustrate the complex interactions between WM and attentional guidance. |