EyeLink Non-Human Primate Publications
All EyeLink non-human primate research publications up until 2020 (with some early 2021s) are listed below by year. You can search the publications using keywords such as Temporal Cortex, Macaque, Antisaccade, etc. You can also search for individual author names. If we missed any EyeLink non-human primate article, please email us!
Milena Raffi; Andrea Meoni; Alessandro Piras
In: Neuroscience Letters, 743 , pp. 1–7, 2021.
The spatial location indicated by a visual cue can bias microsaccade directions toward or away from the cue. The aim of this work was to evaluate microsaccade characteristics during the monkey's training, investigating the relationship between a shift of attention and practice. The monkey was trained to press a lever at target onset, after which an expanding optic flow stimulus appeared to the right of the target. After a variable time delay, a visual cue appeared within the optic flow stimulus and the monkey had to release the lever within a maximum reaction time (RT) of 700 ms. In the control task, no visual cue appeared and the monkey had to attend to a change in the target color. Data were recorded over 9 months. Results revealed that RTs in the control task changed significantly across time. Microsaccade directions were significantly clustered toward the visual cue, suggesting that the animal developed an attentional bias toward the visual space where the cue appeared. Microsaccade amplitude differed significantly across time. Microsaccade peak velocity differed significantly both across time and within the time delays, indicating that the monkey made faster microsaccades when it expected the cue to appear. The number of microsaccades was significantly higher in the control task than in the discrimination task. The lack of change in microsaccade rate, duration, number, and direction across time indicates that the experience acquired while practicing the task did not influence microsaccade generation.
Amarender R Bogadhi; Leor N Katz; Anil Bollimunta; David A Leopold; Richard J Krauzlis
The evolution of the primate brain is marked by a dramatic increase in the number of neocortical areas that process visual information [1]. This cortical expansion supports two hallmarks of high-level primate vision: the ability to selectively attend to particular visual features [2] and the ability to recognize a seemingly limitless number of complex visual objects [3]. Given their prominent roles in high-level vision for primates, it is commonly assumed that these cortical processes supersede the earlier versions of these functions accomplished by the evolutionarily older brain structures that lie beneath the cortex. Contrary to this view, here we show that the superior colliculus (SC), a midbrain structure conserved across all vertebrates [4], is necessary for the normal expression of attention-related modulation and object selectivity in a newly identified region of macaque temporal cortex. Using a combination of psychophysics, causal perturbations and fMRI, we identified a localized region in the temporal cortex that is functionally dependent on the SC. Targeted electrophysiological recordings in this cortical region revealed neurons with strong attention-related modulation that was markedly reduced during attention deficits caused by SC inactivation. Many of these neurons also exhibited selectivity for particular visual objects, and this selectivity was also reduced during SC inactivation. Thus, the SC exerts a causal influence on high-level visual processing in cortex at a surprisingly late stage where attention and object selectivity converge, perhaps determined by the elemental forms of perceptual processing the SC has supported since before there was a neocortex.
Francesco Fabbrini; Rufin Vogels
In: Journal of Neurophysiology, 125 (1), pp. 1–20, 2021.
The decrease in response with stimulus repetition is a common property observed in many sensory brain areas. This repetition suppression (RS) is ubiquitous in neurons of macaque inferior temporal (IT) cortex, the end-stage of the ventral visual pathway. The neural mechanisms of RS in IT are still unclear, and one possibility is that it is inherited from areas upstream of IT that also show RS. Since neurons in IT have larger receptive fields compared to earlier visual areas, we examined the inheritance hypothesis by presenting adapter and test stimuli at widely different spatial locations along both the vertical and horizontal meridians, and across hemifields. RS was present for distances between adapter and test stimuli up to 22°, and when the two stimuli were presented in different hemifields. We also examined the position tolerance of the stimulus selectivity of adaptation by comparing the responses to a test stimulus following the same (repetition trial) or a different adapter (alternation trial) at a different position than the test stimulus. Stimulus-selective adaptation was still present, and consistently stronger in the later phase of the response, for distances up to 18°. Finally, we observed stimulus-selective adaptation in repetition trials even without a measurable excitatory response to the adapter stimulus. To accommodate these and previous data, we propose that at least part of the stimulus-selective adaptation in IT is based on short-term plasticity mechanisms within IT and/or reflects top-down activity from areas downstream of IT.
Jacob A Westerberg; Alexander Maier; Geoffrey F Woodman; Jeffrey D Schall
Performance monitoring during visual priming
In: Journal of Cognitive Neuroscience, 32 (3), pp. 515–526, 2020.
Repetitive performance of single-feature (efficient or pop-out) visual search improves response times (RTs) and accuracy. This phenomenon, known as priming of pop-out, has been demonstrated in both humans and macaque monkeys. We investigated the relationship between performance monitoring and priming of pop-out. Neuronal activity in the supplementary eye field (SEF) contributes to performance monitoring and to the generation of performance monitoring signals in the EEG. To determine whether priming depends on performance monitoring, we investigated spiking activity in SEF as well as the concurrent EEG of two monkeys performing a priming of pop-out task. We found that SEF spiking did not modulate with priming. Surprisingly, the concurrent EEG did covary with priming. Together, these results suggest that performance monitoring contributes to priming of pop-out. However, this performance monitoring seems not to be mediated by SEF. This dissociation suggests that EEG indices of performance monitoring arise from multiple, functionally distinct neural generators.
Guillaume Doucet; Roberto A Gulli; Benjamin W Corrigan; Lyndon R Duong; Julio C Martinez-Trujillo
In: Hippocampus, 30 (3), pp. 192–209, 2020.
Primates use saccades to gather information about objects and their relative spatial arrangement, a process essential for visual perception and memory. It has been proposed that signals linked to saccades reset the phase of local field potential (LFP) oscillations in the hippocampus, providing a temporal window for visual signals to activate neurons in this region and influence memory formation. We investigated this issue by measuring hippocampal LFPs and spikes in two macaques performing different tasks with unconstrained eye movements. We found that LFP phase clustering (PC) in the alpha/beta (8–16 Hz) frequencies followed foveation onsets, while PC in frequencies lower than 8 Hz followed spontaneous saccades, even on a homogeneous background. Saccades to a solid grey background were not followed by increases in local neuronal firing, whereas saccades toward appearing visual stimuli were. Finally, saccade parameters correlated with LFP phase and amplitude: saccade direction correlated with delta (≤4 Hz) phase, and saccade amplitude with theta (4–8 Hz) power. Our results suggest that signals linked to saccades reach the hippocampus, producing synchronization of delta/theta LFPs without a general activation of local neurons. Moreover, some visual inputs co-occurring with saccades produce LFP synchronization in the alpha/beta bands and elevated neuronal firing. Our findings support the hypothesis that saccade-related signals enact sensory input-dependent plasticity and therefore memory formation in the primate hippocampus.
Ramina Adam; Kevin D Johnston; Ravi S Menon; Stefan Everling
In: NeuroImage, 207 , pp. 1–17, 2020.
Visual extinction has been characterized by the failure to respond to a visual stimulus in the contralesional hemifield when presented simultaneously with an ipsilesional stimulus (Corbetta and Shulman, 2011). Unilateral damage to the macaque frontoparietal cortex commonly leads to deficits in contralesional target selection that resemble visual extinction. Recently, we showed that macaque monkeys with unilateral lesions in the caudal prefrontal cortex (PFC) exhibited contralesional target selection deficits that recovered over 2–4 months (Adam et al., 2019). Here, we investigated the longitudinal changes in functional connectivity (FC) of the frontoparietal network after a small or large right caudal PFC lesion in four macaque monkeys. We collected ultra-high-field resting-state fMRI at 7 T before the lesion and at weeks 1–16 post-lesion and compared the functional data with behavioural performance on a free-choice saccade task. We found that the pattern of frontoparietal network FC changes depended on lesion size, such that the recovery of contralesional extinction was associated with an initial increase in network FC that returned to baseline in the two small-lesion monkeys, whereas FC continued to increase throughout recovery in the two monkeys with a larger lesion. We also found that the FC between contralesional dorsolateral PFC and ipsilesional parietal cortex correlated with behavioural recovery and that the contralesional dorsolateral PFC showed increasing degree centrality with the frontoparietal network. These findings suggest that both the contralesional and ipsilesional hemispheres play an important role in the recovery of function. Importantly, optimal compensation after large PFC lesions may require greater recruitment of distant and intact areas of the frontoparietal network, whereas recovery from smaller lesions was supported by a normalization of the functional network.
Habiba Azab; Benjamin Y Hayden
In: Behavioral Neuroscience, 134 (4), pp. 296–308, 2020.
Evaluation often involves integrating multiple determinants of value, such as the different possible outcomes in risky choice. A brain region can be placed either before or after a presumed evaluation stage by measuring how responses of its neurons depend on multiple determinants of value. A brain region could also, in principle, show partial integration, which would indicate that it occupies a middle position between (preevaluative) nonintegration and (postevaluative) full integration. Existing mathematical techniques cannot distinguish full from partial integration and therefore risk misidentifying regional function. Here we use a new Bayesian regression-based approach to analyze responses of neurons in dorsal anterior cingulate cortex (dACC) to risky offers. We find that dACC neurons only partially integrate across outcome dimensions, indicating that dACC cannot be assigned to either a pre- or postevaluative position. Neurons in dACC also show putative signatures of value comparison, thereby demonstrating that comparison does not require complete evaluation before proceeding.
Marzyeh Azimi; Mariann Oemisch; Thilo Womelsdorf
In: Psychopharmacology, 237 (4), pp. 997–1010, 2020.
Rationale: Nicotinic acetylcholine receptors (nAChRs) modulate attention, memory, and higher executive functioning, but it is unclear how nACh sub-receptors mediate the different mechanisms supporting these functions. Objectives: We investigated whether selective agonists for the alpha-7 nAChR versus the alpha-4/beta-2 nAChR make unique functional contributions to value learning and attentional filtering of distractors in the nonhuman primate. Methods: Two adult rhesus macaque monkeys performed reversal learning following systemic administration of either the alpha-7 nAChR agonist PHA-543613, the alpha-4/beta-2 nAChR agonist ABT-089, or a vehicle control. Behavioral analysis quantified performance accuracy, speed of processing, reversal learning speed, the control of distractor interference, perseveration tendencies, and motivation. Results: We found that the alpha-7 nAChR agonist PHA-543613 enhanced the learning speed of feature values but did not modulate how salient distracting information was filtered from ongoing choice processes. In contrast, the selective alpha-4/beta-2 nAChR agonist ABT-089 did not affect learning speed but reduced distractibility. This dissociation was dose-dependent and evident in the absence of systematic changes in overall performance, reward intake, motivation to perform the task, perseveration tendencies, or reaction times. Conclusions: These results suggest nicotinic sub-receptor-specific mechanisms consistent with (1) alpha-4/beta-2 nAChR-specific amplification of cholinergic transients in prefrontal cortex linked to enhanced cue detection in the face of interference, and (2) alpha-7 nAChR-specific activation prolonging cholinergic transients, which could help subjects follow through with newly established attentional strategies when outcome contingencies change. These insights will be critical for developing function-specific drugs alleviating attention and learning deficits in neuropsychiatric diseases.
Pragathi Priyadharsini Balasubramani; Meghan C Pesce; Benjamin Y Hayden
In: European Journal of Neuroscience, 51 (10), pp. 2033–2051, 2020.
Stopping, or inhibition, is a form of self-control that is a core element of flexible and adaptive behavior. Its neural origins remain unclear. Some views hold that inhibition decisions reflect the aggregation of widespread and diverse pieces of information, including information arising in ostensible core reward regions (i.e., outside the canonical executive system). We recorded activity of single neurons in the orbitofrontal cortex (OFC) of macaques, a region associated with economic decisions, and whose role in inhibition is debated. Subjects performed a classic inhibition task known as the stop signal task. Ensemble decoding analyses reveal a clear firing rate pattern that distinguishes successful from failed inhibition and that begins after the stop signal and before the stop signal reaction time (SSRT). We also found a different and orthogonal ensemble pattern that distinguishes successful from failed stopping before the beginning of the trial. These signals were distinct from, and orthogonal to, value encoding, which was also observed in these neurons. The timing of the early and late signals was, respectively, consistent with the idea that neuronal activity in OFC encodes inhibition both proactively and reactively.
Kévin Blaize; Fabrice Arcizet; Marc Gesnik; Harry Ahnine; Ulisse Ferrari; Thomas Deffieux; Pierre Pouget; Frédéric Chavane; Mathias Fink; José Alain Sahel; Mickael Tanter; Serge Picaud
In: Proceedings of the National Academy of Sciences, 117 (25), pp. 14453–14463, 2020.
Deep regions of the brain are not easily accessible to investigation at the mesoscale level in awake animals or humans. We have recently developed a functional ultrasound (fUS) technique that enables imaging hemodynamic responses to visual tasks. Using fUS imaging in two awake nonhuman primates performing a passive fixation task, we constructed retinotopic maps at depth in the visual cortex (V1, V2, and V3) in the calcarine and lunate sulci. The maps could be acquired in a single one-hour session with relatively few presentations of the stimuli. The spatial resolution of the technology is illustrated by mapping patterns similar to ocular dominance (OD) columns within superficial and deep layers of the primary visual cortex. These acquisitions using fUS suggested that OD selectivity is mostly present in layer IV but with extensions into layers II/III and V. This imaging technology provides a new mesoscale approach to the mapping of brain activity at high spatiotemporal resolution in awake subjects within the whole depth of the cortex.
Amarender R Bogadhi; Antimo Buonocore; Ziad M Hafed
In: Journal of Neuroscience, 40 (49), pp. 9496–9506, 2020.
Covert and overt spatial selection behaviors are guided by both visual saliency maps derived from early visual features as well as priority maps reflecting high-level cognitive factors. However, whether mid-level perceptual processes associated with visual form recognition contribute to covert and overt spatial selection behaviors remains unclear. We hypothesized that if peripheral visual forms contribute to spatial selection behaviors, then they should do so even when the visual forms are task-irrelevant. We tested this hypothesis in male and female human subjects as well as in male macaque monkeys performing a visual detection task. In this task, subjects reported the detection of a suprathreshold target spot presented on top of one of two peripheral images, and they did so with either a speeded manual button press (humans) or a speeded saccadic eye movement response (humans and monkeys). Crucially, the two images, one with a visual form and the other with a partially phase-scrambled visual form, were completely irrelevant to the task. In both manual (covert) and oculomotor (overt) response modalities, and in both humans and monkeys, response times were faster when the target was congruent with a visual form than when it was incongruent. Importantly, incongruent targets were associated with almost all errors, suggesting that forms automatically captured selection behaviors. These findings demonstrate that mid-level perceptual processes associated with visual form recognition contribute to covert and overt spatial selection. This indicates that neural circuits associated with target selection, such as the superior colliculus, may have privileged access to visual form information. SIGNIFICANCE STATEMENT Spatial selection of visual information either with (overt) or without (covert) foveating eye movements is critical to primate behavior. 
However, it is still not clear whether spatial maps in sensorimotor regions known to guide overt and covert spatial selection are influenced by peripheral visual forms. We probed the ability of humans and monkeys to perform overt and covert target selection in the presence of spatially congruent or incongruent visual forms. Even when completely task-irrelevant, images of visual objects had a dramatic effect on target selection, acting much like spatial cues used in spatial attention tasks. Our results demonstrate that traditional brain circuits for orienting behaviors, such as the superior colliculus, likely have privileged access to visual object representations.
Sophie Brulé; Bastien Herlin; Pierre Pouget; Marcus Missal
Ketamine reduces temporal expectation in the rhesus monkey
In: Psychopharmacology, pp. 1–9, 2020.
Rationale: Ketamine, a well-known dissociative general anesthetic agent that is a non-competitive antagonist of the N-methyl-D-aspartate receptor, perturbs the perception of elapsed time and the expectation of upcoming events. Objective: The objective of this study was to determine the influence of ketamine on temporal expectation in the rhesus monkey. Methods: Two rhesus monkeys were trained to make a saccade between a central warning stimulus and an eccentric visual target that served as the imperative stimulus. The delay between the warning and the imperative stimulus randomly took one of four different values with the same probability (variable foreperiod paradigm). During experimental sessions, a low subanesthetic dose of ketamine (0.25–0.35 mg/kg) was injected i.m. and the influence of the drug on movement latency was measured. Results: We found that in the control conditions, saccadic latencies strongly decreased with elapsed time before the appearance of the visual target, showing that temporal expectation built up during the delay period between the warning and the imperative stimulus. After ketamine injection, however, temporal expectation was significantly reduced in both subjects. In addition, ketamine also increased average movement latency, but this effect could be dissociated from the reduction of temporal expectation. Conclusion: A subanesthetic dose of ketamine could have two independent effects: increasing reaction time and decreasing temporal expectation. This alteration of temporal expectation could explain cognitive deficits observed during ketamine use.
Ting Yu Chang; Raymond Doudlah; Byounghoon Kim; Adhira Sunkara; Lowell W Thompson; Meghan E Lowe; Ari Rosenberg
In: eLife, 9 , pp. 1–27, 2020.
Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
Ting Yu Chang; Lowell Thompson; Raymond Doudlah; Byounghoon Kim; Adhira Sunkara; Ari Rosenberg
In: eNeuro, 7 (1), pp. 1–18, 2020.
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear whether this is true for non-human primates (NHPs). Here, we assessed 3D perception in macaque monkeys using a planar surface orientation discrimination task. Perception was accurate across a wide range of spatial poses (orientations and distances), but precision was highly dependent on the plane's pose. The monkeys achieved robust 3D perception by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. Errors in performance could be explained by a prior resembling the 3D orientation statistics of natural scenes. We used neural network simulations based on 3D orientation-selective neurons recorded from the same monkeys to assess how neural computation might constrain perception. The perceptual data were consistent with a model in which the responses of two independent neuronal populations representing stereoscopic cues and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) were optimally integrated through linear summation. Perception of combined-cue stimuli was optimal given this architecture. However, an alternative architecture in which stereoscopic cues, left eye perspective cues, and right eye perspective cues were represented by three independent populations yielded two times greater precision than the monkeys. This result suggests that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.
Chih-Yang Chen; Denis Matrov; Richard Veale; Hirotaka Onoe; Masatoshi Yoshida; Kenichiro Miura; Tadashi Isa
In: Journal of Neurophysiology, 2020.
The saccade is a stereotypic behavior whose investigation improves our understanding of how primate brains implement precise motor control. Furthermore, saccades offer an important window into the cognitive and attentional state of the brain. Historically, saccade studies have largely relied on macaques. However, the cortical network giving rise to the saccadic command is difficult to study in macaques because the relevant cortical areas lie in sulci and are difficult to access. Recently, a New World monkey, the marmoset, has garnered attention as an attractive alternative to the macaque because of its smooth cortical surface, its smaller body, and its amenability to transgenic technology. However, adoption of the marmoset for oculomotor research has been limited by a lack of in-depth descriptions of marmoset saccade kinematics and of its ability to perform psychophysical and cognitive tasks. Here, we directly compare the free-viewing and visually guided behavior of marmosets, macaques, and humans engaged in identical tasks under similar conditions. In the video free-viewing task, all species exhibited qualitatively similar saccade kinematics, including the saccade main sequence, up to 25° in amplitude. Furthermore, the conventional bottom-up saliency model predicted gaze targets at similar rates for all species. We further verified their visually guided behavior by training them on step and gap saccade tasks. All species showed a similar gap effect and express saccades in the gap paradigm. Our results suggest that the three species have similar natural and task-guided visuomotor behavior. The marmoset can be trained on saccadic tasks and thus can serve as a model for oculomotor, attention, and cognitive research.
Xiaomo Chen; Marc Zirnsak; Gabriel M Vega; Eshan Govil; Stephen G Lomber; Tirin Moore
In: Neuron, 106 (1), pp. 177–187, 2020.
Unique stimuli stand out. Despite an abundance of competing sensory stimuli, the detection of the most salient ones occurs without effort, and that detection contributes to the guidance of adaptive behavior. Neurons sensitive to the salience of visual stimuli are widespread throughout the primate visual system and are thought to shape the selection of visual targets. However, a neural source of salience remains elusive. In an attempt to identify a source of visual salience, we reversibly inactivated parietal cortex and simultaneously recorded salience signals in prefrontal cortex. Inactivation of parietal cortex not only caused pronounced and selective reductions of salience signals in prefrontal cortex but also diminished the influence of salience on visually guided behavior. These observations demonstrate a causal role of parietal cortex in regulating salience signals within the brain and in controlling salience-driven behavior.
Xiaomo Chen; Marc Zirnsak; Gabriel M Vega; Tirin Moore
In: Progress in Neurobiology, 195 , pp. 1–10, 2020.
The consequences of individual actions are typically unknown until well after they are executed. This fact necessitates a mechanism that bridges delays between specific actions and reward outcomes. We looked for the presence of such a mechanism in the post-movement activity of neurons in the frontal eye field (FEF), a visuomotor area in prefrontal cortex. Monkeys performed an oculomotor gamble task in which they made eye movements to different locations associated with dynamically varying reward outcomes. Behavioral data showed that monkeys tracked reward history and made choices according to their own risk preferences. Consistent with previous studies, we observed that the activity of FEF neurons is correlated with the expected reward value of different eye movements before a target appears. Moreover, we observed that the activity of FEF neurons continued to signal the direction of eye movements, the expected reward value, and their interaction well after the movements were completed and when targets were no longer within the neuronal response field. In addition, this post-movement information was also observed in local field potentials, particularly in low-frequency bands. These results show that neural signals of prior actions and expected reward value persist across delays between those actions and their experienced outcomes. These memory traces may serve a role in reward-based learning in which subjects need to learn actions predicting delayed reward.
E Cleeren; I D Popivanov; W Van Paesschen; Peter Janssen
In: Scientific Reports, 10 , pp. 1–11, 2020.
Visual information reaches the amygdala through the various stages of the ventral visual stream. There is, however, evidence that a fast subcortical pathway for the processing of emotional visual input exists. To explore the presence of this pathway in primates, we recorded local field potentials in the amygdala of four rhesus monkeys during a passive fixation task showing images of ten object categories. Additionally, in one of the monkeys we also obtained multi-unit spiking activity during the same task. We observed remarkably fast medium- and high-gamma responses in the amygdala of the four monkeys. These responses were selective for the different stimulus categories, showed within-category selectivity, and peaked as early as 60 ms after stimulus onset. Multi-unit responses in the amygdala lagged the gamma responses by about 40 ms. Thus, these observations add further evidence that selective visual information reaches the amygdala of nonhuman primates through a very fast route.
Benjamin R Cowley; Adam C Snyder; Katerina Acar; Ryan C Williamson; Byron M Yu; Matthew A Smith
In: Neuron, 108 (3), pp. 551–567, 2020.
An animal's decision depends not only on incoming sensory evidence but also on its fluctuating internal state. This state embodies multiple cognitive factors, such as arousal and fatigue, but it is unclear how these factors influence the neural processes that encode sensory stimuli and form a decision. We discovered that, unprompted by task conditions, animals slowly shifted their likelihood of detecting stimulus changes over the timescale of tens of minutes. Neural population activity from visual area V4, as well as from prefrontal cortex, slowly drifted together with these behavioral fluctuations. We found that this slow drift, rather than altering the encoding of the sensory stimulus, acted as an impulsivity signal, overriding sensory evidence to dictate the final decision. Overall, this work uncovers an internal state embedded in population activity across multiple brain areas and sheds further light on how internal states contribute to the decision-making process.
Olga Dal Monte; Cheng C J Chu; Nicholas A Fagan; Steve W C Chang
In: Nature Neuroscience, 23 (4), pp. 565–574, 2020.
Social behaviors recruit multiple cognitive operations that require interactions between cortical and subcortical brain regions. Interareal synchrony may facilitate such interactions between cortical and subcortical neural populations. However, it remains unknown how neurons from different nodes in the social brain network interact during social decision-making. Here we investigated oscillatory neuronal interactions between the basolateral amygdala and the rostral anterior cingulate gyrus of the medial prefrontal cortex while monkeys expressed context-dependent positive or negative other-regarding preference (ORP), whereby decisions affected the reward received by another monkey. Synchronization between the two nodes was enhanced for a positive ORP but suppressed for a negative ORP. These interactions occurred in beta and gamma frequency bands depending on the area contributing the spikes, exhibited a specific directionality of information flow associated with a positive ORP and could be used to decode social decisions. These findings suggest that specialized coordination in the medial prefrontal–amygdala network underlies social-decision preferences.
Becket R Ebitz; Jiaxin Cindy Tu; Benjamin Y Hayden
Rules warp feature encoding in decision-making circuits
In: PLOS Biology, 18 (11), pp. 1–38, 2020.
We have the capacity to follow arbitrary stimulus–response rules, meaning simple policies that guide our behavior. Rule identity is broadly encoded across decision-making circuits, but there are less data on how rules shape the computations that lead to choices. One idea is that rules could simplify these computations. When we follow a rule, there is no need to encode or compute information that is irrelevant to the current rule, which could reduce the metabolic or energetic demands of decision-making. However, it is not clear if the brain can actually take advantage of this computational simplicity. To test this idea, we recorded from neurons in 3 regions linked to decision-making, the orbitofrontal cortex (OFC), ventral striatum (VS), and dorsal striatum (DS), while macaques performed a rule-based decision-making task. Rule-based decisions were identified via modeling rules as the latent causes of decisions. This left us with a set of physically identical choices that maximized reward and information, but could not be explained by simple stimulus–response rules. Contrasting rule-based choices with these residual choices revealed that following rules (1) decreased the energetic cost of decision-making; and (2) expanded rule-relevant coding dimensions and compressed rule-irrelevant ones. Together, these results suggest that we use rules, in part, because they reduce the costs of decision-making through a distributed representational warping in decision-making circuits.
Steven P Errington; Geoffrey F Woodman; Jeffrey D Schall
In: Journal of Neuroscience, 40 (48), pp. 9272–9282, 2020.
The neural mechanisms of executive and motor control concern both basic researchers and clinicians. In human studies, preparation and cancellation of movements are accompanied by changes in the β-frequency band (15–29 Hz) of the electroencephalogram (EEG). Previous studies with human participants performing stop signal (countermanding) tasks have described reduced frequency of transient β-bursts over sensorimotor cortical areas before movement initiation and increased β-bursting over medial frontal areas with movement cancellation. This modulation has been interpreted as contributing to the trial-by-trial control of behavior. We performed identical analyses of EEG recorded over the frontal lobe of macaque monkeys (one male, one female) performing a saccade countermanding task. While we replicate the occurrence and modulation of β-bursts associated with initiation and cancellation of saccades, we found that β-bursts occur too infrequently to account for the observed stopping behavior. We also found β-bursts were more common after errors, but their incidence was unrelated to response time (RT) adaptation. These results demonstrate the homology of this EEG signature between humans and macaques but raise questions about the current interpretation of β-band functional significance.
Katharine A Shapcott; Joscha T Schmiedt; Kleopatra Kouroupaki; Ricardo Kienitz; Andreea Lazar; Wolf Singer; Michael C Schmid
In: Cerebral Cortex, 30 (9), pp. 4871–4881, 2020.
In order for organisms to survive, they need to detect rewarding stimuli, for example, food or a mate, in a complex environment with many competing stimuli. These rewarding stimuli should be detected even if they are nonsalient or irrelevant to the current goal. The value-driven theory of attentional selection proposes that this detection takes place through reward-associated stimuli automatically engaging attentional mechanisms. How this is achieved in the brain, however, is not well understood. Here, we investigate the effect of differential reward on the multiunit activity in visual area V4 of monkeys performing a perceptual judgment task. Surprisingly, instead of finding reward-related increases in neural responses to the perceptual target, we observed a large suppression at the onset of the reward-indicating cues. Therefore, while previous research showed that reward increases neural activity, here we report a decrease. More suppression was caused by cues associated with higher reward than with lower reward, although neither cue was informative about the perceptually correct choice. This finding of reward-associated neural suppression further highlights normalization as a general cortical mechanism and is consistent with predictions of the value-driven attention theory.
Zhenhua Shi; Xiaomo Chen; Changming Zhao; He He; Veit Stuphorn; Dongrui Wu
In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28 (9), pp. 1908–1920, 2020.
Multi-view learning improves the learning performance by utilizing multi-view data: data collected from multiple sources, or feature sets extracted from the same data source. This approach is suitable for primate brain state decoding using cortical neural signals. This is because the complementary components of simultaneously recorded neural signals, local field potentials (LFPs) and action potentials (spikes), can be treated as two views. In this paper, we extended the broad learning system (BLS), a recently proposed wide neural network architecture, from single-view learning to multi-view learning, and validated its performance in decoding monkeys' oculomotor decision from medial frontal LFPs and spikes. We demonstrated that medial frontal LFPs and spikes in the non-human primate do contain complementary information about the oculomotor decision, and that the proposed multi-view BLS is a more effective approach for decoding the oculomotor decision than several classical and state-of-the-art single-view and multi-view learning approaches.
Ramona Siebert; Nick Taubert; Silvia Spadacenta; Peter W Dicke; Martin A Giese; Peter Thier
In: eNeuro, 7 (4), pp. 1–17, 2020.
Research on social perception in monkeys may benefit from standardized, controllable, and ethologically valid renditions of conspecifics offered by monkey avatars. However, previous work has cautioned that monkeys, like humans, show an adverse reaction toward realistic synthetic stimuli, known as the “uncanny valley” effect. We developed an improved naturalistic rhesus monkey face avatar capable of producing facial expressions (fear grin, lip smack and threat), animated by motion capture data of real monkeys. For validation, we additionally created decreasingly naturalistic avatar variants. Eight rhesus macaques were tested on the various videos and avoided looking at less naturalistic avatar variants, but not at the most naturalistic or the most unnaturalistic avatar, indicating an uncanny valley effect for the less naturalistic avatar versions. The avoidance was deepened by motion and accompanied by physiological arousal. Only the most naturalistic avatar evoked facial expressions comparable to those toward the real monkey videos. Hence, our findings demonstrate that the uncanny valley reaction in monkeys can be overcome by a highly naturalistic avatar.
Cheng Tang; Roger Herikstad; Aishwarya Parthasarathy; Camilo Libedinsky; Shih Cheng Yen
In: eLife, 9, pp. 1–23, 2020.
The lateral prefrontal cortex is involved in the integration of multiple types of information, including working memory and motor preparation. However, it is not known how downstream regions can extract one type of information without interference from the others present in the network. Here, we show that the lateral prefrontal cortex of non-human primates contains two minimally dependent low-dimensional subspaces: one that encodes working memory information, and another that encodes motor preparation information. These subspaces capture all the information about the target in the delay periods, and the information in both subspaces is reduced in error trials. A single population of neurons with mixed selectivity forms both subspaces, but the information is kept largely independent from each other. A bump attractor model with divisive normalization replicates the properties of the neural data. These results provide new insights into neural processing in prefrontal regions.
David A Tovar; Jacob A Westerberg; Michele A Cox; Kacie Dougherty; Thomas A Carlson; Mark T Wallace; Alexander Maier
In: Frontiers in Systems Neuroscience, 14, pp. 1–14, 2020.
Most of the mammalian neocortex comprises a highly similar anatomical structure, consisting of a granular cell layer between superficial and deep layers. Even so, different cortical areas process different information. Taken together, this suggests that cortex features a canonical functional microcircuit that supports region-specific information processing. For example, the primate primary visual cortex (V1) combines the two eyes' signals, extracts stimulus orientation, and integrates contextual information such as visual stimulation history. These processes co-occur during the same laminar stimulation sequence that is triggered by the onset of visual stimuli. Yet, we still know little regarding the laminar processing differences that are specific to each of these types of stimulus information. Univariate analysis techniques have provided great insight by examining one electrode at a time or by studying average responses across multiple electrodes. Here we focus on multivariate statistics to examine response patterns across electrodes instead. Specifically, we applied multivariate pattern analysis (MVPA) to linear multielectrode array recordings of laminar spiking responses to decode information regarding the eye-of-origin, stimulus orientation, and stimulus repetition. MVPA differs from conventional univariate approaches in that it examines patterns of neural activity across simultaneously recorded electrode sites. We were curious whether this added dimensionality could reveal neural processes on the population level that are challenging to detect when measuring brain activity without the context of neighboring recording sites. We found that eye-of-origin information was decodable for the entire duration of stimulus presentation, but diminished in the deepest layers of V1. Conversely, orientation information was transient and equally pronounced along all layers.
More importantly, using time-resolved MVPA, we were able to evaluate laminar response properties beyond those yielded by univariate analyses. Specifically, we performed a time generalization analysis by training a classifier at one point of the neural response and testing its performance throughout the remaining period of stimulation. Using this technique, we demonstrate repeating (reverberating) patterns of neural activity that have not previously been observed using standard univariate approaches.
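The time-generalization analysis described in this abstract has a simple core: train a linear decoder at one time bin, then test it at every other bin, producing an accuracy matrix whose off-diagonal structure reveals recurring (reverberating) response patterns. The sketch below illustrates the idea on simulated data; the electrode counts, time bins, and choice of logistic-regression decoder are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated laminar array data: trials x electrodes x time bins.
# Binary labels stand in for a stimulus feature such as eye-of-origin.
n_trials, n_elec, n_time = 200, 16, 20
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_elec, n_time))
# Inject a label-dependent response on half the electrodes during bins 5-14.
X[y == 1, :8, 5:15] += 1.0

def temporal_generalization(X, y, n_train=150):
    """Train a decoder at each time bin and test it at every bin.

    Returns an (n_time, n_time) accuracy matrix: row = training bin,
    column = testing bin. Off-diagonal structure shows whether a
    response pattern recurs later in the trial.
    """
    n_time = X.shape[2]
    acc = np.zeros((n_time, n_time))
    for t_train in range(n_time):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[:n_train, :, t_train], y[:n_train])
        for t_test in range(n_time):
            acc[t_train, t_test] = clf.score(X[n_train:, :, t_test], y[n_train:])
    return acc

acc = temporal_generalization(X, y)
```

With this toy signal, decoding is high within the injected window and near chance outside it; in real data, a square block of high accuracy indicates a stable code, while recurring off-diagonal stripes indicate reverberation.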
Pedro G Vieira; Matthew R Krause; Christopher C Pack
In: PLoS Biology, 18 (10), pp. 1–14, 2020.
Transcranial alternating current stimulation (tACS) modulates brain activity by passing electrical current through electrodes that are attached to the scalp. Because it is safe and noninvasive, tACS holds great promise as a tool for basic research and clinical treatment. However, little is known about how tACS ultimately influences neural activity. One hypothesis is that tACS affects neural responses directly, by producing electrical fields that interact with the brain's endogenous electrical activity. By controlling the shape and location of these electric fields, one could target brain regions associated with particular behaviors or symptoms. However, an alternative hypothesis is that tACS affects neural activity indirectly, via peripheral sensory afferents. In particular, it has often been hypothesized that tACS acts on sensory fibers in the skin, which in turn provide rhythmic input to central neurons. In this case, there would be little possibility of targeted brain stimulation, as the regions modulated by tACS would depend entirely on the somatosensory pathways originating in the skin around the stimulating electrodes. Here, we directly test these competing hypotheses by recording single-unit activity in the hippocampus and visual cortex of alert monkeys receiving tACS. We find that tACS entrains neuronal activity in both regions, so that cells fire synchronously with the stimulation. Blocking somatosensory input with a topical anesthetic does not significantly alter these neural entrainment effects. These data are therefore consistent with the direct stimulation hypothesis and suggest that peripheral somatosensory stimulation is not required for tACS to entrain neurons.
Benjamin Voloh; Mariann Oemisch; Thilo Womelsdorf
In: Nature Communications, 11, pp. 1–16, 2020.
The prefrontal cortex and striatum form a recurrent network whose spiking activity encodes multiple types of learning-relevant information. This spike-encoded information is evident in average firing rates, but finer temporal coding might allow multiplexing and enhanced readout across the connected network. We tested this hypothesis in the fronto-striatal network of nonhuman primates during reversal learning of feature values. We found that populations of neurons encoding choice outcomes, outcome prediction errors, and outcome history in their firing rates also carry significant information in their phase-of-firing at a 10–25 Hz band-limited beta frequency at which they synchronize across lateral prefrontal cortex, anterior cingulate cortex and anterior striatum when outcomes were processed. The phase-of-firing code exceeds information that can be obtained from firing rates alone and is evident for inter-areal connections between anterior cingulate cortex, lateral prefrontal cortex and anterior striatum. For the majority of connections, the phase-of-firing information gain is maximal at phases of the beta cycle that were offset from the preferred spiking phase of neurons. Taken together, these findings document enhanced information of three important learning variables at specific phases of firing in the beta cycle at an inter-areally shared beta oscillation frequency during goal-directed behavior.
Steven Wiesner; Ian W Baumgart; Xin Huang
In: Journal of Neuroscience, 40 (9), pp. 1834–1848, 2020.
Natural scenes often contain multiple objects and surfaces. However, how neurons in the visual cortex represent multiple visual stimuli is not well understood. Previous studies have shown that, when multiple stimuli compete in one feature domain, the evoked neuronal response is biased toward the stimulus that has a stronger signal strength. We recorded from two male macaques to investigate how neurons in the middle temporal cortex (MT) represent multiple stimuli that compete in more than one feature domain. Visual stimuli were two random-dot patches moving in different directions. One stimulus had low luminance contrast and moved with high coherence, whereas the other had high contrast and moved with low coherence. We found that how MT neurons represent multiple stimuli depended on the spatial arrangement. When two stimuli were overlapping, MT responses were dominated by the stimulus component that had high contrast. When two stimuli were spatially separated within the receptive fields, the contrast dominance was abolished. We found the same results when using contrast to compete with motion speed. Our neural data and computer simulations using a V1-MT model suggest that the contrast dominance found with overlapping stimuli is due to normalization occurring at an input stage fed to MT, and MT neurons cannot overturn this bias based on their own feature selectivity. The interaction between spatially separated stimuli can largely be explained by normalization within MT. Our results revealed new rules on stimulus competition and highlighted the impact of hierarchical processing on representing multiple stimuli in the visual cortex.
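The normalization account invoked in this abstract can be illustrated with the standard divisive-normalization equation, in which each stimulus component's contribution to the pooled response is weighted by its contrast raised to a power. This is a generic textbook sketch (the exponent, semi-saturation constant, and firing rates below are illustrative), not the authors' fitted V1-MT model.

```python
def normalized_response(r1, r2, c1, c2, n=2.0, sigma=0.1):
    """Pooled response to two superimposed stimulus components.

    r1, r2: responses each component would evoke alone (spikes/s)
    c1, c2: component contrasts in [0, 1]
    n: exponent; sigma: semi-saturation constant
    """
    w1, w2 = c1 ** n, c2 ** n
    return (w1 * r1 + w2 * r2) / (w1 + w2 + sigma ** n)

# A high-contrast component (r = 10) paired with a low-contrast one
# (r = 50): the pooled response is pulled toward the high-contrast
# component's rate, reproducing the contrast dominance described above.
dominated = normalized_response(10, 50, c1=0.8, c2=0.1)
balanced = normalized_response(10, 50, c1=0.5, c2=0.5)
```

When contrasts are equal, the pooled response sits near the average of the component responses; when one contrast dominates, so does that component's rate.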
Vanessa A D Wilson; Carolin Kade; Sebastian Moeller; Stefan Treue; Igor Kagan; Julia Fischer
In: Frontiers in Psychology, 11, pp. 1–13, 2020.
Following the expanding use and applications of virtual reality in everyday life, realistic virtual stimuli are of increasing interest in cognitive studies. They allow for control of features such as gaze, expression, appearance, and movement, which may help to overcome limitations of using photographs or video recordings to study social responses. When using virtual stimuli, however, one must be careful to avoid the uncanny valley effect, where realistic stimuli can be perceived as eerie, and induce an aversion response. At the same time, it is important to establish whether responses to virtual stimuli mirror responses to depictions of a real conspecific. In the current study, we describe the development of a new virtual monkey head with realistic facial features for experiments with nonhuman primates, the “Primatar.” As a first step toward validation, we assessed how monkeys respond to facial images of a prototype of this Primatar compared to images of real monkeys (RMs), and an unrealistic model. We also compared gaze responses between original images and scrambled as well as obfuscated versions of these images. We measured looking time to images in six freely moving long-tailed macaques (Macaca fascicularis) and gaze exploration behavior in three rhesus macaques (Macaca mulatta). Both groups showed more signs of overt attention to original images than scrambled or obfuscated images. In addition, we found no evidence for an uncanny valley effect: for both groups, looking times did not differ between real, realistic, and unrealistic images. These results provide important data for further development of our Primatar for use in social cognition studies and more generally for cognitive research with virtual stimuli in nonhuman primates. Future research on the absence of an uncanny valley effect in macaques is needed to elucidate the roots of this mechanism in humans.
Seng Bum Michael Yoo; Benjamin Y Hayden
In: Neuron, 105 (4), pp. 1–13, 2020.
Economic choice proceeds from evaluation, in which we contemplate options, to selection, in which we weigh options and choose one. These stages must be differentiated so that decision makers do not proceed to selection before evaluation is complete. We examined responses of neurons in two core reward regions, orbitofrontal (OFC) and ventromedial prefrontal cortex (vmPFC), during two-option choice with asynchronous offer presentation. Our data suggest that neurons selective during the first (presumed evaluation) and second (presumed comparison and selection) offer epochs come from a single pool. Stage transition is accompanied by a shift toward orthogonality in the low-dimensional population response manifold. Nonetheless, the relative position of each option in driving responses in the population subspace is preserved. The orthogonalization we observe supports the hypothesis that the transition from evaluation to selection leads to reorganization of response subspace and suggests a mechanism by which value-related signals are prevented from prematurely driving choice.
Mengxi Yun; Takashi Kawai; Masafumi Nejime; Hiroshi Yamada; Masayuki Matsumoto
In: Science Advances, 6, pp. 1–15, 2020.
When we make economic choices, the brain first evaluates available options and then decides whether to choose them. Midbrain dopamine neurons are known to reinforce economic choices through their signal evoked by outcomes after decisions are made. However, although critical internal processing is executed while decisions are being made, little is known about the role of dopamine neurons during this period. We found that dopamine neurons exhibited dynamically changing signals related to the internal processing while rhesus monkeys were making decisions. These neurons encoded the value of an option immediately after it was offered and then gradually changed their activity to represent the animal's upcoming choice. Similar dynamics were observed in the orbitofrontal cortex, a center for economic decision-making, but the value-to-choice signal transition was completed earlier in dopamine neurons. Our findings suggest that dopamine neurons are a key component of the neural network that makes choices from values during ongoing decision-making processes.
Polina Zamarashkina; Dina V Popovkina; Anitha Pasupathy
In: Journal of Neurophysiology, 123 (6), pp. 2311–2325, 2020.
In the primate visual cortex, both the magnitude of the neuronal response and its timing can carry important information about the visual world, but studies typically focus only on response magnitude. Here, we examine the onset and offset latency of the responses of neurons in area V4 of awake, behaving macaques across several experiments in the context of a variety of stimuli and task paradigms. Our results highlight distinct contributions of stimuli and tasks to V4 response latency. We found that response onset latencies are shorter than typically cited (median = 75.5 ms), supporting a role for V4 neurons in rapid object and scene recognition functions. Moreover, onset latencies are longer for smaller stimuli and stimulus outlines, consistent with the hypothesis that longer latencies are associated with higher spatial frequency content. Strikingly, we found that onset latencies showed no significant dependence on stimulus occlusion, unlike in inferotemporal cortex, nor on task demands. Across the V4 population, onset latencies had a broad distribution, reflecting the diversity of feedforward, recurrent, and feedback connections that inform the responses of individual neurons. Response offset latencies, on the other hand, displayed the opposite tendency in their relationship to stimulus and task attributes: they are less influenced by stimulus appearance but are shorter in guided saccade tasks compared with fixation tasks. The observation that response latency is influenced by stimulus- and task-associated factors emphasizes a need to examine response timing alongside firing rate in determining the functional role of area V4. NEW & NOTEWORTHY Onset and offset timing of neuronal responses can provide information about the visual environment and a neuron's role in visual processing and its anatomical connectivity.
In the first comprehensive examination of onset and offset latencies in the intermediate visual cortical area V4, we find neurons respond faster than previously reported, making them ideally suited to contribute to rapid object and scene recognition. While response onset reflects stimulus characteristics, timing of response offset is influenced more by behavioral task.
Armin Najarpour Foroushani; Sujaya Neupane; Pablo De Heredia Pastor; Christopher C Pack; Mohamad Sawan
In: Journal of Neural Engineering, 17 (2), pp. 1–23, 2020.
Objective. An important challenge for the development of cortical visual prostheses is to generate spatially localized percepts of light, using artificial stimulation. Such percepts are called phosphenes, and the goal of prosthetic applications is to generate a pattern of phosphenes that matches the structure of the retinal image. A preliminary step in this process is to understand how the spatial positions of phosphene-like visual stimuli are encoded in the distributed activity of cortical neurons. The spatial resolution with which the distributed responses discriminate positions puts a limit on the capability of visual prosthesis devices to induce phosphenes at multiple positions. While most previous prosthetic devices have targeted the primary visual cortex, the extrastriate cortex has the advantage of covering a large part of the visual field with a smaller amount of cortical tissue, providing the possibility of a more compact implant. Here, we studied how well ensembles of local field potential (LFP) and multiunit activity (MUA) responses from extrastriate cortical visual area V4 of a behaving macaque monkey can discriminate between two-dimensional spatial positions. Approach. We used support vector machines (SVM) to determine the capabilities of LFPs and MUA to discriminate responses to phosphene-like stimuli (probes) at different spatial separations. We proposed a selection strategy based on the combined responses of multiple electrodes and used the linear learning weights to find the minimum number of electrodes for fine and coarse discriminations. We also measured the contribution of correlated trial-to-trial variability in the responses to the discrimination performance for MUA and LFP. Main results. We found that despite the large receptive field sizes in V4, the combined responses from multiple sites, whether MUA or LFP, are capable of fine and coarse discrimination of positions. 
Our electrode selection procedure significantly increased discrimination performance while reducing the required number of electrodes. Analysis of noise correlations in MUA and LFP responses showed that noise correlations in LFPs carry more information about spatial positions. Significance. This study determined the coding strategy for fine discrimination, suggesting that spatial positions could be well localized with patterned stimulation in extrastriate area V4. It also provides a novel approach to build a compact prosthesis with relatively few electrodes, which has the potential advantage of reducing tissue damage in real applications.
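The weight-based electrode selection strategy described here can be sketched in a few lines: fit a linear SVM on all channels, rank channels by the magnitude of their learned weights, and retrain on the top-ranked subset. The simulated responses, channel counts, and effect sizes below are illustrative assumptions, not the authors' recordings.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Simulated multiunit responses from 64 recording sites to probes
# flashed at one of two positions; only the first 10 sites are informative.
n_trials, n_sites = 300, 64
pos = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_sites))
X[pos == 1, :10] += 0.8

train, test = slice(0, 200), slice(200, None)

# Fit a linear SVM on all sites, then rank sites by |learned weight|.
svm_all = LinearSVC(dual=False).fit(X[train], pos[train])
ranked = np.argsort(-np.abs(svm_all.coef_[0]))

# Retrain on the top-k sites only: fewer electrodes, similar accuracy.
k = 10
svm_top = LinearSVC(dual=False).fit(X[train][:, ranked[:k]], pos[train])
acc_all = svm_all.score(X[test], pos[test])
acc_top = svm_top.score(X[test][:, ranked[:k]], pos[test])
```

In this toy setting the top-10 decoder matches the full 64-channel decoder, mirroring the paper's point that a small, well-chosen electrode subset can preserve discrimination performance.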
Mathilda Froesel; Quentin Goudard; Marc Hauser; Maëva Gacoin; Suliann Ben Hamed
In: Scientific Reports, 10, pp. 1–11, 2020.
Heart rate (HR) is extremely valuable in the study of complex behaviours and their physiological correlates in non-human primates. However, collecting this information is often challenging, involving either invasive implants or tedious behavioural training. In the present study, we implement an Eulerian video magnification (EVM) heart tracking method in the macaque monkey combined with wavelet transform. This is based on a measure of image-to-image fluctuations in skin reflectance due to changes in blood influx. We show a strong temporal coherence and amplitude match between EVM-based heart tracking and ground truth ECG, from both color (RGB) and infrared (IR) videos, in anesthetized macaques, to a level comparable to what can be achieved in humans. We further show that this method allows us to identify consistent HR changes following the presentation of conspecific emotional voices or faces. EVM is used to extract HR in humans but has never been applied to non-human primates. Video photoplethysmography allows extraction of awake macaques' HR from RGB videos. In contrast, our method extracts awake macaques' HR from both RGB and IR videos and is particularly resilient to the head motion that can be observed in awake behaving monkeys. Overall, we believe that this method can be generalized as a tool to track the HR of the awake behaving monkey, for ethological, behavioural, neuroscience or welfare purposes.
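The core idea behind video-based heart tracking, recovering HR as the dominant periodicity of frame-averaged skin intensity, can be sketched with a plain FFT. The paper's actual pipeline adds Eulerian magnification and wavelet analysis; the frame rate, pulse amplitude, and cardiac frequency band below are illustrative assumptions.

```python
import numpy as np

fs = 30.0                      # camera frame rate in Hz (an assumption)
t = np.arange(0, 20, 1 / fs)   # 20 s of video
f_cardiac = 2.0                # 2 Hz = 120 bpm, plausible for a macaque

# Frame-averaged skin intensity: a tiny cardiac pulsation riding on noise.
trace = 100 + 0.05 * np.sin(2 * np.pi * f_cardiac * t)
trace = trace + 0.02 * np.random.default_rng(2).normal(size=t.size)

def heart_rate_bpm(x, fs, band=(1.5, 4.0)):
    """Heart rate as the dominant frequency of the intensity trace
    within a plausible cardiac band (Hz)."""
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(power[mask])]

bpm = heart_rate_bpm(trace, fs)
```

Restricting the search to a cardiac band is what makes the estimate robust to slow illumination drift and high-frequency sensor noise; a 20 s window gives 0.05 Hz (3 bpm) frequency resolution.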
Jennifer M Groh; John M Pearson; Jeff T Mohl
In: Journal of Neurophysiology, 124 (3), pp. 715–727, 2020.
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, while when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit “same vs. different” source judgments as well as biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.
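Causal inference models of this kind follow a standard Bayesian formulation: compare the likelihood of the visual and auditory measurements under a common-source hypothesis against an independent-sources hypothesis, then combine with a prior on a common cause. A minimal Gaussian sketch (the sensory noise levels and prior below are illustrative, not the fitted values):

```python
import math

def p_common(x_v, x_a, sig_v=2.0, sig_a=6.0, sig_p=15.0, prior_c=0.5):
    """Posterior probability that visual (x_v) and auditory (x_a)
    measurements, in degrees, arose from a single source."""
    vv, va, vp = sig_v ** 2, sig_a ** 2, sig_p ** 2
    # Likelihood under one common source (source position integrated out).
    var1 = vv * va + vv * vp + va * vp
    l1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * vp + x_v ** 2 * va
                          + x_a ** 2 * vv) / var1) \
        / (2 * math.pi * math.sqrt(var1))
    # Likelihood under two independent sources.
    l2 = math.exp(-0.5 * (x_v ** 2 / (vv + vp) + x_a ** 2 / (va + vp))) \
        / (2 * math.pi * math.sqrt((vv + vp) * (va + vp)))
    return l1 * prior_c / (l1 * prior_c + l2 * (1 - prior_c))

# Coincident cues favor one source; widely separated cues favor two,
# predicting one saccade vs. two in the task described above.
same = p_common(0.0, 0.0)
far = p_common(0.0, 30.0)
```

The posterior falls smoothly with cue separation, which is what produces the mixed single/double-saccade responses at intermediate separations.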
Roberto A Gulli; Lyndon R Duong; Benjamin W Corrigan; Guillaume Doucet; Sylvain Williams; Stefano Fusi; Julio C Martinez-Trujillo
In: Nature Neuroscience, 23 (1), pp. 103–112, 2020.
The hippocampus is implicated in associative memory and spatial navigation. To investigate how these functions are mixed in the hippocampus, we recorded from single hippocampal neurons in macaque monkeys navigating a virtual maze during a foraging task and a context–object associative memory task. During both tasks, single neurons encoded information about spatial position; a linear classifier also decoded position. However, the population code for space did not generalize across tasks, particularly where stimuli relevant to the associative memory task appeared. Single-neuron and population-level analyses revealed that cross-task changes were due to selectivity for nonspatial features of the associative memory task when they were visually available (perceptual coding) and following their disappearance (mnemonic coding). Our results show that neurons in the primate hippocampus nonlinearly mix information about space and nonspatial elements of the environment in a task-dependent manner; this efficient code flexibly represents unique perceptual experiences and correspondent memories.
Ziad M Hafed; Laurent Goffart
In: Journal of Neurophysiology, 123 (1), pp. 308–322, 2020.
Rigorous behavioral studies conducted in human subjects have shown that small-eccentricity target displacements are associated with increased saccadic reaction times, but the reasons for this remain unclear. Before characterizing the neurophysiological foundations underlying this relationship between the spatial and temporal aspects of saccades, we tested the triggering of small saccades in the male rhesus macaque monkey. We also compared our results to those obtained in human subjects, both from the existing literature and through our own additional measurements. Using a variety of behavioral tasks exercising visual and nonvisual guidance of small saccades, we found that small saccades consistently require more time than larger saccades to be triggered in the nonhuman primate, even in the absence of any visual guidance and when valid advance information about the saccade landing position is available. We also found a strong asymmetry in the reaction times of small upper versus lower visual field visually guided saccades, a phenomenon that has not been described before for small saccades, even in humans. Following the suggestion that an eye movement is not initiated as long as the visuo-oculomotor system is within a state of balance, in which opposing commands counterbalance each other, we propose that the longer reaction times are a signature of enhanced times needed to create the symmetry-breaking condition that puts downstream premotor neurons into a push-pull regime necessary for rotating the eyeballs. Our results provide an important catalog of nonhuman primate oculomotor capabilities on the miniature scale, allowing concrete predictions on underlying neurophysiological mechanisms. NEW & NOTEWORTHY Leveraging a multitude of neurophysiological investigations in the rhesus macaque monkey, we generated and tested hypotheses about small-saccade latencies in this animal model. 
We found that small saccades always take longer, on average, than larger saccades to trigger, regardless of visual and cognitive context. Moreover, small downward saccades have the longest latencies overall. Our results provide an important documentation of oculomotor capabilities of an indispensable animal model for neuroscientific research in vision, cognition, and action.
Eric Hart; Alexander C Huk
In: eLife, 9 , pp. 1–22, 2020.
During delayed oculomotor response tasks, neurons in the lateral intraparietal area (LIP) and the frontal eye fields (FEF) exhibit persistent activity that reflects the active maintenance of behaviorally relevant information. Despite many computational models of the mechanisms of persistent activity, there is a lack of circuit-level data from the primate to inform the theories. To fill this gap, we simultaneously recorded ensembles of neurons in both LIP and FEF while macaques performed a memory-guided saccade task. A population encoding model revealed strong and symmetric long-timescale recurrent excitation between LIP and FEF. Unexpectedly, LIP exhibited stronger local functional connectivity than FEF, and many neurons in LIP had longer network and intrinsic timescales. The differences in connectivity could be explained by the strength of recurrent dynamics in attractor networks. These findings reveal reciprocal multi-area circuit dynamics in the frontoparietal network during persistent activity and lay the groundwork for quantitative comparisons to theoretical models.
Christopher A Henry; Adam Kohn
In: Nature Communications, 11 , pp. 1–12, 2020.
Crowding is a profound loss of discriminability of visual features that occurs when a target stimulus is surrounded by distractors. Numerous studies of human perception have characterized how crowding depends on the properties of a visual display. Yet, there is limited understanding of how and where stimulus information is lost in the visual system under crowding. Here, we show that macaque monkeys exhibit perceptual crowding for target orientation that is similar to humans. We then record from neuronal populations in monkey primary visual cortex (V1). These populations show an appreciable loss of information about target orientation in the presence of distractors, due to both divisive and additive modulation of responses to targets by distractors. Our results show that spatial contextual effects in V1 limit the discriminability of visual features and can contribute substantively to crowding.
James P Herman; Fabrice Arcizet; Richard J Krauzlis
In: eLife, 9 , pp. 1–26, 2020.
Recent work has implicated the primate basal ganglia in visual perception and attention, in addition to their traditional role in motor control. The basal ganglia, especially the caudate nucleus “head” (CDh) of the striatum, receive indirect anatomical connections from the superior colliculus, a midbrain structure that is known to play a crucial role in the control of visual attention. To test the possible functional relationship between these subcortical structures, we recorded CDh neuronal activity of macaque monkeys before and during unilateral superior colliculus (SC) inactivation in a spatial attention task. SC inactivation significantly altered the attention-related modulation of CDh neurons and strongly impaired the classification of task epochs based on CDh activity. Only inactivation of SC on the same side of the brain as recorded CDh neurons, not the opposite side, had these effects. These results demonstrate a novel interaction between SC activity and attention-related visual processing in the basal ganglia.
Ahmad Jezzini; Camillo Padoa-Schioppa
Neuronal activity in the primate amygdala during economic choice Journal Article
In: Journal of Neuroscience, 40 (6), pp. 1286–1301, 2020.
Multiple lines of evidence link economic choices to the orbitofrontal cortex (OFC), but other brain regions may contribute to the computation and comparison of economic values. A particularly strong candidate is the basolateral amygdala (BLA). Amygdala lesions impair performance in reinforcer devaluation tasks, suggesting that the BLA contributes to value computation. Furthermore, previous studies of the BLA have found neuronal activity consistent with a value representation. Here, we recorded from the BLA of two male rhesus macaques choosing between different juices. Offered quantities varied from trial to trial, and relative values were inferred from choices. Approximately one-third of BLA cells were task-related. Our analyses revealed the presence of three groups of neurons encoding the variables offer value, chosen value, and chosen juice. In this respect, the BLA appeared similar to the OFC. The two areas differed in the proportion of neurons in each group, as the fraction of chosen value cells was significantly higher in the BLA. Importantly, the activity of these neurons reflected the subjective nature of value. Firing rates in the BLA were sustained throughout the trial and maximal after juice delivery. In contrast, firing rates in the OFC were phasic and maximal shortly after offer presentation. Our results suggest that the BLA supports economic choice and reward expectation.
Kohitij Kar; James J DiCarlo
In: Neuron, pp. 1–13, 2020.
Distributed neural population spiking patterns in macaque inferior temporal (IT) cortex that support core object recognition require additional time to develop for specific, “late-solved” images. This suggests the necessity of recurrent processing in these computations. Which brain circuits are responsible for computing and transmitting these putative recurrent signals to IT? To test whether the ventrolateral prefrontal cortex (vlPFC) is a critical recurrent node in this system, here, we pharmacologically inactivated parts of vlPFC and simultaneously measured IT activity while monkeys performed object discrimination tasks. vlPFC inactivation deteriorated the quality of the late-phase (>150 ms from image onset) IT population code and produced commensurate behavioral deficits for late-solved images. Finally, silencing vlPFC caused the monkeys' IT activity and behavior to become more like those produced by feedforward-only ventral stream models. Together with prior work, these results implicate fast recurrent processing through vlPFC as critical to producing behaviorally sufficient object representations in IT.
Sanjeev B Khanna; Jonathan A Scott; Matthew A Smith
In: Journal of Neurophysiology, 124 (6), pp. 1774–1791, 2020.
Active vision is a fundamental process by which primates gather information about the external world. Multiple brain regions have been studied in the context of simple active vision tasks in which a visual target's appearance is temporally separated from saccade execution. Most neurons have tight spatial registration between visual and saccadic signals, and in areas such as prefrontal cortex (PFC), some neurons show persistent delay activity that links visual and motor epochs and has been proposed as a basis for spatial working memory. Many PFC neurons also show rich dynamics, which have been attributed to alternative working memory codes and the representation of other task variables. Our study investigated the transition between processing a visual stimulus and generating an eye movement in populations of PFC neurons in macaque monkeys performing a memory guided saccade task. We found that neurons in two subregions of PFC, the frontal eye fields (FEF) and area 8Ar, differed in their dynamics and spatial response profiles. These dynamics could be attributed largely to shifts in the spatial profile of visual and motor responses in individual neurons. This led to visual and motor codes for particular spatial locations that were instantiated by different mixtures of neurons, which could be important in PFC's flexible role in multiple sensory, cognitive, and motor tasks. NEW & NOTEWORTHY A central question in neuroscience is how the brain transitions from sensory representations to motor outputs. The prefrontal cortex contains neurons that have long been implicated as important in this transition and in working memory. We found evidence for rich and diverse tuning in these neurons, which was often spatially misaligned between visual and saccadic responses. This feature may play an important role in flexible working memory capabilities.
Ricardo Kienitz; Michele A Cox; Kacie Dougherty; Richard C Saunders; Joscha T Schmiedt; David A Leopold; Alexander Maier; Michael C Schmid
In: Current Biology, pp. 1–12, 2020.
Theta (3–9 Hz) and gamma (30–100 Hz) oscillations have been observed at different levels along the hierarchy of cortical areas and across a wide set of cognitive tasks. In the visual system, the emergence of both rhythms in primary visual cortex (V1) and mid-level cortical area V4 has been linked with variations in perceptual reaction times.1–5 Based on analytical methods to infer causality in neural activation patterns, it was concluded that gamma and theta oscillations might both reflect feedforward sensory processing from V1 to V4.6–10 Here, we report on experiments in macaque monkeys in which we experimentally assessed the presence of both oscillations in the neural activity recorded from multi-electrode arrays in V1 and V4 before and after a permanent V1 lesion. With intact cortex, theta and gamma oscillations could be reliably elicited in V1 and V4 when monkeys viewed a visual contour illusion and showed phase-to-amplitude coupling. Laminar analysis in V1 revealed that both theta and gamma oscillations occurred primarily in the supragranular layers, the cortical output compartment of V1. However, there was a clear dissociation between the two rhythms in V4 that became apparent when the major feedforward input to V4 was removed by lesioning V1: although V1 lesioning eliminated V4 theta, it had little effect on V4 gamma power except for delaying its emergence by >100 ms. These findings suggest that theta is more tightly associated with feedforward processing than gamma and pose limits on the proposed role of gamma as a feedforward mechanism.
Daniel L Kimmel; Gamaleldin F Elsayed; John P Cunningham; William T Newsome
In: Nature Communications, 11 , pp. 1–19, 2020.
Value-based decision-making requires different variables—including offer value, choice, expected outcome, and recent history—at different times in the decision process. Orbitofrontal cortex (OFC) is implicated in value-based decision-making, but it is unclear how downstream circuits read out complex OFC responses into separate representations of the relevant variables to support distinct functions at specific times. We recorded from single OFC neurons while macaque monkeys made cost-benefit decisions. Using a novel analysis, we find separable neural dimensions that selectively represent the value, choice, and expected reward of the present and previous offers. The representations are generally stable during periods of behavioral relevance, then transition abruptly at key task events and between trials. Applying new statistical methods, we show that the sensitivity, specificity and stability of the representations are greater than expected from the population's low-level features—dimensionality and temporal smoothness—alone. The separability and stability suggest a mechanism—linear summation over static synaptic weights—by which downstream circuits can select for specific variables at specific times.
Kenji W Koyano; Adam P Jones; David B T McMahon; Elena N Waidmann; Brian E Russ; David A Leopold
In: Current Biology, 31 , pp. 1–18, 2020.
The visual perception of identity in humans and other primates is thought to draw upon cortical areas specialized for the analysis of facial structure. A prominent theory of face recognition holds that the brain computes and stores average facial structure, which it then uses to efficiently determine individual identity, though the neural mechanisms underlying this process are controversial. Here, we demonstrate that the dynamic suppression of average facial structure plays a prominent role in the responses of neurons in three fMRI-defined face patches of the macaque. Using photorealistic face stimuli that systematically varied in identity level according to a psychophysically based face space, we found that single units in the AF, AM, and ML face patches exhibited robust tuning around average facial structure. This tuning emerged after the initial excitatory response to the face and was expressed as the selective suppression of sustained responses to low-identity faces. The coincidence of this suppression with increased spike timing synchrony across the population suggests a mechanism of active inhibition underlying this effect. Control experiments confirmed that the diminished responses to low-identity faces were not due to short-term adaptation processes. We propose that the brain's neural suppression of average facial structure facilitates recognition by promoting the extraction of distinctive facial characteristics and suppressing redundant or irrelevant responses across the population.
Aravind Krishna; Seiji Tanabe; Adam Kohn
In: Cerebral Cortex, pp. 1–15, 2020.
The neural basis of perceptual decision making has typically been studied using measurements of single neuron activity, though decisions are likely based on the activity of large neuronal ensembles. Local field potentials (LFPs) may, in some cases, serve as a useful proxy for population activity and thus be useful for understanding the neural basis of perceptual decision making. However, little is known about whether LFPs in sensory areas include decision-related signals. We therefore analyzed LFPs recorded using two 48-electrode arrays implanted in primary visual cortex (V1) and area V4 of macaque monkeys trained to perform a fine orientation discrimination task. We found significant choice information in low (0–30 Hz) and higher (70–500 Hz) frequency components of the LFP, but little information in gamma frequencies (30–70 Hz). Choice information was more robust in V4 than V1 and stronger in LFPs than in simultaneously measured spiking activity. LFP-based choice information included a global component, common across electrodes within an area. Our findings reveal the presence of robust choice-related signals in the LFPs recorded in V1 and V4 and suggest that LFPs may be a useful complement to spike-based analyses of decision making.
Jan Kubanek; Julian Brown; Patrick Ye; Kim Butts Pauly; Tirin Moore; William Newsome
In: Science Advances, 6 , pp. 1–10, 2020.
The ability to modulate neural activity in specific brain circuits remotely and systematically could revolutionize studies of brain function and treatments of brain disorders. Sound waves of high frequencies (ultrasound) have shown promise in this respect, combining the ability to modulate neuronal activity with sharp spatial focus. Here, we show that the approach can have potent effects on choice behavior. Brief, low-intensity ultrasound pulses delivered noninvasively into specific brain regions of macaque monkeys influenced their decisions regarding which target to choose. The effects were substantial, leading to around a 2:1 bias in choices compared to the default balanced proportion. The presence and polarity of the effect were controlled by the specific target region. These results represent a critical step towards the ability to influence choice behavior noninvasively, enabling systematic investigations and treatments of brain circuits underlying disorders of choice.
Marcin Leszczyński; Annamaria Barczak; Yoshinao Kajikawa; Istvan Ulbert; Arnaud Y Falchier; Idan Tal; Saskia Haegens; Lucia Melloni; Robert T Knight; Charles E Schroeder
In: Science Advances, 6 , pp. 1–13, 2020.
Broadband High-frequency Activity (BHA; 70-150 Hz), also known as “high gamma,” a key analytic signal in human intracranial recordings, is often assumed to reflect local neural firing (multiunit activity; MUA). Accordingly, BHA has been used to study neuronal population responses in auditory (1,2), visual (3,4), language (5), mnemonic processes (6-9) and cognitive control (10,11). BHA is arguably the electrophysiological measure best correlated with the Blood Oxygenation Level Dependent (BOLD) signal in fMRI (12-13). However, beyond the fact that BHA correlates with neuronal spiking (12, 14-16), the neuronal populations and physiological processes generating BHA are not precisely defined. Although critical for interpreting intracranial signals in human and non-human primates, the precise physiology of BHA remains unknown. Here, we show that BHA dissociates from MUA in primary visual and auditory cortex. Using laminar multielectrode data in monkeys, we found a bimodal distribution of stimulus-evoked BHA across the depth of a cortical column: an early-deep, followed by a later-superficial layer response. Only the early-deep layer BHA had a clear local MUA correlate, while the more prominent superficial layer BHA had a weak or undetectable MUA correlate. In many cases, particularly in V1 (70%), supragranular sites showed strong BHA in the absence of any detectable increase in MUA. Due to volume conduction, BHA from both the early-deep and the later-supragranular generators contribute to the field potential at the pial surface, though the contribution may be weighted towards the late-supragranular BHA. Our results demonstrate that the strongest generators of BHA are in the superficial cortical layers and show that the origins of BHA include a mixture of neuronal action potential firing and dendritic processes separable from this firing. It is likely that the typically recorded BHA signal emphasizes the latter processes to a greater extent than previously recognized.
Baowang Li; Brandy N Routh; Daniel Johnston; Eyal Seidemann; Nicholas J Priebe
In: Neuron, 107 (1), pp. 185–196.e4, 2020.
Li et al. used whole-cell recording to reveal a large and unexpected voltage-gated intrinsic conductance that dramatically alters the integrative properties of primate V1 neurons. Therefore, a standard computational model of sensory neurons that incorporates linear integration of synaptic inputs followed by a threshold nonlinearity requires revision.
Zhongqiao Lin; Chechang Nie; Yuanfeng Zhang; Yang Chen; Tianming Yang
In: Proceedings of the National Academy of Sciences, 117 (48), pp. 30728–30737, 2020.
A key step of decision making is to determine the value associated with each option. The evaluation process often depends on the accumulation of evidence from multiple sources, which may arrive at different times. How evidence is accumulated for value computation in the brain during decision making has not been well studied. To address this problem, we trained rhesus monkeys to perform a decision-making task in which they had to make eye movement choices between two targets, whose reward probabilities had to be determined with the combined evidence from four sequentially presented visual stimuli. We studied the encoding of the reward probabilities associated with the stimuli and the eye movements in the orbitofrontal (OFC) and the dorsolateral prefrontal (DLPFC) cortices during the decision process. We found that the OFC neurons encoded the reward probability associated with individual pieces of evidence in the stimulus domain. Importantly, the representation of the reward probability in the OFC was transient, and the OFC did not encode the reward probability associated with the combined evidence from multiple stimuli. The computation of the combined reward probabilities was observed only in the DLPFC and only in the action domain. Furthermore, the reward probability encoding in the DLPFC exhibited an asymmetric pattern of mixed selectivity that supported the computation of the stimulus-to-action transition of reward information. Our results reveal that the OFC and the DLPFC play distinct roles in the value computation during evidence accumulation.
Ye Liu; Ming Li; Xian Zhang; Yiliang Lu; Hongliang Gong; Jiapeng Yin; Zheyuan Chen; Liling Qian; Yupeng Yang; Ian Max Andolina; Stewart Shipp; Niall Mcloughlin; Shiming Tang; Wei Wang
In: Neuron, 108 (3), pp. 538–550.e5, 2020.
How does our visual brain generate perceptual color space? Liu et al. find that within a uniform blob-like architecture of hue responses, chromotopic maps develop progressively in scale and precision along the visual hierarchy of macaque V1, V2, and V4. Such hierarchical refinement improves spectral uniformity, better reflecting color perception.
Adi Lixenberg; Merav Yarkoni; Yehudit Botschko; Mati Joshua
In: Journal of Neurophysiology, 123 (2), pp. 786–799, 2020.
The cerebellum exhibits both motor and reward-related signals. However, it remains unclear whether reward is processed independently from the motor command or might reflect the motor consequences of the reward drive. To test how reward-related signals interact with sensorimotor processing in the cerebellum, we recorded Purkinje cell simple spike activity in the cerebellar floccular complex while monkeys were engaged in smooth pursuit eye movement tasks. The color of the target signaled the size of the reward the monkeys would receive at the end of the target motion. When the tracking task presented a single target, both pursuit and neural activity were only slightly modulated by the reward size. The reward modulations in single cells were rarely large enough to be detected. These modulations were only significant in the population analysis when we averaged across many neurons. In two-target tasks where the monkey learned to select based on the size of the reward outcome, both behavior and neural activity adapted rapidly. In both the single- and two-target tasks, the size of the reward-related modulation matched the size of the effect of reward on behavior. Thus, unlike cortical activity in eye movement structures, the reward-related signals could not be dissociated from the motor command. These results suggest that reward information is integrated with the eye movement command upstream of the Purkinje cells in the floccular complex. Thus reward-related modulations of the simple spikes are akin to modulations found in motor behavior and not to the central processing of the reward value. NEW & NOTEWORTHY Disentangling sensorimotor and reward signals is only possible if these signals do not completely overlap. We recorded activity in the floccular complex of the cerebellum while monkeys performed tasks designed to separate representations of reward from those of movement. 
Activity modulation by reward could be accounted for by the coding of eye movement parameters, suggesting that reward information is already integrated into motor commands upstream of the floccular complex.
Kaleb A Lowe; Wolf Zinke; Anthony M Phipps; Josh Cosman; Micala Maddox; Jeffrey D Schall; Charles F Caskey
In: Ultrasound in Medicine & Biology, pp. 1–14, 2020.
Neuromodulation with focused ultrasound (FUS) is being widely explored as a non-invasive tool to stimulate focal brain regions because of its superior spatial resolution and coverage compared with other neuromodulation methods. The precise effects of FUS stimulation on specific regions of the brain are not yet fully understood. Here, we characterized the behavioral effects of FUS stimulation directly applied through a craniotomy over the macaque frontal eye field (FEF). In macaque monkeys making directed eye movements to perform visual search tasks with direct or arbitrary responses, focused ultrasound was applied through a craniotomy over the FEF. Saccade response times (RTs) and error rates were determined for trials without or with FUS stimulation with pulses at a peak negative pressure of either 250 or 425 kPa. Both RTs and error rates were affected by FUS. Responses toward a target located contralateral to the FUS stimulation were approximately 3 ms slower in the presence of FUS in both monkeys studied, while only one exhibited a slowing of responses for ipsilateral targets. Error rates were lower in one monkey in this study. In another search task requiring making eye movements toward a target (pro-saccades) or in the opposite direction (anti-saccades), the RT for pro-saccades increased in the presence of FUS stimulation. Our results indicate the effectiveness of FUS to modulate saccadic responses when stimulating FEF in awake, behaving non-human primates.
Liya Ma; Janahan Selvanayagam; Maryam Ghahremani; Lauren K Hayrynen; Kevin D Johnston; Stefan Everling
In: Journal of Neurophysiology, 123 (3), pp. 896–911, 2020.
Abnormal saccadic eye movements can serve as biomarkers for patients with several neuropsychiatric disorders. The common marmoset (Callithrix jacchus) is becoming increasingly popular as a nonhuman primate model to investigate the cortical mechanisms of saccadic control. Recently, our group demonstrated that microstimulation in the posterior parietal cortex (PPC) of marmosets elicits contralateral saccades. Here we recorded single-unit activity in the PPC of the same two marmosets using chronic microelectrode arrays while the monkeys performed a saccadic task with gap trials (target onset lagged fixation point offset by 200 ms) interleaved with step trials (fixation point disappeared when the peripheral target appeared). Both marmosets showed a gap effect: shorter saccadic reaction times (SRTs) in gap vs. step trials. On average, stronger gap-period responses across the entire neuronal population preceded shorter SRTs on trials with contralateral targets, although this correlation was stronger among the 15% “gap neurons,” which responded significantly during the gap. We also found 39% “target neurons” with significant saccadic target-related responses, which were stronger in gap trials and correlated with the SRTs better than the remaining neurons. Compared with saccades with relatively long SRTs, short-SRT saccades were preceded by both stronger gap-related and target-related responses in all PPC neurons, regardless of whether such response reached significance. Our findings suggest that the PPC in the marmoset contains an area that is involved in the modulation of saccadic preparation. NEW & NOTEWORTHY As a primate model in systems neuroscience, the marmoset is a great complement to the macaque monkey because of its unique advantages. To identify oculomotor networks in the marmoset, we recorded from the marmoset posterior parietal cortex during a saccadic task and found single-unit activities consistent with a role in saccadic modulation.
This finding supports the marmoset as a valuable model for studying oculomotor control.
Tatiana Malevich; Antimo Buonocore; Ziad M Hafed
Rapid stimulus-driven modulation of slow ocular position drifts Journal Article
In: eLife, 9 , pp. 1–22, 2020.
The eyes are never still during maintained gaze fixation. When microsaccades are not occurring, ocular position exhibits continuous slow changes, often referred to as drifts. Unlike microsaccades, drifts are still viewed as largely random eye movements. Here we found that ocular position drifts can, instead, be very systematically stimulus-driven, and with very short latencies. We used highly precise eye tracking in three well-trained macaque monkeys and found that even fleeting (~8 ms duration) stimulus presentations can robustly trigger transient and stimulus-specific modulations of ocular position drifts, and with only approximately 60 ms latency. Such drift responses are binocular, and they are most effectively elicited with large stimuli of low spatial frequency. Intriguingly, the drift responses exhibit some image pattern selectivity, and they are not explained by convergence responses, pupil constrictions, head movements, or starting eye positions. Ocular position drifts have very rapid access to exogenous visual information.
Vahid Mehrpour; Julio C Martinez-Trujillo; Stefan Treue
In: Nature Communications, 11 , pp. 1–8, 2020.
Attention enhances the neural representations of behaviorally relevant stimuli, typically by a push–pull increase of the neuronal response gain to attended vs. unattended stimuli. This selectively improves perception and consequently behavioral performance. However, to enhance the detectability of stimulus changes, attention might also distort neural representations, compromising accurate stimulus representation. We test this hypothesis by recording neural responses in the visual cortex of rhesus monkeys during a motion direction change detection task. We find that attention indeed amplifies the neural representation of direction changes, beyond a similar effect of adaptation. We further show that humans overestimate such direction changes, providing a perceptual correlate of our neurophysiological observations. Our results demonstrate that attention distorts the neural representations of abrupt sensory changes and consequently perceptual accuracy. This likely represents an evolutionary adaptive mechanism that allows sensory systems to flexibly forgo accurate representation of stimulus features to improve the encoding of stimulus change.
Atsushi Noritake; Taihei Ninomiya; Masaki Isoda
In: Proceedings of the National Academy of Sciences, 117 (10), pp. 5516–5524, 2020.
The lateral hypothalamus (LH) has long been implicated in maintaining behavioral homeostasis essential for the survival of an individual. However, recent evidence suggests its more widespread roles in behavioral coordination, extending to the social domain. The neuronal and circuit mechanisms behind the LH processing of social information are unknown. Here, we show that the LH represents distinct reward variables for “self” and “other” and is causally involved in shaping socially motivated behavior. During a Pavlovian conditioning procedure incorporating ubiquitous social experiences where rewards to others affect one's motivation, LH cells encoded the subjective value of self-rewards, as well as the likelihood of self- or other-rewards. The other-reward coding was not a general consequence of other's existence, but a specific effect of other's reward availability. Coherent activity with and top-down information flow from the medial prefrontal cortex, a hub of social brain networks, contributed to signal encoding in the LH. Furthermore, deactivation of LH cells eliminated the motivational impact of other-rewards. These results indicate that the LH constitutes a subcortical node in social brain networks and shapes one's motivation by integrating cortically derived, agent-specific reward information.
Wei Song Ong; Seth Madlon-Kay; Michael L Platt
Neuronal correlates of strategic cooperation in monkeys Journal Article
In: Nature Neuroscience, 24 , pp. 1–22, 2020.
We recorded neural activity in male monkeys playing a variant of the game ‘chicken' in which they made decisions to cooperate or not cooperate to obtain rewards of different sizes. Neurons in the middle superior temporal sulcus (mSTS)—previously implicated in social perception—signaled strategic information, including payoffs, intentions of the other player, reward outcomes and predictions about the other player. Moreover, a subpopulation of mSTS neurons selectively signaled cooperatively obtained rewards. Neurons in the anterior cingulate gyrus, previously implicated in vicarious reinforcement and empathy, carried less information about strategic variables, especially cooperative reward. Strategic signals were not reducible to perceptual information about the other player or motor contingencies. These findings suggest that the capacity to compute models of other agents has deep roots in the strategic social behavior of primates and that the anterior cingulate gyrus and the mSTS support these computations.
Tyler R Peel; Suryadeep Dash; Stephen G Lomber; Brian D Corneil
In: Journal of Computational Neuroscience, pp. 1–21, 2020.
Saccades require a spatiotemporal transformation of activity between the intermediate layers of the superior colliculus (iSC) and the downstream brainstem burst generator. The dynamic linear ensemble-coding model (Goossens and Van Opstal, 2006) proposes that each iSC spike contributes a fixed mini-vector to saccade displacement. Although biologically plausible, this model assumes cortical areas like the frontal eye fields (FEF) simply provide the saccadic goal to be executed by the iSC and brainstem burst generator. However, the FEF and iSC operate in unison during saccades, and a pathway from the FEF to the brainstem burst generator that bypasses the iSC exists. Here, we investigate the impact of large yet reversible inactivation of the FEF on iSC activity in the context of the model across four saccade tasks. We exploit the overlap of saccade vectors generated when the FEF is inactivated or not, comparing the number of iSC spikes for metrically-matched saccades. We found that the iSC emits fewer spikes for metrically-matched saccades during FEF inactivation. The decrease in spike count is task-dependent, with a greater decrease accompanying more cognitively-demanding saccades. Our results show that FEF integrity influences the readout of iSC activity in a task-dependent manner. We propose that the dynamic linear ensemble-coding model be modified so that FEF inactivation increases the gain of a readout parameter, effectively increasing the influence of a single iSC spike. We speculate that this modification could be instantiated by a direct pathway from the FEF to the omnipause region that modulates the excitability of the brainstem burst generator. Significance statement: One of the enduring puzzles in the oculomotor system is how it achieves the spatiotemporal transformation, converting spatial activity within the intermediate layers of the superior colliculus (iSC) into a rate code within the brainstem burst generator.
The spatiotemporal transformation has traditionally been viewed as the purview of the oculomotor brainstem. Here, within the context of testing a biologically-plausible model of the spatiotemporal transformation, we show that reversible inactivation of the frontal eye fields (FEF) decreases the number of spikes issued by the iSC for metrically-matched saccades, with greater decreases accompanying more cognitively-demanding tasks. These results show that signals from the FEF influence the spatiotemporal transformation.
Sorin A Pojoga; Natasha Kharas; Valentin Dragoi
In: Nature Communications, 11 , pp. 1–12, 2020.
Our daily behavior is dynamically influenced by conscious and unconscious processes. Although the neural bases of conscious experience have been extensively investigated over the past several decades, how unconscious information impacts neural circuitry and behavior remains unknown. Here, we recorded populations of neurons in macaque primary visual cortex (V1) to find that perceptually unidentifiable stimuli repeatedly presented in the absence of awareness are encoded by neural populations in a way that facilitates their future processing in the context of a behavioral task. Such exposure increases stimulus sensitivity and information encoded in cell populations, even though animals are unaware of stimulus identity. This phenomenon is consistent with a Hebbian mechanism underlying an increase in functional connectivity specifically for the neurons activated by subthreshold stimuli. This form of unsupervised adaptation may constitute a vestigial pre-attention system using the mere frequency of stimulus occurrence to change stimulus representations even when sensory inputs are perceptually invisible.
Joern K Pomper; Silvia Spadacenta; Friedemann Bunjes; Daniel Arnstein; Martin A Giese; Peter Thier
In: Journal of Neurophysiology, 124 (3), pp. 941–961, 2020.
In the search for the function of mirror neurons, a previous study reported that F5 mirror neuron responses are modulated by the value that the observing monkey associates with the grasped object. Yet we do not know whether mirror neurons are modulated by the expected reward value for the observer or also by other variables, which are causally dependent on value (e.g., motivation, attention directed at the observed action, arousal). To clarify this, we trained two rhesus macaques to observe a grasping action on an object kept constant, followed by four fully predictable outcomes of different values (2 outcomes with positive and 2 with negative emotional valence). We found a consistent order in population activity of both mirror and nonmirror neurons that matches the order of the value of this predicted outcome but that does not match the order of the above-mentioned value-dependent variables. These variables were inferred from the probability not to abort a trial, saccade latency, modulation of eye position during action observation, heart rate, and pupil size. Moreover, we found subpopulations of neurons tuned to each of the four predicted outcome values. Multidimensional scaling revealed equal normalized distances of 0.25 between the two positive and between the two negative outcomes suggesting the representation of a relative value, scaled to the task setting. We conclude that F5 mirror neurons and nonmirror neurons represent the observer's predicted outcome value, which in the case of mirror neurons may be transferred to the observed object or action. NEW & NOTEWORTHY: Both the populations of F5 mirror neurons and nonmirror neurons represent the predicted value of an outcome resulting from the observation of a grasping action. Value-dependent motivation, arousal, and attention directed at the observed action do not provide a better explanation for this representation.
The population activity's metric suggests an optimal scaling of value representation to task setting.
Pierre Pouget; Stephen Frey; Harry Ahnine; David Attali; Julien Claron; Charlotte Constans; Jean Francois Aubry; Fabrice Arcizet
In: Frontiers in Physiology, 11 , pp. 1–13, 2020.
Since the late 2010s, Transcranial Ultrasound Stimulation (TUS) has been used experimentally to carry out safe, non-invasive stimulation of the brain with better spatial resolution than Transcranial Magnetic Stimulation (TMS). This innovative stimulation method has emerged as a novel and valuable device for studying brain function in humans and animals. In particular, single pulses of TUS directed to oculomotor regions have been shown to modulate visuomotor behavior of non-human primates during 100 ms ultrasound pulses. In the present study, a sustained effect was induced by applying 20-s trains of neuronavigated repetitive Transcranial Ultrasound Stimulation (rTUS) to oculomotor regions of the frontal cortex in three non-human primates performing an antisaccade task. With the help of MRI and a frame-less stereotactic neuronavigation system (SNS), we were able to demonstrate that neuronavigated TUS (outside of the MRI scanner) is an efficient tool to carry out neuromodulation procedures in non-human primates. We found that, following neuronavigated rTUS, saccades were significantly modified, resulting in shorter latencies compared to no-rTUS trials. This behavioral modulation was maintained for up to 20 min. Oculomotor behavior returned to baseline after 18–31 min and could not be significantly distinguished from the no-rTUS condition. This study is the first to show that neuronavigated rTUS can have a persistent effect on monkey behavior with a quantified return-time to baseline. The specificity of the effects could not be explained by auditory confounds.
Paul Henri Prévot; Kevin Gehere; Fabrice Arcizet; Himanshu Akolkar; Mina A Khoei; Kévin Blaize; Omar Oubari; Pierre Daye; Marion Lanoë; Manon Valet; Sami Dalouz; Paul Langlois; Elric Esposito; Valérie Forster; Elisabeth Dubus; Nicolas Wattiez; Elena Brazhnikova; Céline Nouvel-Jaillard; Yannick LeMer; Joanna Demilly; Claire Maëlle Fovet; Philippe Hantraye; Morgane Weissenburger; Henri Lorach; Elodie Bouillet; Martin Deterre; Ralf Hornig; Guillaume Buc; José Alain Sahel; Guillaume Chenegros; Pierre Pouget; Ryad Benosman; Serge Picaud
In: Nature Biomedical Engineering, 4 (2), pp. 172–180, 2020.
Retinal dystrophies and age-related macular degeneration related to photoreceptor degeneration can cause blindness. In blind patients, although the electrical activation of the residual retinal circuit can provide useful artificial visual perception, the resolutions of current retinal prostheses have been limited either by large electrodes or small numbers of pixels. Here we report the evaluation, in three awake non-human primates, of a previously reported near-infrared-light-sensitive photovoltaic subretinal prosthesis. We show that multipixel stimulation of the prosthesis within radiation safety limits enabled eye tracking in the animals, that they responded to stimulations directed at the implant with repeated saccades and that the implant-induced responses were present two years after device implantation. Our findings pave the way for the clinical evaluation of the prosthesis in patients affected by dry atrophic age-related macular degeneration.
Rishi Rajalingham; Kohitij Kar; Sachi Sanghavi; Stanislas Dehaene; James J DiCarlo
In: Nature Communications, 11 (1), pp. 1–13, 2020.
The ability to recognize written letter strings is foundational to human reading, but the underlying neuronal mechanisms remain largely unknown. Recent behavioral research in baboons suggests that non-human primates may provide an opportunity to investigate this question. We recorded the activity of hundreds of neurons in V4 and the inferior temporal cortex (IT) while naïve macaque monkeys passively viewed images of letters, English words and non-word strings, and tested the capacity of those neuronal representations to support a battery of orthographic processing tasks. We found that simple linear read-outs of IT (but not V4) population responses achieved high performance on all tested tasks, even matching the performance and error patterns of baboons on word classification. These results show that the IT cortex of untrained primates can serve as a precursor of orthographic processing, suggesting that the acquisition of reading in humans relies on the recycling of a brain network evolved for other visual functions.
Sina Salehi; Mohammad Reza A Dehaqani; Behrad Noudoost; Hossein Esteky
In: Journal of Neurophysiology, 124 (4), pp. 1216–1228, 2020.
Face-selective neurons in the inferior temporal (IT) cortex respond to faces by either increasing (ENH) or decreasing (SUP) their spiking activities compared with their baseline. Although nearly half of IT face neurons are selectively suppressed by face stimulation, their role in face representation is not clear. To address this issue, we recorded the spiking activities and local field potential (LFP) from IT cortex of three monkeys while they viewed a large set of visual stimuli. LFP high-gamma (HG-LFP) power indicated the presence of both ENH and SUP face-selective neural clusters in IT cortex. The magnitude of HG-LFP power of the recording sites was correlated with the magnitude of change in the evoked spiking activities of its constituent neurons for both ENH and SUP face clusters. Spatial distribution of the ENH and SUP face clusters suggests the presence of a complex and heterogeneous face hypercluster organization in IT cortex. Importantly, ENH neurons conveyed more face category and SUP neurons conveyed more face identity information at both the single-unit and neuronal population levels. Onset and peak of suppressive single-unit, neuronal population, and HG-LFP power activities lagged those of the ENH ones. These results demonstrate that IT neuronal code for face representation is optimized by increasing sparseness through selective suppression of a subset of face neurons. We suggest that IT cortex contains spatial clusters of both ENH and SUP face neurons with distinct specialized functional role in face representation. NEW & NOTEWORTHY: Electrophysiological and imaging studies have suggested that face information is encoded by a network of clusters of enhancive face-selective neurons in the visual cortex of man and monkey. We show that nearly half of face-selective neurons are suppressed by face stimulation. The suppressive neurons form spatial clusters and convey more face identity information than the enhancive face neurons.
Our results suggest the presence of two neuronal subsystems for coarse and fine face information processing.
David J Schaeffer; Janahan Selvanayagam; Kevin D Johnston; Ravi S Menon; Winrich A Freiwald; Stefan Everling
Face selective patches in marmoset frontal cortex
In: Nature Communications, 11 , pp. 1–8, 2020.
In humans and macaque monkeys, socially relevant face processing is accomplished via a distributed functional network that includes specialized patches in frontal cortex. It is unclear whether a similar network exists in New World primates, who diverged ~35 million years ago from Old World primates. The common marmoset is a New World primate species ideally placed to address this question given their complex social repertoire. Here, we demonstrate the existence of a putative high-level face processing network in marmosets. Like Old World primates, marmosets show differential activation in anterior cingulate and lateral prefrontal cortices while they view socially relevant videos of marmoset faces. We corroborate the locations of these frontal regions by demonstrating functional and structural connectivity between these regions and temporal lobe face patches. Given the evolutionary separation between macaques and marmosets, our results suggest this frontal network specialized for social face processing predates the separation between Platyrrhini and Catarrhini.
Philipp Schwedhelm; Daniel Baldauf; Stefan Treue
In: Scientific Reports, 10 , pp. 1–12, 2020.
The lateral prefrontal cortex of primates (lPFC) plays a central role in complex cognitive behavior, in decision-making as well as in guiding top-down attention. However, how and where in lPFC such behaviorally relevant signals are computed is poorly understood. We analyzed neural recordings from chronic microelectrode arrays implanted in lPFC region 8Av/45 of two rhesus macaques. The animals performed a feature match-to-sample task requiring them to match both motion and color information in a test stimulus. This task allowed us to separate the encoding of stimulus motion and color from their current behavioral relevance on a trial-by-trial basis. We found that upcoming motor behavior can be robustly predicted from lPFC activity. In addition, we show that 8Av/45 encodes the color of a visual stimulus, regardless of its behavioral relevance. Most notably, whether a color matches the searched-for color can be decoded independent of a trial's motor outcome and while subjects detect unique feature conjunctions of color and motion. Thus, macaque area 8Av/45 computes, among other task-relevant information, the behavioral relevance of visual color features. Such a signal is most critical for both the selection of responses as well as the deployment of top-down modulatory signals, like feature-based attention.
H N Schwerdt; K Amemori; D J Gibson; L L Stanwicks; T Yoshida; N P Bichot; S Amemori; R Desimone; R Langer; M J Cima; A M Graybiel
In: Science Advances, 6 , pp. 1–17, 2020.
Parkinson's disease is characterized by decreased dopamine and increased beta-band oscillatory activity accompanying debilitating motor and mood impairments. Coordinate dopamine-beta opposition is considered a normative rule for basal ganglia function. We report a breakdown of this rule. We developed multimodal systems allowing the first simultaneous, chronic recordings of dopamine release and beta-band activity in the striatum of nonhuman primates during behavioral performance. Dopamine and beta signals were anticorrelated over seconds-long time frames, in agreement with the posited rule, but at finer time scales, we identified conditions in which these signals were modulated with the same polarity. These measurements demonstrated that task-elicited beta suppressions preceded dopamine peaks and that relative dopamine-beta timing and polarity depended on reward value, performance history, movement, and striatal domain. These findings establish a new view of coordinate dopamine and beta signaling operations, critical to guide novel strategies for diagnosing and treating Parkinson's disease and related neurodegenerative disorders.
Caspar M Schwiedrzik; Sandrin S Sudmann
In: Journal of Neuroscience, 40 (23), pp. 4565–4575, 2020.
Pupil diameter determines how much light hits the retina, and thus, how much information is available for visual processing. This is regulated by a brainstem reflex pathway. Here, we investigate whether this pathway is under control of internal models about the environment. This would allow adjusting pupil dynamics to environmental statistics to augment information transmission. We present image sequences containing internal temporal structure to humans of either sex and male macaque monkeys. We then measure whether the pupil tracks this temporal structure not only at the rate of luminance variations, but also at the rate of statistics not available from luminance information alone. We find entrainment to environmental statistics in both species. This entrainment directly affects visual processing by increasing sensitivity at the environmentally relevant temporal frequency. Thus, pupil dynamics are matched to the temporal structure of the environment to optimize perception, in line with an active sensing account.
Maria C Romero; Marco Davare; Marcelo Armendariz; Peter Janssen
In: Nature Communications, 10 , pp. 2642, 2019.
Transcranial magnetic stimulation (TMS) can non-invasively modulate neural activity in humans. Despite three decades of research, the spatial extent of the cortical area activated by TMS is still controversial. Moreover, how TMS interacts with task-related activity during motor behavior is unknown. Here, we applied single-pulse TMS over macaque parietal cortex while recording single-unit activity at various distances from the center of stimulation during grasping. The spatial extent of TMS-induced activation is remarkably restricted, affecting the spiking activity of single neurons in an area of cortex measuring less than 2 mm in diameter. In task-related neurons, TMS evokes a transient excitation followed by reduced activity, paralleled by a significantly longer grasping time. Furthermore, TMS-induced activity and task-related activity do not summate in single neurons. These results furnish crucial experimental evidence for the neural effects of TMS at the single-cell level and uncover the neural underpinnings of behavioral effects of TMS.
Ariana R Andrei; Sorin A Pojoga; Roger Janz; Valentin Dragoi
Integration of cortical population signals for visual perception
In: Nature Communications, 10 , pp. 3832, 2019.
Visual stimuli evoke heterogeneous responses across nearby neural populations. These signals must be locally integrated to contribute to perception, but the principles underlying this process are unknown. Here, we exploit the systematic organization of orientation preference in macaque primary visual cortex (V1) and perform causal manipulations to examine the limits of signal integration. Optogenetic stimulation and visual stimuli are used to simultaneously drive two neural populations with overlapping receptive fields. We report that optogenetic stimulation raises firing rates uniformly across conditions, but improves the detection of visual stimuli only when activating cells that are preferentially-tuned to the visual stimulus. Further, we show that changes in correlated variability are exclusively present when the optogenetically and visually-activated populations are functionally-proximal, suggesting that correlation changes represent a hallmark of signal integration. Our results demonstrate that information from functionally-proximal neurons is pooled for perception, but functionally-distal signals remain independent. Primary visual cortical neurons exhibit diverse responses to visual stimuli yet how these signals are integrated during visual perception is not well understood. Here, the authors show that optogenetic stimulation of neurons situated near the visually‐driven population leads to improved orientation detection in monkeys through changes in correlated variability.
Seth W Egger; Evan D Remington; Chia-Jung Chang; Mehrdad Jazayeri
In: Nature Neuroscience, 22 , pp. 1871–1882, 2019.
Sensorimotor control during overt movements is characterized in terms of three building blocks: a controller, a simulator and a state estimator. We asked whether the same framework could explain the control of internal states in the absence of movements. Recently, it was shown that the brain controls the timing of future movements by adjusting an internal speed command. We trained monkeys in a novel task in which the speed command had to be dynamically controlled based on the timing of a sequence of flashes. Recordings from the frontal cortex provided evidence that the brain updates the internal speed command after each flash based on the error between the timing of the flash and the anticipated timing of the flash derived from a simulated motor plan. These findings suggest that cognitive control of internal states may be understood in terms of the same computational principles as motor control. Control of movements can be understood in terms of the interplay between a controller, a simulator and an estimator. Egger et al. show that cortical neurons establish the same building blocks to control cognitive states in the absence of movement.
Ramina Adam; Kevin D Johnston; Stefan Everling
In: Journal of Neurophysiology, 122 (2), pp. 672–690, 2019.
The caudal primate prefrontal cortex (PFC) is involved in target selection and visually guided saccades through both covert attention and overt orienting eye movements. Unilateral damage to the caudal PFC often leads to decreased awareness of a contralesional target alone, referred to as “neglect,” or when it is presented simultaneously with an ipsilesional target, referred to as “extinction.” In the current study, we examined whether deficits in contralesional target selection were due to contralesional oculomotor deficits, such as slower reaction times. We experimentally induced a focal ischemic lesion in the right caudal PFC of 4 male macaque monkeys using the vasoconstrictor endothelin-1 and measured saccade choice and reaction times on double-stimulus free-choice tasks and single-stimulus trials before and after the lesion. We found that 1) endothelin-1-induced lesions in the caudal PFC produced contralesional target selection deficits that varied in severity and duration based on lesion volume and location; 2) contralesional neglect-like deficits were transient and recovered by week 4 postlesion; 3) contralesional extinction-like deficits were longer lasting and recovered by weeks 8–16 postlesion; 4) contralesional reaction time returned to baseline well before the contralesional choice deficit had recovered; and 5) neither the mean reaction times nor the reaction time distributions could account for the degree of contralesional extinction on the free-choice task throughout recovery. These findings demonstrate that the saccade choice bias observed after a right caudal PFC lesion is not exclusively due to contralesional motor deficits, but instead reflects a combination of impaired motor and attentional processing.
Kun Guo; Zhihan Li; Yin Yan; Wu Li
In: Experimental Brain Research, 237 (8), pp. 2045–2059, 2019.
Common facial expressions of emotion have distinctive patterns of facial muscle movements that are culturally similar among humans, and perceiving these expressions is associated with stereotypical gaze allocation at local facial regions that are characteristic for each expression, such as eyes in angry faces. It is, however, unclear to what extent this ‘universality' view can be extended to the processing of heterospecific facial expressions, and how a ‘social learning' process contributes to heterospecific expression perception. In this eye-tracking study, we examined face-viewing gaze allocation of human (including dog owners and non-dog owners) and monkey observers while exploring expressive human, chimpanzee, monkey and dog faces (positive, neutral and negative expressions in human and dog faces; neutral and negative expressions in chimpanzee and monkey faces). Human observers showed species- and experience-dependent expression categorization accuracy. Furthermore, both human and monkey observers demonstrated different face-viewing gaze distributions which were also species dependent. Specifically, humans predominantly attended to human eyes but to animal mouths when judging facial expressions. Monkeys' gaze distributions in exploring human and monkey faces were qualitatively different from exploring chimpanzee and dog faces. Interestingly, the gaze behaviour of both human and monkey observers was further affected by their prior experience of the viewed species. It seems that facial expression processing is species dependent, and social learning may play a significant role in discriminating even rudimentary types of heterospecific expressions.
Florian Sandhaeger; Constantin von Nicolai; Earl K Miller; Markus Siegel
In: eLife, 8 , pp. 1–21, 2019.
It remains challenging to relate EEG and MEG to underlying circuit processes and comparable experiments on both spatial scales are rare. To close this gap between invasive and non-invasive electrophysiology we developed and recorded human-comparable EEG in macaque monkeys during visual stimulation with colored dynamic random dot patterns. Furthermore, we performed simultaneous microelectrode recordings from 6 areas of macaque cortex and human MEG. Motion direction and color information were accessible in all signals. Tuning of the non-invasive signals was similar to V4 and IT, but not to dorsal and frontal areas. Thus, MEG and EEG were dominated by early visual and ventral stream sources. Source level analysis revealed corresponding information and latency gradients across cortex. We show how information-based methods and monkey EEG can identify analogous properties of visual processing in signals spanning spatial scales from single units to MEG – a valuable framework for relating human and animal studies.
Junxiang Luo; Keyan He; Ian Max Andolina; Xiaohong Li; Jiapeng Yin; Zheyuan Chen; Yong Gu; Wei Wang
In: Journal of Neuroscience, 39 (14), pp. 2664–2685, 2019.
Studying the mismatch between perception and reality helps us better understand the constructive nature of the visual brain. The Pinna-Brelstaff motion illusion is a compelling example illustrating how a complex moving pattern can generate an illusory motion perception. When an observer moves toward (expansion) or away (contraction) from the Pinna-Brelstaff figure, the figure appears to rotate. The neural mechanisms underlying the illusory complex-flow motion of rotation, expansion, and contraction remain unknown. We studied this question at both perceptual and neuronal levels in behaving male macaques by using carefully parametrized Pinna-Brelstaff figures that induce the above motion illusions. We first demonstrate that macaques perceive illusory motion in a manner similar to that of human observers. Neurophysiological recordings were subsequently performed in the middle temporal area (MT) and the dorsal portion of the medial superior temporal area (MSTd). We find that subgroups of MSTd neurons encoding a particular global pattern of real complex-flow motion (rotation, expansion, contraction) also represent illusory motion patterns of the same class. They require an extra 15 ms to reliably discriminate the illusion. In contrast, MT neurons encode both real and illusory local motions with similar temporal delays. These findings reveal that illusory complex-flow motion is first represented in MSTd by the same neurons that normally encode real complex-flow motion. However, the extraction of global illusory motion in MSTd from other classes of real complex-flow motion requires extra processing time. Our study illustrates a cascaded integration mechanism from MT to MSTd underlying the transformation from external physical to internal nonveridical flow-motion perception.
Liya Ma; Jason L Chan; Kevin D Johnston; Stephen G Lomber; Stefan Everling
In: PLoS Biology, 17 (7), pp. e3000045, 2019.
In primates, both the dorsal anterior cingulate cortex (dACC) and the dorsolateral prefrontal cortex (dlPFC) are key regions of the frontoparietal cognitive control network. To study the role of the dACC and its communication with the dlPFC in cognitive control, we recorded local field potentials (LFPs) from the dlPFC before and during the reversible deactivation of the dACC, in macaque monkeys engaging in uncued switches between 2 stimulus-response rules, namely prosaccade and antisaccade. Cryogenic dACC deactivation impaired response accuracy during maintenance of—but not the initial switching to—the cognitively demanding antisaccade rule, which coincided with a reduction in task-related theta activity and the correct-error (C-E) difference in dlPFC beta-band power. During both rule switching and maintenance, dACC deactivation prolonged the animals' reaction time and reduced task-related alpha power in the dlPFC. Our findings support a role of the dACC in prefrontal oscillatory activities that are involved in the maintenance of a new, challenging task rule.
Corentin Massot; Uday K Jagadisan; Neeraj J Gandhi
In: Communications Biology, 2 , pp. 1–14, 2019.
The superior colliculus (SC) is an excellent substrate to study sensorimotor transformations. To date, the spatial and temporal properties of population activity along its dorsoventral axis have been inferred from single electrode studies. Here, we recorded SC population activity in non-human primates using a linear multi-contact array during delayed saccade tasks. We show that during the visual epoch, information appeared first in dorsal layers and systematically later in ventral layers. During the delay period, the laminar organization of low-spiking rate activity matched that of the visual epoch. During the pre-saccadic epoch, spiking activity emerged first in a more ventral layer, ~100 ms before saccade onset. This buildup of activity appeared later on nearby neurons situated both dorsally and ventrally, culminating in a synchronous burst across the dorsoventral axis, ~28 ms before saccade onset. Collectively, these results reveal a principled spatiotemporal organization of SC population activity underlying sensorimotor transformation for the control of gaze.
Vincent B McGinty
In: eNeuro, 6 (6), pp. 1–19, 2019.
Neural representations of value underlie many behaviors that are crucial for survival. Previously, we found that value representations in primate orbitofrontal cortex (OFC) are modulated by attention, specifically, by overt shifts of gaze toward or away from reward-associated visual cues (McGinty et al., 2016). Here, we investigate the influence of overt attention on behavior by asking how gaze shifts correlate with reward anticipatory responses and whether activity in OFC mediates this correlation. Macaque monkeys viewed pavlovian conditioned appetitive cues on a visual display, while the fraction of time they spent looking toward or away from the cues was measured using an eye tracker. Also measured during cue presentation were the reward anticipation, indicated by conditioned licking responses (CRs), and single-neuron activity in OFC. In general, gaze allocation predicted subsequent licking responses: the longer the monkeys spent looking at a cue at a given time point in a trial, the more likely they were to produce an anticipatory CR later in that trial, as if the subjective value of the cue were increased. To address neural mechanisms, mediation analysis measured the extent to which the gaze–CR correlation could be statistically explained by the concurrently recorded firing of OFC neurons. The resulting mediation effects were indistinguishable from chance. Therefore, while overt attention may increase the subjective value of reward-associated cues (as revealed by anticipatory behaviors), the underlying mechanism remains unknown, as does the functional significance of gaze-driven modulation of OFC value signals.
Priyanka S Mehta; Jiaxin Cindy Tu; Giuliana A LoConte; Meghan C Pesce; Benjamin Y Hayden
In: Journal of Neuroscience, 39 (27), pp. 5336–5350, 2019.
To make efficient foraging decisions, we must combine information about the values of available options with nonvalue information. Some accounts of ventromedial PFC (vmPFC) suggest that it has a narrow role limited to evaluating immediately available options. We examined responses of neurons in area 14 (a putative macaque homolog of human vmPFC) as 2 male macaques performed a novel foraging search task. Although many neurons encoded the values of immediately available offers, they also independently encoded several other variables that influence choice, but that are conceptually distinct from offer value. These variables include average reward rate, number of offers viewed per trial, previous offer values, previous outcome sizes, and the locations of the currently attended offer. We conclude that, rather than serving as a specialized economic value center, vmPFC plays a broad role in integrating relevant environmental information to drive foraging decisions.
Adam P Morris; Bart Krekelberg
A stable visual world in primate primary visual cortex
In: Current Biology, 29 (9), pp. 1471–1480, 2019.
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina—and propagated throughout the visual cortical hierarchy—is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here, we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in area V1 of macaque monkeys during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements. After a fast eye movement, the eye-position signal arrived in V1 at approximately the same time at which the new visual information arrived from the retina. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable position in the world. Visual input arrives as a series of snapshots, each taken from a different line of sight, due to eye movements from one part of a scene to another. How do we nevertheless see a stable visual world? Morris and Krekelberg show that in primary visual cortex, the neural representation of each snapshot includes “metadata” that tracks gaze direction.
Aidan P Murphy; David A Leopold
In: Journal of Neuroscience Methods, 324 , pp. 1–14, 2019.
Background: Rhesus macaques are the most popular model species for studying the neural basis of visual face processing and social interaction using intracranial methods. However, the challenge of creating realistic, dynamic, and parametric macaque face stimuli has limited the experimental control and ethological validity of existing approaches. New method: We performed statistical analyses of in vivo computed tomography data to generate an anatomically accurate, three-dimensional representation of Rhesus macaque craniofacial morphology. The surface structures were further edited, rigged and textured by a professional digital artist with careful reference to photographs of macaque facial expression, colouration and pelage. Results: The model offers precise, continuous, parametric control of craniofacial shape, emotional expression, head orientation, eye gaze direction, and many other parameters that can be adjusted to render either static or dynamic high-resolution faces. Example single-unit responses to such stimuli in macaque inferotemporal cortex demonstrate the value of parametric control over facial appearance and behaviours. Comparison with existing method(s): The generation of such a high-dimensionality and systematically controlled stimulus set of conspecific faces, with accurate craniofacial modelling and professional finalization of facial details, is currently not achievable using existing methods. Conclusions: The results herald a new set of possibilities in adaptive sampling of a high-dimensional and socially meaningful feature space, thus opening the door to systematic testing of hypotheses about the abundant neural specialization for faces found in the primate brain.
Sunny Nigam; Sorin A Pojoga; Valentin Dragoi
Synergistic coding of visual information in columnar networks Journal Article
In: Neuron, 104 , pp. 402–411, 2019.
Incoming stimuli are encoded collectively by populations of cortical neurons, which transmit information by using a neural code thought to be predominantly redundant. Redundant coding is widely believed to reflect a design choice whereby neurons with overlapping receptive fields sample environmental stimuli to convey similar information. Here, we performed multi-electrode laminar recordings in awake monkey V1 to report significant synergistic interactions between nearby neurons within a cortical column. These interactions are clustered non-randomly across cortical layers to form synergy and redundancy hubs. Homogeneous sub-populations comprising synergy hubs decode stimulus information significantly better than redundancy hubs or heterogeneous sub-populations. Mechanistically, synergistic interactions emerge from the stimulus dependence of correlated activity between neurons. Our findings suggest a refinement of the prevailing ideas regarding coding schemes in sensory cortex: columnar populations can efficiently encode information due to synergistic interactions even when receptive fields overlap and shared noise between cells is high.
Kaiser Niknam; Amir Akbarian; Kelsey Clark; Yasin Zamani; Behrad Noudoost; Neda Nategh
In many brain areas, sensory responses are heavily modulated by factors including attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. Modelling the effect of these modulatory factors on sensory responses has proven challenging, mostly due to the time-varying and nonlinear nature of the underlying computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on neuronal responses on the order of milliseconds. The model's performance is tested on extrastriate perisaccadic visual responses in nonhuman primates. Visual neurons respond to stimuli presented around the time of saccades differently than during fixation. These perisaccadic changes include sensitivity to the stimuli presented at locations outside the neuron's receptive field, which suggests a contribution of multiple sources to perisaccadic response generation. Current computational approaches cannot quantitatively characterize the contribution of each modulatory source in response generation, mainly due to the very short timescale on which the saccade takes place. In this study, we use a high spatiotemporal resolution experimental paradigm along with a novel extension of the generalized linear model (GLM) framework, termed the sparse-variable GLM, to allow for time-varying model parameters representing the temporal evolution of the system with a resolution on the order of milliseconds. We used this model framework to precisely map the temporal evolution of the spatiotemporal receptive field of visual neurons in the middle temporal area during the execution of a saccade. Moreover, an extended model based on a factorization of the sparse-variable GLM allowed us to dissociate and quantify the contribution of individual sources to the perisaccadic response. Our results show that our novel framework can precisely capture the changes in sensitivity of neurons around the time of saccades, and provide a general framework to quantitatively track the role of multiple modulatory sources over time.
Mariann Oemisch; Stephanie Westendorff; Marzyeh Azimi; Seyed Alireza Hassani; Salva Ardid; Paul Tiesinga; Thilo Womelsdorf
In: Nature Communications, 10 , pp. 176, 2019.
To adjust expectations efficiently, prediction errors need to be associated with the precise features that gave rise to the unexpected outcome, but this credit assignment may be problematic if stimuli differ on multiple dimensions and it is ambiguous which feature dimension caused the outcome. Here, we report a potential solution: neurons in four recorded areas of the anterior fronto-striatal networks encode prediction errors that are specific to feature values of different dimensions of attended multidimensional stimuli. The most ubiquitous prediction error occurred for the reward-relevant dimension. Feature-specific prediction error signals a) emerge on average shortly after non-specific prediction error signals, b) arise earliest in the anterior cingulate cortex and later in dorsolateral prefrontal cortex, caudate and ventral striatum, and c) contribute to feature-based stimulus selection after learning. Thus, a widely-distributed feature-specific eligibility trace may be used to update synaptic weights for improved feature-based attention.
Davide Paoletti; Christoph Braun; Elisabeth Julie Vargo; Wieske van Zoest
In: European Journal of Neuroscience, 49 , pp. 137–149, 2019.
Previous behavioural studies have accrued evidence that response time plays a critical role in determining whether selection is influenced by stimulus saliency or target template. In the present work, we investigated to what extent the variations in timing and consequent oculomotor control are influenced by spontaneous variations in pre-stimulus alpha oscillations. We simultaneously recorded brain activity using magnetoencephalography (MEG) and eye movements while participants performed a visual search task. Our results show that slower saccadic reaction times were predicted by an overall stronger alpha power in the 500 ms time window preceding the stimulus onset, while weaker alpha power was a signature of faster responses. When looking separately at performance for fast and slow responses, we found evidence for two specific sources of alpha activity predicting correct versus incorrect responses. When saccades were quickly elicited, errors were predicted by stronger alpha activity in posterior areas, comprising the angular gyrus in the temporal-parietal junction (TPJ) and possibly the lateral intraparietal area (LIP). Instead, when participants were slower in responding, an increase of alpha power in frontal eye fields (FEF), supplementary eye fields (SEF) and dorsolateral pre-frontal cortex (DLPFC) predicted erroneous saccades. In other words, oculomotor accuracy in fast responses was predicted by alpha power differences in more posterior areas, while the accuracy in slow responses was predicted by alpha power differences in frontal areas, in line with the idea that these areas may be differentially related to stimulus-driven and goal-driven control of selection.
Michael A Paradiso; Seth Akers-Campbell; Octavio Ruiz; James E Niemeyer; Stuart Geman; Jackson Loper
In: Frontiers in Integrative Neuroscience, 12 , pp. 1–18, 2019.
Approximately three times per second, human visual perception is interrupted by a saccadic eye movement. In addition to taking the eyes to a new location, several lines of evidence suggest that saccades play multiple roles in visual perception. Indeed, it may be crucial that visual processing is informed about movements of the eyes in order to analyze visual input distinctly and efficiently on each fixation and preserve stable visual perception of the world across saccades. A variety of studies have demonstrated that activity in multiple brain areas is modulated by saccades. The hypothesis tested here is that these signals carry significant information that could be used in visual processing. To test this hypothesis, local field potentials (LFPs) were simultaneously recorded from multiple electrodes in macaque primary visual cortex (V1); support vector machines (SVMs) were used to classify the peri-saccadic LFPs. We find that LFPs in area V1 carry information that can be used to distinguish neural activity associated with fixations from saccades, precisely estimate the onset time of fixations, and reliably infer the directions of saccades. This information may be used by the brain in processes including visual stability, saccadic suppression, receptive field (RF) remapping, fixation amplification, and trans-saccadic visual perception.
Aishwarya Parthasarathy; Cheng Tang; Roger Herikstad; Loong Fah Cheong; Shih Cheng Yen; Camilo Libedinsky
In: Nature Communications, 10 , pp. 4995, 2019.
Maintenance of working memory is thought to involve the activity of prefrontal neuronal populations with strong recurrent connections. However, it was recently shown that distractors evoke a morphing of the prefrontal population code, even when memories are maintained throughout the delay. How can a morphing code maintain time-invariant memory information? We hypothesized that dynamic prefrontal activity contains time-invariant memory information within a subspace of neural activity. Using an optimization algorithm, we found a low-dimensional subspace that contains time-invariant memory information. This information was reduced in trials where the animals made errors in the task, and was also found in periods of the trial not used to find the subspace. A bump attractor model replicated these properties, and provided predictions that were confirmed in the neural data. Our results suggest that the high-dimensional responses of prefrontal cortex contain subspaces where different types of information can be simultaneously encoded with minimal interference.
Alina Peter; Cem Uran; Johanna Klon-Lipok; Rasmus Roese; Sylvia Van Stijn; William Barnes; Jarrod R Dowdall; Wolf Singer; Pascal Fries; Martin Vinck
In: eLife, 8 , pp. 1–38, 2019.
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. By contrast, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma-responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
Dina V Popovkina; Wyeth Bair; Anitha Pasupathy
In: Journal of Neurophysiology, 121 (3), pp. 1059–1077, 2019.
Visual area V4 is an important midlevel cortical processing stage that subserves object recognition in primates. Studies investigating shape coding in V4 have largely probed neuronal responses with filled shapes, i.e., shapes defined by both a boundary and an interior fill. As a result, we do not know whether form-selective V4 responses are dictated by boundary features alone or if interior fill is also important. We studied 43 V4 neurons in two male macaque monkeys (Macaca mulatta) with a set of 362 filled shapes and their corresponding outlines to determine how interior fill modulates neuronal responses in shape-selective neurons. Only a minority of neurons exhibited similar response strength and shape preferences for filled and outline stimuli. A majority responded preferentially to one stimulus category (either filled or outline shapes) and poorly to the other. Our findings are inconsistent with predictions of the hierarchical-max (HMax) V4 model that builds form selectivity from oriented boundary features and takes little account of attributes related to object surface, such as the phase of the boundary edge. We modified the V4 HMax model to include sensitivity to interior fill by either removing phase-pooling or introducing unoriented units at the V1 level; both modifications better explained our data without increasing the number of free parameters. Overall, our results suggest that boundary orientation and interior surface information are both maintained until at least the midlevel visual representation, consistent with the idea that object fill is important for recognition and perception in natural vision.
Rishi Rajalingham; James J DiCarlo
In: Neuron, 102 , pp. 493–505, 2019.
Extensive research suggests that the inferior temporal (IT) population supports visual object recognition behavior. However, causal evidence for this hypothesis has been equivocal, particularly beyond the specific case of face-selective subregions of IT. Here, we directly tested this hypothesis by pharmacologically inactivating individual, millimeter-scale subregions of IT while monkeys performed several core object recognition subtasks, interleaved trial-by-trial. First, we observed that IT inactivation resulted in reliable contralateral-biased subtask-selective behavioral deficits. Moreover, inactivating different IT subregions resulted in different patterns of subtask deficits, predicted by each subregion's neuronal object discriminability. Finally, the similarity between different inactivation effects was tightly related to the anatomical distance between corresponding inactivation sites. Taken together, these results provide direct evidence that the IT cortex causally supports general core object recognition and that the underlying IT coding dimensions are topographically organized.
Douglas A Ruff; Marlene R Cohen
In: Nature Neuroscience, 22 , pp. 1669–1676, 2019.
Visual attention dramatically improves individuals' ability to see and modulates the responses of neurons in every known visual and oculomotor area, but whether such modulations can account for perceptual improvements is unclear. We measured the relationship between populations of visual neurons, oculomotor neurons and behavior during detection and discrimination tasks. We found that neither of the two prominent hypothesized neuronal mechanisms underlying attention (which concern changes in information coding and the way sensory information is read out) provide a satisfying account of the observed behavioral improvements. Instead, our results are more consistent with the hypothesis that attention reshapes the representation of attended stimuli to more effectively influence behavior. Our results suggest a path toward understanding the neural underpinnings of perception and cognition in health and disease by analyzing neuronal responses in ways that are constrained by behavior and interactions between brain areas.
Amirsaman Sajad; David C Godlove; Jeffrey D Schall
Cortical microcircuitry of performance monitoring Journal Article
In: Nature Neuroscience, 22 , pp. 265–274, 2019.
The medial frontal cortex enables performance monitoring, indexed by the error-related negativity (ERN) and manifested by performance adaptations. We recorded electroencephalogram over and neural spiking across all layers of the supplementary eye field, an agranular cortical area, in monkeys performing a saccade-countermanding (stop signal) task. Neurons signaling error production, feedback predicting reward gain or loss, and delivery of fluid reward had different spike widths and were concentrated differently across layers. Neurons signaling error or loss of reward were more common in layers 2 and 3 (L2/3), whereas neurons signaling gain of reward were more common in layers 5 and 6 (L5/6). Variation of error- and reinforcement-related spike rates in L2/3 but not L5/6 predicted response time adaptation. Variation in error-related spike rate in L2/3 but not L5/6 predicted ERN magnitude. These findings reveal novel features of cortical microcircuitry supporting performance monitoring and confirm one cortical source of the ERN.
Jason M Samonds; Veronica Choi; Nicholas J Priebe
In: Journal of Neuroscience, 39 (41), pp. 8024–8037, 2019.
Stereopsis is a ubiquitous feature of primate mammalian vision, but little is known about if and how rodents such as mice use stereoscopic vision. We used random dot stereograms to test for stereopsis in male and female mice, and they were able to discriminate near from far surfaces over a range of disparities, with diminishing performance for small and large binocular disparities. Based on two-photon measurements of disparity tuning, the range of disparities represented in the visual cortex aligns with the behavior and covers a broad range of disparities. When we examined their binocular eye movements, we found that, unlike primates, mice did not systematically vary relative eye positions or use vergence eye movements when presented with different disparities. Nonetheless, the representation of disparity tuning was wide enough to capture stereoscopic information over a range of potential vergence angles. Although mice share fundamental characteristics of stereoscopic vision with primates and carnivores, their lack of disparity-dependent vergence eye movements and wide neuronal representation suggests that they may use a distinct strategy for stereopsis.
Morteza Sarafyazd; Mehrdad Jazayeri
Hierarchical reasoning by neural circuits in the frontal cortex Journal Article
In: Science, 364 , pp. 1–11, 2019.
Humans process information hierarchically. In the presence of hierarchies, sources of failures are ambiguous. Humans resolve this ambiguity by assessing their confidence after one or more attempts. To understand the neural basis of this reasoning strategy, we recorded from dorsomedial frontal cortex (DMFC) and anterior cingulate cortex (ACC) of monkeys in a task in which negative outcomes were caused either by misjudging the stimulus or by a covert switch between two stimulus-response contingency rules. We found that both areas harbored a representation of evidence supporting a rule switch. Additional perturbation experiments revealed that ACC functioned downstream of DMFC and was directly and specifically involved in inferring covert rule switches. These results reveal the computational principles of hierarchical reasoning, as implemented by cortical circuits.
Veronica E Scerra; M Gabriela Costello; Emilio Salinas; Terrence R Stanford
In: Current Biology, 29 (2), pp. 294–305, 2019.
Choices of where to look are informed by perceptual judgments, which locate objects of current value or interest within the visual scene. This perceptual-motor transform is partly implemented in the frontal eye field (FEF), where visually responsive neurons appear to select behaviorally relevant visual targets and, subsequently, saccade-related neurons select the movements required to look at them. Here, we use urgent decision-making tasks to show (1) that FEF motor activity can direct accurate, visually informed choices in the complete absence of prior target-distracter discrimination by FEF visual responses and (2) that such discrimination by FEF visual cells shows an all-or-none reliance on the presence of stimulus attributes strongly associated with saliency-driven attentional allocation. The present findings suggest that FEF visual target selection is specific to visual judgments made on the basis of saliency and may not play a significant role in guiding saccadic choices informed solely by feature content.
Shiva Farashahi; Christopher H Donahue; Benjamin Y Hayden; Daeyeol Lee; Alireza Soltani
Flexible combination of reward information across primates Journal Article
In: Nature Human Behaviour, 3 (11), pp. 1215–1224, 2019.
A fundamental but rarely contested assumption in economics and neuroeconomics is that decision-makers compute subjective values of risky options by multiplying functions of reward probability and magnitude. By contrast, an additive strategy for valuation allows flexible combination of reward information required in uncertain or changing environments. We hypothesized that the level of uncertainty in the reward environment should determine the strategy used for valuation and choice. To test this hypothesis, we examined choice between risky options in humans and rhesus macaques across three tasks with different levels of uncertainty. We found that whereas humans and monkeys adopted a multiplicative strategy under risk when probabilities are known, both species spontaneously adopted an additive strategy under uncertainty when probabilities must be learned. Additionally, the level of volatility influenced relative weighting of certain and uncertain reward information, and this was reflected in the encoding of reward magnitude by neurons in the dorsolateral prefrontal cortex.