State-specific gating of salient cues by midbrain dopaminergic input to basal amygdala.
ABSTRACT: Basal amygdala (BA) neurons guide associative learning via acquisition of responses to stimuli that predict salient appetitive or aversive outcomes. We examined the learning- and state-dependent dynamics of BA neurons and ventral tegmental area (VTA) dopamine (DA) axons that innervate BA (VTADA→BA) using two-photon imaging and photometry in behaving mice. BA neurons did not respond to arbitrary visual stimuli, but acquired responses to stimuli that predicted either rewards or punishments. Most VTADA→BA axons were activated by both rewards and punishments, and they acquired responses to cues predicting these outcomes during learning. Responses to cues predicting food rewards in VTADA→BA axons and BA neurons in hungry mice were strongly attenuated following satiation, while responses to cues predicting unavoidable punishments persisted or increased. Therefore, VTADA→BA axons may provide a reinforcement signal of motivational salience that invigorates adaptive behaviors by promoting learned responses to appetitive or aversive cues in distinct, intermingled sets of BA excitatory neurons.
Project description: Making predictions about future rewards or punishments is fundamental to adaptive behavior. These processes are influenced by prior experience. For example, prior exposure to aversive stimuli or stressors changes behavioral responses to negative- and positive-value predictive cues. Here, we demonstrate a role for medial prefrontal cortex (mPFC) neurons projecting to the paraventricular nucleus of the thalamus (PVT; mPFC→PVT) in this process. We found that a history of aversive stimuli negatively biased behavioral responses to motivationally relevant cues in mice and that this negative bias was associated with hyperactivity in mPFC→PVT neurons during exposure to those cues. Furthermore, artificially mimicking this hyperactive response with selective optogenetic excitation of the same pathway recapitulated the negative behavioral bias induced by aversive stimuli, whereas optogenetic inactivation of mPFC→PVT neurons prevented the development of the negative bias. Together, our results highlight how information flow within the mPFC→PVT circuit is critical for making predictions about motivationally relevant outcomes as a function of prior experience.
Project description: Separate studies have implicated the lateral habenula (LHb) or amygdala-related regions in processing aversive stimuli, but their relationships to each other and to appetitive motivational systems are poorly understood. We show that neurons in the recently identified GABAergic rostromedial tegmental nucleus (RMTg), which receive a major LHb input, project heavily to midbrain dopamine neurons, and show phasic activations and/or Fos induction after aversive stimuli (footshocks, shock-predictive cues, food deprivation, or reward omission) and inhibitions after rewards or reward-predictive stimuli. RMTg lesions markedly reduce passive fear behaviors (freezing, open-arm avoidance) dependent on the extended amygdala, periaqueductal gray, or septum, all regions that project directly to the RMTg. In contrast, RMTg lesions spare or enhance active fear responses (treading, escape) in these same paradigms. These findings suggest that aversive inputs from widespread brain regions and stimulus modalities converge onto the RMTg, which opposes reward and motor-activating functions of midbrain dopamine neurons.
Project description: Animals and humans learn to approach and acquire pleasant stimuli and to avoid or defend against aversive ones. However, both pleasant and aversive stimuli can elicit arousal and attention, and their salience or intensity increases when they occur by surprise. Thus, adaptive behavior may require that neural circuits compute both stimulus valence, or value, and intensity. To explore how these computations may be implemented, we examined neural responses in the primate amygdala to unexpected reinforcement during learning. Many amygdala neurons responded differently to reinforcement depending upon whether or not it was expected. In some neurons, this modulation occurred only for rewards or aversive stimuli, but not both. In other neurons, expectation similarly modulated responses to both rewards and punishments. These different neuronal populations may subserve two sorts of processes mediated by the amygdala: those activated by surprising reinforcements of both valences, such as enhanced arousal and attention, and those that are valence-specific, such as fear or reward-seeking behavior.
Project description: Midbrain dopamine neurons are activated by reward or sensory stimuli predicting reward. These excitatory responses increase as the reward value increases. This response property has led to a hypothesis that dopamine neurons encode value-related signals and are inhibited by aversive events. Here we show that this is true only for a subset of dopamine neurons. We recorded the activity of dopamine neurons in monkeys (Macaca mulatta) during a Pavlovian procedure with appetitive and aversive outcomes (liquid rewards and airpuffs directed at the face, respectively). We found that some dopamine neurons were excited by reward-predicting stimuli and inhibited by airpuff-predicting stimuli, as the value hypothesis predicts. However, a greater number of dopamine neurons were excited by both of these stimuli, inconsistent with the hypothesis. Some dopamine neurons were also excited by both rewards and airpuffs themselves, especially when they were unpredictable. Neurons excited by the airpuff-predicting stimuli were located more dorsolaterally in the substantia nigra pars compacta, whereas neurons inhibited by the stimuli were located more ventromedially, some in the ventral tegmental area. A similar anatomical difference was observed for their responses to actual airpuffs. These findings suggest that different groups of dopamine neurons convey motivational signals in distinct manners.
Project description: Schizophrenia and bipolar disorder are associated with different clinical profiles of disturbances in motivation, yet few studies have compared the neurophysiological correlates of such disturbances. Outpatients with schizophrenia (n = 34) or bipolar disorder I (n = 33) and healthy controls (n = 31) completed a task in which the late positive potential (LPP), an index of motivated attention, was assessed along motivational gradients determined by apparent distance from potential rewards or punishments. Sequences of cues signaling possible monetary gains or losses appeared to loom progressively closer to the viewer; a reaction time (RT) task after the final cue determined the outcome. Controls showed the expected pattern with LPPs for appetitive and aversive cues that were initially elevated, smaller during intermediate positions, and escalated just prior to the RT task. The clinical groups showed different patterns in the final positions just prior to the RT task: the bipolar group's LPPs to both types of cues peaked relatively early during looming sequences and subsequently decreased, whereas the schizophrenia group showed relatively small LPP escalations, particularly for aversive cues. These distinct patterns suggest that the temporal unfolding of attentional resource allocation for motivationally significant events may qualitatively differ between these disorders.
Project description: Hard-wired Pavlovian responses elicited by predictions of rewards and punishments exert significant benevolent and malevolent influences over instrumentally appropriate actions. These influences come in two main groups, defined along anatomical, pharmacological, behavioural and functional lines. Investigations of these influences have so far concentrated on the groups as a whole; here we take the critical step of looking inside each group, using a detailed reinforcement learning model to distinguish effects related to value, specific actions, and general activation or inhibition. We show a high degree of sophistication in Pavlovian influences, with appetitive Pavlovian stimuli specifically promoting approach and inhibiting withdrawal, and aversive Pavlovian stimuli promoting withdrawal and inhibiting approach. These influences account for differences in the instrumental performance of approach and withdrawal behaviours. Finally, although losses are as informative as gains, we find that subjects neglect losses in their instrumental learning. Our findings argue for a view of the Pavlovian system as a constraint or prior, facilitating learning by alleviating computational costs that come with increased flexibility.
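The core idea above, that a Pavlovian stimulus value biases the competition between instrumental approach and withdrawal, can be illustrated with a minimal sketch. This is not the authors' fitted model; the function name, the parameter values (`pi`, `beta`), and the simple two-option softmax form are assumptions made purely for illustration:

```python
import math

def approach_probability(q_approach, q_withdraw, stim_value, pi=0.5, beta=3.0):
    """Probability of choosing 'approach' over 'withdraw' (illustrative only).

    q_approach, q_withdraw: instrumental action values learned from outcomes.
    stim_value: Pavlovian stimulus value (positive = appetitive, negative = aversive).
    pi: weight of the Pavlovian bias; beta: softmax inverse temperature.
    """
    # Appetitive value boosts approach; aversive value boosts withdrawal.
    w_app = beta * q_approach + pi * max(stim_value, 0.0)
    w_wd = beta * q_withdraw + pi * max(-stim_value, 0.0)
    # Numerically stable two-option softmax.
    m = max(w_app, w_wd)
    e_app = math.exp(w_app - m)
    e_wd = math.exp(w_wd - m)
    return e_app / (e_app + e_wd)
```

With equal instrumental values, an appetitive cue (positive `stim_value`) pushes choice toward approach and an aversive cue toward withdrawal, which is the qualitative bias pattern described above; a fuller model would also include the value-learning and general activation/inhibition components.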
Project description: Dopamine influences affective, motor and cognitive processing, and multiple forms of learning and memory. This multifaceted functionality, which operates across long temporal windows, is broader than the narrow and temporally constrained role often ascribed to dopamine neurons as reward prediction error detectors. Given the modulatory nature of dopamine neurotransmission, that dopamine release is activated by both aversive and appetitive stimuli, and that dopamine receptors are often localized extrasynaptically, a role for dopamine in transmitting precise error signals has been questioned. Here we recorded from ventral tegmental area (VTA) neurons, while exposing rats to novel stimuli that were predictive of an appetitive or aversive outcome in the same behavioral session. The VTA contains dopamine and γ-aminobutyric acid (GABA) neurons that project to striatal and cortical regions and are strongly implicated in learning and affective processing. The response of VTA neurons, regardless of whether they had putative dopamine or GABA waveforms, transformed flexibly as animals learned to associate novel stimuli from different sensory modalities to appetitive or aversive outcomes. Learning the appetitive association led to larger excitatory VTA responses, whereas acquiring the aversive association led to a biphasic response of brief excitation followed by sustained inhibition. These responses shifted rapidly as outcome contingencies changed. These data suggest that VTA neurons interface sensory information with representational memory of aversive and appetitive events. This pattern of plasticity was not selective for putative dopamine neurons and generalized to other cells, suggesting that the temporally precise information transfer from the VTA may be mediated by faster acting GABA neurons.
Project description: The salience of behaviorally relevant stimuli is dynamic and influenced by internal state and external environment. Monitoring such changes is critical for effective learning and flexible behavior, but the neuronal substrate for tracking the dynamics of stimulus salience is obscure. We found that neurons in the paraventricular thalamus (PVT) are robustly activated by a variety of behaviorally relevant events, including novel ("unfamiliar") stimuli, reinforcing stimuli and their predicting cues, as well as omission of the expected reward. PVT responses are scaled with stimulus intensity and modulated by changes in homeostatic state or behavioral context. Inhibition of the PVT responses suppresses appetitive or aversive associative learning and reward extinction. Our findings demonstrate that the PVT gates associative learning by providing a dynamic representation of stimulus salience.
Project description: Previous studies have not clearly demonstrated whether motivational tendencies during reward feedback are mainly characterized by appetitive responses to a gain or mainly by aversive consequences of reward omission. In the current study this issue was addressed employing a passive heads-or-tails game and using the startle reflex as an index of the appetitive-aversive continuum. A second aim of the current study was to use startle-reflex modulation as a means to compare the subjective value of monetary rewards of varying magnitude. Startle responses after receiving feedback that a potential reward was won or not won were compared with a baseline condition without a potential gain. Furthermore, startle responses during anticipation of no versus potential gain were compared. Consistent with previous studies, startle-reflex magnitudes were significantly potentiated when participants anticipated a reward compared to no reward, which may reflect anticipatory arousal. Specifically for the largest reward (20 cents), startle magnitudes were potentiated when a reward was at stake but not won, compared to a neutral baseline without potential gain. In contrast, startle was not inhibited relative to baseline when a reward was won. This suggests that startle modulation during feedback is better characterized in terms of potentiation when missing out on reward rather than in terms of inhibition as a result of winning. However, neither of these effects was replicated in a more targeted second experiment. The discrepancy between these experiments may be due to differences in motivation to obtain rewards or differences in task engagement. From these experiments it may be concluded that the nature of the processing of reward feedback and reward cues is very sensitive to experimental parameters and settings. These studies show how apparently modest changes in these parameters and settings may lead to quite different modulations of appetitive/aversive motivation.
A future experiment may shed more light on the question of whether startle-reflex modulation after feedback is indeed mainly characterized by the aversive consequences of reward omission for relatively large rewards.
Project description: In studies of appetitive Pavlovian conditioning, rewards are often delivered to subjects in a manner that confounds several processes. For example, delivery of a sugar pellet to a rodent requires movement to collect the pellet and is associated with sensory stimuli such as the sight and sound of the pellet arrival. Thus, any neurochemical events occurring in proximity to the reward may be related to multiple coincident phenomena. We used fast-scan cyclic voltammetry in rats to compare nucleus accumbens dopamine responses to two different modes of delivery: sucrose pellets, which require goal-directed action for their collection and are associated with sensory stimuli, and intraoral infusions of sucrose, which are passively received and not associated with external stimuli. We found that when rewards were unpredicted, both pellets and infusions evoked similar dopamine release. However, when rewards were predicted by distinct cues, greater dopamine release was evoked by pellet cues than infusion cues. Thus, dopamine responses to pellets, infusions, and predictive cues suggest a nuanced role for dopamine in both reward seeking and reward evaluation.