NMDAR-Dependent Emergence of Behavioral Representation in Primary Visual Cortex.
ABSTRACT: Although neocortical sensory areas are generally thought to faithfully represent external stimuli, cortical networks exhibit considerable functional plasticity, allowing them to modify their output to reflect ongoing behavioral demands. We apply longitudinal two-photon imaging of activity in the primary visual cortex (V1) of mice learning a conditioned eyeblink task to investigate the dynamic representations of task-relevant information. We find that, although all V1 neurons robustly and stably encode visual input, pyramidal cells and parvalbumin-expressing interneurons exhibit experience-dependent emergence of accurate behavioral representations during learning. The functional plasticity driving performance-predictive activity requires cell-autonomous expression of NMDA-type glutamate receptors. Our findings demonstrate that accurate encoding of behavioral output is not inherent to V1 but develops during learning to support visual task performance.
Project description: We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, arising through both the stabilization of existing stimulus-selective neurons and the recruitment of new ones. These effects correlated with the appearance of multiple task-dependent signals during learning: signals that increased neuronal selectivity across the population when expert animals engaged in the task, and signals reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli.
Project description: Training can modify the visual system to produce substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into the processes underlying brain plasticity but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Near-cardinal orientations are represented more reliably than oblique orientations in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
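The core ingredients of such a model can be illustrated with a minimal sketch. The code below is not the authors' model; it is a hypothetical toy implementation assuming Gaussian orientation noise whose magnitude grows with distance from the cardinal axes, a max-rule (closest-percept) search decision, and separate noise-reduction gains for targets and distractors to stand in for learning. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_sd(orientation_deg):
    """Hypothetical anisotropic encoding noise (degrees): near-cardinal
    orientations (0/90 deg) are encoded more reliably than obliques (45 deg)."""
    dist_to_cardinal = min(orientation_deg % 90, 90 - orientation_deg % 90)
    return 3.0 + 12.0 * dist_to_cardinal / 45.0

def search_trial(target_ori, distractor_ori, n_items=8, gain_t=1.0, gain_d=1.0):
    """One target among (n_items - 1) distractors. The observer reports the
    item whose noisy percept lies closest to the target orientation
    (orientation circularity is ignored for simplicity)."""
    true_oris = np.full(n_items, float(distractor_ori))
    true_oris[0] = target_ori
    sds = np.array([noise_sd(o) for o in true_oris])
    sds[0] /= gain_t   # learning sharpens target encoding...
    sds[1:] /= gain_d  # ...and distractor encoding, at its own rate
    percepts = true_oris + rng.normal(0.0, sds)
    return np.argmin(np.abs(percepts - target_ori)) == 0  # correct choice?

def accuracy(target_ori, distractor_ori, n_trials=3000, **kw):
    return np.mean([search_trial(target_ori, distractor_ori, **kw)
                    for _ in range(n_trials)])
```

In this toy version, search for an oblique target among near-cardinal distractors is easier than the reverse (the classic asymmetry), because errors arise mainly when noisy distractors masquerade as the target; increasing the distractor gain (`gain_d`), mimicking greater learning for low-reliability distractors, is what raises accuracy in the hard near-cardinal-target condition.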
Project description: Visual perceptual learning (VPL) can lead to long-lasting perceptual improvements. One of the central topics in VPL studies is the locus of plasticity in the visual processing hierarchy. Here, we tackled this question in the context of motion processing. We took advantage of an established transition from component-dependent representations at the earliest level to pattern-dependent representations at the middle level of cortical motion processing. Two groups of participants were trained on the same motion direction identification task using either grating or plaid stimuli. A set of pre- and post-training tests was used to determine the degree of learning specificity and generalizability. This approach allowed us to disentangle contributions from different levels of processing to behavioral improvements. We observed a complete bidirectional transfer of learning between component and pattern stimuli moving in the same directions, indicating learning-induced plasticity associated with intermediate levels of motion processing. Moreover, we found that motion VPL is specific to the trained stimulus direction, speed, size, and contrast, diminishing the possibility of non-sensory decision-level enhancements. Taken together, these results indicate that, at least for the type of stimuli and the task used here, motion VPL most likely alters visual computation associated with signals at the middle stage of motion processing.
Project description: Visual perceptual learning (VPL) is long-term performance improvement resulting from perceptual experience. It is unclear whether VPL is associated with refinement of representations of the trained feature (feature-based plasticity), improvement in processing of the trained task (task-based plasticity), or both. Here, we provide empirical evidence that VPL of motion detection is associated with both types of plasticity, which occur predominantly in different brain areas. Before and after training on a motion detection task, subjects' neural responses to the trained motion stimuli were measured using functional magnetic resonance imaging. In V3A, significant response changes after training were observed specifically for the trained motion stimulus but independently of whether subjects performed the trained task. This suggests that the response changes in V3A represent feature-based plasticity in VPL of motion detection. In V1 and the intraparietal sulcus, significant response changes were found only when subjects performed the trained task on the trained motion stimulus. This suggests that the response changes in these areas reflect task-based plasticity. These results collectively suggest that VPL of motion detection is associated with the two types of plasticity, which occur in different areas and therefore have, at least to some degree, separate mechanisms.
Project description: Perceptual learning is regarded as a manifestation of experience-dependent plasticity in the sensory systems, yet the underlying neural mechanisms remain unclear. We measured the dynamics of performance on a visual task and brain activation in the human primary visual cortex (V1) across the time course of perceptual learning. Within the first few weeks of training, brain activation in a V1 subregion corresponding to the trained visual field quadrant and task performance both increased. However, while performance levels then saturated and were maintained at a constant level, brain activation in the corresponding areas decreased to the level observed before training. These findings indicate that there are distinct temporal phases in the time course of perceptual learning, related to differential dynamics of BOLD activity in visual cortex.
Project description: Cannabinoids are notorious and profound modulators of behavioral state. In the brain, endocannabinoids act via type 1 cannabinoid receptors (CB1) to modulate synaptic transmission and mediate multiple forms of synaptic plasticity. CB1 knockout (CB1KO) mice display a range of behavioral phenotypes, in particular hypoactivity and various deficits in learning and memory, including cerebellum-dependent delay eyeblink conditioning. Here we find that the apparent effects of CB1 deletion on cerebellar learning are not due to direct effects on CB1-dependent plasticity but rather arise as a secondary consequence of altered behavioral state. Hypoactivity of CB1KO mice accounts for their impaired eyeblink conditioning across both animals and trials. Moreover, learning in these mutants is rescued by walking on a motorized treadmill during training. Finally, cerebellar granule-cell-specific CB1KOs exhibit normal eyeblink conditioning, and both global and granule-cell-specific CB1KOs display normal cerebellum-dependent locomotor coordination and learning. These findings highlight the modulation of behavioral state as a powerful independent means through which individual genes contribute to complex behaviors.
Project description: The process by which visual information is incorporated into the brain's spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. This encoding was more robust after task acquisition. Decoupling the virtual environment from mouse behavior degraded spatial representations and provided evidence that supralinear integration of visual and motor inputs contributes to landmark encoding. V1 axons recorded in RSC were less modulated by task engagement but showed surprisingly similar spatial tuning. Our data indicate that landmark representations in RSC are the result of local integration of visual, motor, and spatial information.
Project description: Sensory information is translated into ensemble representations by various populations of projection neurons in brain circuits. The dynamics of ensemble representations formed by distinct channels of output neurons in diverse behavioral contexts remain largely unknown. We studied the two output neuron layers in the olfactory bulb (OB), mitral and tufted cells, using chronic two-photon calcium imaging in awake mice. Both output populations displayed similar odor response profiles. During passive sensory experience, both populations showed reorganization of ensemble odor representations yet stable pattern separation across days. Intriguingly, during active odor discrimination learning, mitral but not tufted cells exhibited improved pattern separation, although both populations showed reorganization of ensemble representations. An olfactory circuitry model suggests that cortical feedback onto OB interneurons can trigger both forms of plasticity. In conclusion, we show that the different OB output layers display distinct context-dependent long-term ensemble plasticity, allowing parallel transfer of non-redundant sensory information to downstream centers.
Project description: Neurons in rodent primary visual cortex (V1) relate operantly conditioned stimulus-reward intervals to modulated patterns of spiking output, but little is known about the locus or mechanism of this plasticity. Here we show that cholinergic basal forebrain projections to V1 are necessary for the neural acquisition, but not the expression, of reward timing in the visual cortex of awake, behaving animals. We then mimic reward timing in vitro by pairing white matter stimulation with muscarinic receptor activation at a fixed interval and show that this protocol prolongs electrically evoked spike train durations out to the conditioned interval. Together, these data suggest that V1 possesses the circuitry and plasticity to support reward time prediction learning, and that the cholinergic system serves as an important reinforcement signal which, in vivo, conveys to the cortex the outcome of behavior.
Project description: The integration of visual stimuli and motor feedback is critical for successful visually guided navigation. These signals have been shown to shape neuronal activity in the primary visual cortex (V1) in an experience-dependent manner. Here, we examined whether visual, reward, and self-motion-related inputs are integrated to encode behaviorally relevant locations in V1 neurons. Using a behavioral task in a virtual environment, we monitored layer 2/3 neuronal activity as mice learned to locate a reward along a linear corridor. With learning, a subset of neurons became responsive to the expected reward location. Without a visual cue to the reward location, both behavioral and neuronal responses relied on self-motion-derived estimations. However, when visual cues were available, both neuronal and behavioral responses were driven by visual information. Therefore, a population of V1 neurons encodes behaviorally relevant spatial locations, based on either visual cues or, when visual cues are absent, on self-motion feedback.