Project description:The retrosplenial cortex is reciprocally connected with multiple structures implicated in spatial cognition, and damage to the region itself produces numerous spatial impairments. Here, we sought to characterize spatial correlates of neurons within the region during free exploration in two-dimensional environments. We report that a large percentage of retrosplenial cortex neurons have spatial receptive fields that are active when environmental boundaries are positioned at a specific orientation and distance relative to the animal itself. We demonstrate that this vector-based location signal is encoded in egocentric coordinates, is localized to the dysgranular retrosplenial subregion, is independent of self-motion, and is context invariant. Further, we identify a subpopulation of neurons with this response property that are synchronized with the hippocampal theta oscillation. Accordingly, the current work identifies a robust egocentric spatial code in retrosplenial cortex that can facilitate spatial coordinate system transformations and support the anchoring, generation, and utilization of allocentric representations.
Project description:The process by which visual information is incorporated into the brain's spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. This encoding was more robust after task acquisition. Decoupling the virtual environment from mouse behavior degraded spatial representations and provided evidence that supralinear integration of visual and motor inputs contributes to landmark encoding. V1 axons recorded in RSC were less modulated by task engagement but showed surprisingly similar spatial tuning. Our data indicate that landmark representations in RSC are the result of local integration of visual, motor, and spatial information.
Project description:Sparse orthogonal coding is a key feature of hippocampal neural activity, which is believed to increase episodic memory capacity and to assist in navigation. Some retrosplenial cortex (RSC) neurons convey distributed spatial and navigational signals, but place-field representations such as observed in the hippocampus have not been reported. Combining cellular Ca2+ imaging in RSC of mice with a head-fixed locomotion assay, we identified a population of RSC neurons, located predominantly in superficial layers, whose ensemble activity closely resembles that of hippocampal CA1 place cells during the same task. Like CA1 place cells, these RSC neurons fire in sequences during movement, and show narrowly tuned firing fields that form a sparse, orthogonal code correlated with location. RSC 'place' cell activity is robust to environmental manipulations, showing partial remapping similar to that observed in CA1. This population code for spatial context may assist the RSC in its role in memory and/or navigation. Neurons in the retrosplenial cortex (RSC) encode spatial and navigational signals. Here the authors use calcium imaging to show that, similar to the hippocampus, RSC neurons also encode place cell-like activity in a sparse orthogonal representation, partially anchored to the allocentric cues on the linear track.
Project description:The retrosplenial cortex (RSC) is thought to be involved in a variety of spatial and contextual memory processes. However, we do not know how contextual information might be encoded in the RSC or whether the RSC representations may be distinct from context representations seen in other brain regions such as the hippocampus. We recorded RSC neuronal responses while rats explored different environments and discovered 2 kinds of context representations: one involving a novel rate code in which neurons reliably fire at a higher rate in the preferred context regardless of spatial location, and a second involving context-dependent spatial firing patterns similar to those seen in the hippocampus. This suggests that the RSC employs a unique dual-factor representational mechanism to support contextual memory.
Project description:Visual landmarks provide powerful reference signals for efficient navigation by altering the activity of spatially tuned neurons, such as place cells, head direction cells, and grid cells. To understand the neural mechanism by which landmarks exert such strong influence, it is necessary to identify how these visual features gain spatial meaning. In this study, we characterized visual landmark representations in mouse retrosplenial cortex (RSC) using chronic two-photon imaging of the same neuronal ensembles over the course of spatial learning. We found a pronounced increase in landmark-referenced activity in RSC neurons that, once established, remained stable across days. Changing behavioral context by uncoupling treadmill motion from visual feedback systematically altered neuronal responses associated with the coherence between visual scene flow speed and self-motion. To explore potential underlying mechanisms, we modeled how burst firing, mediated by supralinear somatodendritic interactions, could efficiently mediate context- and coherence-dependent integration of landmark information. Our results show that visual encoding shifts to landmark-referenced and context-dependent codes as these cues take on spatial meaning during learning.
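To make the supralinear-integration idea above concrete, here is a minimal toy sketch, not the authors' model: a single rate unit whose output includes a multiplicative visual-by-motor term, so coherent visual flow and self-motion are amplified beyond their linear sum while decoupled inputs are not. The function name and gain value are illustrative assumptions.

```python
def supralinear_response(visual_drive, motor_drive, gain=1.5):
    """Toy rate unit: a multiplicative visual-x-motor term boosts the output
    above the linear sum only when both drives are strong at the same time,
    loosely mimicking burst-like somatodendritic amplification."""
    linear = visual_drive + motor_drive
    boost = gain * visual_drive * motor_drive  # supralinear (multiplicative) term
    return max(0.0, linear + boost)

# Coherent condition: running speed matches visual scene flow (both drives high).
print(supralinear_response(1.0, 1.0))   # 3.5 -> amplified well beyond the sum (2.0)
# Decoupled condition: treadmill motion without matching visual feedback.
print(supralinear_response(1.0, 0.1))   # 1.25 -> close to the linear sum (1.1)
```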
Project description:The retrosplenial cortex (RSC) is an area interconnected with regions of the brain that display spatial correlates. Neurons in these connected regions may encode an animal's position in the environment and its location or proximity relative to objects or boundaries. RSC has also been shown to be important for spatial memory processes such as tracking distances from and between landmarks, contextual information, and orientation within an environment. For these reasons, it is important to determine how neurons in RSC represent cues such as objects or boundaries and their relationship to the environment. In the current work, we performed electrophysiological recordings in RSC while rats foraged in arenas that could contain an object or in which the environment was altered. We report that RSC neurons display changes in mean firing rate in response to alterations of the environment, including rotation of the arena, changes in its size or shape, and the introduction of an object into the arena.
Project description:Spatial navigation requires landmark coding from two perspectives, relying on viewpoint-invariant and self-referenced representations. The brain encodes information within each reference frame, but their interactions and functional dependencies remain unclear. Here we investigate the relationship between neurons in the rat retrosplenial cortex (RSC) and medial entorhinal cortex (MEC) that increase their firing near the boundaries of space. Border cells in RSC specifically encode walls, but not objects, and are sensitive to the animal's direction to nearby borders. These egocentric representations are generated independently of visual or whisker sensation but are affected by inputs from MEC, which contains allocentric spatial cells. Pharmaco- and optogenetic inhibition of MEC led to a disruption of border coding in RSC, but not vice versa, indicating an allocentric-to-egocentric transformation. Finally, RSC border cells fire prospectively, anticipating the animal's next movement, unlike those in MEC, revealing the MEC-RSC pathway as an extended border-coding circuit that implements coordinate transformation to guide navigation behavior.
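As a concrete illustration of the allocentric-to-egocentric transformation described above, the sketch below converts an allocentrically defined wall point into an egocentric distance and bearing given the animal's position and head direction. The function name and the square-arena example are assumptions made for illustration, not the analysis code used in this project.

```python
import numpy as np

def allocentric_to_egocentric(animal_xy, head_direction, boundary_xy):
    """Convert an allocentrically defined boundary point into egocentric
    coordinates: distance to the point and its bearing relative to the
    animal's current heading, wrapped to [-pi, pi)."""
    dx = boundary_xy[0] - animal_xy[0]
    dy = boundary_xy[1] - animal_xy[1]
    distance = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - head_direction
    bearing = (bearing + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return distance, bearing

# Animal at the center of a 1 m square arena, facing "north" (+y direction);
# the nearest point on the east wall lies 0.5 m away, 90 degrees to its right.
dist, ang = allocentric_to_egocentric((0.5, 0.5), np.pi / 2.0, (1.0, 0.5))
print(dist, np.degrees(ang))  # -> 0.5, -90.0
```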
Project description:(1) Background: Humans use reference frames to elaborate the spatial representations needed for all space-oriented behaviors such as postural control, walking, or grasping. We investigated the neural bases of two egocentric tasks, the extracorporeal subjective straight-ahead task (SSA) and the corporeal subjective longitudinal body plane task (SLB), in healthy participants using functional magnetic resonance imaging (fMRI). This work was an ancillary part of a study involving stroke patients. (2) Methods: Seventeen healthy participants underwent a 3T fMRI examination. During the SSA, participants had to divide the extracorporeal space into two equal parts. During the SLB, they had to divide their body along the midsagittal plane. (3) Results: Both tasks elicited a parieto-occipital network encompassing the superior and inferior parietal lobules and the lateral occipital cortex, with right hemispheric dominance. Additionally, the SLB > SSA contrast revealed activations of the left angular and premotor cortices. These areas, which are involved in attention and motor imagery, suggest a greater complexity of the corporeal processes engaging body representation. (4) Conclusions: This was the first fMRI study to explore SLB-related activity and its complementarity with the SSA. Our results pave the way for the exploration of spatial cognitive impairment in patients.
Project description:While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and how this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordforms is challenging because spoken words activate a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region-of-interest (ROI)-based, source-reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To identify wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with word neighbors supported significantly better decoding than training with nonword neighbors in the period immediately following target presentation. Decoding was supported mostly by right-hemisphere regions in the posterior temporal lobe implicated in phonetic and lexical representation. Additionally, neighbors that aligned with target word beginnings (critical for word recognition) supported decoding, whereas equivalent phonological overlap with word codas did not, suggesting lexical mediation. Effective connectivity analyses showed a rich pattern of interaction between ROIs supporting decoding based on training with lexical neighbors, driven especially by the right posterior middle temporal gyrus. Collectively, these results provide evidence for a functional representation of wordforms in the temporal lobes that is isolated from phonemic or semantic representations.
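The following is a minimal sketch of the train-on-neighbors, test-on-targets decoding logic described above, assuming trials have already been reduced to a trials-by-features matrix of source-reconstructed ROI activity; the random placeholder data, variable names, and choice of a regularized linear classifier are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: trials x features (e.g., ROI sources * time points).
X_neighbors = rng.standard_normal((200, 500))  # trials evoked by trained neighbor items
y_neighbors = rng.integers(0, 2, 200)          # which of two target words each neighbor overlaps
X_targets   = rng.standard_normal((80, 500))   # trials evoked by the untrained target words
y_targets   = rng.integers(0, 2, 80)           # identity of the target word on each trial

# Fit on neighbor trials only, then test generalization to the untrained targets;
# above-chance accuracy on the targets is the signature of shared form-based coding.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_neighbors, y_neighbors)
print("target-word decoding accuracy:", clf.score(X_targets, y_targets))
```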
Project description:Movement through space is a fundamental behavior for all animals. Cognitive maps of environments are encoded in the hippocampal formation in an allocentric reference frame, but motor movements that comprise physical navigation are represented within an egocentric reference frame. Allocentric navigational plans must be converted to an egocentric reference frame prior to implementation as overt behavior. Here we describe an egocentric spatial representation of environmental boundaries in the dorsomedial striatum.