Differential hippocampal and retrosplenial involvement in egocentric-updating, rotation, and allocentric processing during online spatial encoding: an fMRI study.
ABSTRACT: The way new spatial information is encoded seems to be crucial in disentangling the roles of key regions within the spatial memory network (e.g., hippocampus, parahippocampal cortex, parietal cortex, retrosplenial cortex). Several data sources converge to suggest that the hippocampus is not always involved in, or indeed necessary for, allocentric processing. Hippocampal involvement in spatial coding could instead reflect the integration of new information generated by "online" self-related changes. In this fMRI study, participants first encoded several object locations in a virtual reality environment and then performed a pointing task. Allocentric encoding was maximized by using a survey perspective and an object-to-object pointing task. Two egocentric encoding conditions were used, each involving self-related changes processed under a first-person perspective and a self-to-object pointing task: the Egocentric-updating condition involved navigation, whereas the Egocentric with rotation only condition involved orientation changes alone. Conjunction analysis of the spatial encoding conditions revealed wide activation of the occipito-parieto-frontal network and several medio-temporal structures. Interestingly, only the cuneal areas were recruited significantly more by allocentric encoding than by the other spatial conditions. Moreover, enhanced hippocampal activation was found during Egocentric-updating encoding, whereas retrosplenial activation was observed during the Egocentric with rotation only condition. Hence, in some circumstances, the hippocampal and retrosplenial structures, known for their involvement in allocentric environmental coding, show preferential involvement in the egocentric coding of space. These results indicate that a raw differentiation between allocentric and egocentric representations is no longer sufficient for understanding the complexity of the mechanisms involved during spatial encoding.
Project description: Updating navigational memories is important for everyday tasks. It was recently found that older adults are impaired in updating spatial representations in small, two-dimensional layouts. Because performance in small-scale areas cannot predict navigational behavior, we investigated how aging affects the updating of navigational memories encoded in large, three-dimensional environments. Moreover, since locations can be encoded relative to the observer (egocentric encoding) or relative to landmarks (allocentric encoding), we tested whether the presumed age-related spatial updating deficit depends on the available spatial cues. By combining whole-body motion tracking with immersive virtual reality, we could dissociate egocentric and allocentric spatial cues and assess navigational memory under ecologically valid conditions (i.e., providing body-based and visual cues). In the task, objects were relocated overnight, and young and older participants had to navigate to the updated locations of the objects. In addition to replicating age-related deficits in allocentric memory, we found age-related impairments in updating navigational memories following egocentric encoding. Finally, older participants exhibited stronger representations of the previous navigational context, which correlated with their spatial updating deficits. Given that these effects may stem from inefficient suppression of former navigational memories, our findings suggest a mechanism that helps explain navigational decline in aging.
Project description:The retrosplenial cortex is reciprocally connected with multiple structures implicated in spatial cognition, and damage to the region itself produces numerous spatial impairments. Here, we sought to characterize spatial correlates of neurons within the region during free exploration in two-dimensional environments. We report that a large percentage of retrosplenial cortex neurons have spatial receptive fields that are active when environmental boundaries are positioned at a specific orientation and distance relative to the animal itself. We demonstrate that this vector-based location signal is encoded in egocentric coordinates, is localized to the dysgranular retrosplenial subregion, is independent of self-motion, and is context invariant. Further, we identify a subpopulation of neurons with this response property that are synchronized with the hippocampal theta oscillation. Accordingly, the current work identifies a robust egocentric spatial code in retrosplenial cortex that can facilitate spatial coordinate system transformations and support the anchoring, generation, and utilization of allocentric representations.
Project description:We present a model of how neural representations of egocentric spatial experiences in parietal cortex interface with viewpoint-independent representations in medial temporal areas, via retrosplenial cortex, to enable many key aspects of spatial cognition. This account shows how previously reported neural responses (place, head-direction and grid cells, allocentric boundary- and object-vector cells, gain-field neurons) can map onto higher cognitive function in a modular way, and predicts new cell types (egocentric and head-direction-modulated boundary- and object-vector cells). The model predicts how these neural populations should interact across multiple brain regions to support spatial memory, scene construction, novelty-detection, 'trace cells', and mental navigation. Simulated behavior and firing rate maps are compared to experimental data, for example showing how object-vector cells allow items to be remembered within a contextual representation based on environmental boundaries, and how grid cells could update the viewpoint in imagery during planning and short-cutting by driving sequential place cell activity.
Project description: Deficits in amnesic patients suggest that spatial cognition and episodic memory are intimately related. Among the different types of spatial processing, allocentric processing, relying on the hippocampal formation, and egocentric-updating, relying on parieto-temporal connections, have both been considered to functionally underlie episodic memory encoding and retrieval. We explored the cerebral correlates underlying the episodic retrieval of words previously learnt outside the magnet while performing different spatial processes (allocentric and egocentric-updated). Subsequently, during fMRI, participants performed an episodic word recognition task. Data processing revealed that the correct recognition of words learnt in the egocentric-updated condition enhanced activity in the medial and lateral parietal, as well as temporal, cortices. No additional regions were activated in the present study by retrieving words learnt in the allocentric condition. This study sheds new light on the functional links between episodic memory and spatial processing: the temporo-parietal network is confirmed to be crucial for episodic memory in healthy participants and could be linked to the egocentric-updated process.
Project description:Movement through space is a fundamental behavior for all animals. Cognitive maps of environments are encoded in the hippocampal formation in an allocentric reference frame, but motor movements that comprise physical navigation are represented within an egocentric reference frame. Allocentric navigational plans must be converted to an egocentric reference frame prior to implementation as overt behavior. Here we describe an egocentric spatial representation of environmental boundaries in the dorsomedial striatum.
Project description: Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
Project description: Spatial navigation requires landmark coding from two perspectives, relying on viewpoint-invariant and self-referenced representations. The brain encodes information within each reference frame, but their interactions and functional dependency remain unclear. Here we investigate the relationship between neurons in the rat's retrosplenial cortex (RSC) and medial entorhinal cortex (MEC) that increase firing near boundaries of space. Border cells in RSC specifically encode walls, but not objects, and are sensitive to the animal's direction to nearby borders. These egocentric representations are generated independently of visual or whisker sensation but are affected by inputs from MEC, which contains allocentric spatial cells. Pharmaco- and optogenetic inhibition of MEC led to a disruption of border coding in RSC, but not vice versa, indicating an allocentric-to-egocentric transformation. Finally, RSC border cells fire prospectively to the animal's next motion, unlike those in MEC, revealing the MEC-RSC pathway as an extended border-coding circuit that implements coordinate transformation to guide navigation behavior.
Project description:Episodic memory, the conscious recollection of past events, is typically experienced from a first-person (egocentric) perspective. The hippocampus plays an essential role in episodic memory and spatial cognition. Although the allocentric nature of hippocampal spatial coding is well understood, little is known about whether the hippocampus receives egocentric information about external items. We recorded in rats the activity of single neurons from the lateral entorhinal cortex (LEC) and medial entorhinal cortex (MEC), the two major inputs to the hippocampus. Many LEC neurons showed tuning for egocentric bearing of external items, whereas MEC cells tended to represent allocentric bearing. These results demonstrate a fundamental dissociation between the reference frames of LEC and MEC neural representations.
Project description: Different reference frames are used in daily life in order to structure the environment. The two-choice Simon task setting has been used to investigate how task-irrelevant spatial information influences human cognitive control. In recent studies, a Go/NoGo Simon task setting was used in order to divide the Simon task between a pair of participants. Yet not only a human co-actor but even an attention-grabbing object can provide a sufficient reference to reintroduce a Simon effect (SE), indicating cognitive conflict in Go/NoGo task settings. Interestingly, the SE could only occur when a reference point outside of the stimulus setup was available. The current studies exploited the dependency between different spatial reference frames (egocentric and allocentric) offered by the stimulus setup itself and the task setup (individual vs. joint Go/NoGo task setting). Two studies (Experiments 1 and 2) were carried out with a human co-actor; Experiment 3 used an attention-grabbing object instead. The egocentric and allocentric SEs triggered by different features of the stimulus setup (global vs. local) were modulated by the task setup. When interacting with a human co-actor, an egocentric SE was found for global features of the stimulus setup (i.e., stimulus position on the screen). In contrast, an allocentric SE emerged in the individual task setup, illustrating the relevance of more local features of the stimulus setup (i.e., the manikin's ball position). These results point toward salience shifts between different spatial reference frames depending on the nature of the task setup.
Project description: Sleep facilitates the consolidation (i.e., enhancement) of simple, explicit (i.e., conscious) motor sequence learning (MSL). MSL can be dissociated into egocentric (i.e., motor) or allocentric (i.e., spatial) frames of reference. The consolidation of the allocentric memory representation is sleep-dependent, whereas the egocentric consolidation process is independent of sleep or wake for explicit MSL. However, the extent to which sleep contributes to the consolidation of implicit (i.e., unconscious) MSL remains unclear, as does the question of which aspects of the memory representation (egocentric, allocentric) are consolidated by sleep. Here, we investigated the extent to which sleep is involved in consolidating implicit MSL: specifically, whether the egocentric or the allocentric cognitive representations of a learned sequence are enhanced by sleep, and whether these changes support the development of explicit sequence knowledge across sleep but not wake. Our results indicate that egocentric and allocentric representations can be behaviorally dissociated for implicit MSL. Neither representation was preferentially enhanced across sleep, nor was any development of explicit awareness observed. However, after a 1-week interval, performance enhancement was observed in the egocentric representation. Taken together, these results suggest that, like explicit MSL, implicit MSL has dissociable allocentric and egocentric representations; but unlike explicit sequence learning, implicit egocentric and allocentric memory consolidation is independent of sleep, and the time course of consolidation differs significantly.