A time-stamp mechanism may provide temporal information necessary for egocentric-to-allocentric spatial transformations.
ABSTRACT: Learning the spatial organization of the environment is essential for most animals' survival. This requires the animal to derive allocentric spatial information from egocentric sensory and motor experience. The neural mechanisms underlying this transformation are mostly unknown. We addressed this problem in electric fish, which can precisely navigate in complete darkness and whose brain circuitry is relatively simple. We conducted the first neural recordings in the preglomerular complex, the thalamic region that exclusively connects the optic tectum with the spatial-learning circuits in the dorsolateral pallium. While tectal topographic information was mostly eliminated in preglomerular neurons, the time intervals between object encounters were precisely encoded. We show that this reliable temporal information, combined with a speed signal, can permit accurate estimation of the distance between encounters, a necessary component of path integration that enables computing allocentric spatial relations. Our results suggest that similar mechanisms are involved in sequential spatial learning in all vertebrates.
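The proposed computation reduces to integrating a speed signal over the encoded time interval between encounters. The following minimal sketch illustrates that arithmetic; all numbers and variable names are hypothetical illustrations, not data or code from the study.

```python
import numpy as np

# Distance between two object encounters estimated from the encoded time
# interval and a noisy speed signal (all values are hypothetical).
rng = np.random.default_rng(0)
dt = 0.02                                 # sampling step (s), assumed
speed = 12.0 + rng.normal(0.0, 1.0, 200)  # swim-speed samples (cm/s), assumed
interval = len(speed) * dt                # encoded time between encounters (s)

# Path integration: accumulate speed over the inter-encounter interval.
distance = np.sum(speed) * dt

# With roughly constant speed this reduces to mean speed x interval, so a
# precise interval code plus a speed signal suffices for distance estimation.
print(f"{distance:.1f} cm ~ {np.mean(speed) * interval:.1f} cm over {interval:.1f} s")
```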
Project description: Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, whether due to occlusion, our own movements, or the transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. The landmark did not need to remain visible throughout the memory delay period, but it had to be visible both during encoding and during the response. We present a simple model that accurately captures human performance by treating relative (allocentric) spatial information as an independent localization estimate that degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue is likely to be critical for spatial localization in natural settings, which contain an abundance of visual landmarks.
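The optimal integration described above corresponds to standard inverse-variance cue combination. Below is a minimal sketch under our own simplifying assumptions (allocentric uncertainty grows linearly with distance from the landmark; all parameter values are hypothetical, not the paper's fitted model).

```python
# Inverse-variance (statistically optimal) fusion of an egocentric estimate
# with a landmark-relative (allocentric) estimate whose reliability degrades
# with distance from the landmark. Parameters are illustrative assumptions.
def fuse(ego_est, ego_var, allo_est, dist_to_landmark, allo_var0=0.5, k=0.1):
    allo_var = allo_var0 + k * dist_to_landmark  # assumed degradation rule
    w_ego = (1.0 / ego_var) / (1.0 / ego_var + 1.0 / allo_var)
    fused = w_ego * ego_est + (1.0 - w_ego) * allo_est
    fused_var = 1.0 / (1.0 / ego_var + 1.0 / allo_var)  # never worse than either cue
    return fused, fused_var

# Near the landmark the allocentric cue dominates and precision improves;
# far from it the fused estimate falls back on the egocentric cue alone.
print(fuse(ego_est=10.0, ego_var=2.0, allo_est=9.0, dist_to_landmark=1.0))
print(fuse(ego_est=10.0, ego_var=2.0, allo_est=9.0, dist_to_landmark=50.0))
```

Because the weights are reciprocal variances, adding the allocentric cue can only decrease the fused variance, consistent with the spatially selective improvement near the landmark.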
Project description: Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training procedure and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (an auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared with untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
Project description: Spatial navigation requires landmark coding from two perspectives, relying on viewpoint-invariant and self-referenced representations. The brain encodes information within each reference frame, but their interactions and functional dependencies remain unclear. Here we investigate the relationship between neurons in the rat's retrosplenial cortex (RSC) and medial entorhinal cortex (MEC) that increase their firing near the boundaries of space. Border cells in RSC specifically encode walls, but not objects, and are sensitive to the animal's direction to nearby borders. These egocentric representations are generated independently of visual or whisker sensation but are affected by inputs from MEC, which contains allocentric spatial cells. Pharmacological and optogenetic inhibition of MEC led to a disruption of border coding in RSC, but not vice versa, indicating an allocentric-to-egocentric transformation. Finally, RSC border cells fire prospectively, anticipating the animal's next movement, unlike those in MEC, revealing the MEC-RSC pathway as an extended border-coding circuit that implements coordinate transformation to guide navigation behavior.
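The allocentric-to-egocentric transformation implied by this pathway can be written down compactly: given an allocentric position and head direction (the kind of signal MEC carries), the egocentric distance and bearing of the nearest wall (the variables an RSC border cell is tuned to) follow from a change of reference frame. A minimal sketch for a rectangular arena, with geometry and names of our own choosing:

```python
import numpy as np

# Convert an allocentric state (position + head direction) into the egocentric
# distance and bearing of the nearest wall of a rectangular arena.
# Arena geometry and variable names are illustrative assumptions.
def nearest_wall_egocentric(x, y, head_dir, arena=(0.0, 0.0, 100.0, 100.0)):
    x0, y0, x1, y1 = arena
    # (distance, allocentric direction) for each of the four walls
    walls = [(x - x0, np.pi),       # west
             (x1 - x, 0.0),         # east
             (y - y0, -np.pi / 2),  # south
             (y1 - y, np.pi / 2)]   # north
    dist, allo_dir = min(walls, key=lambda w: w[0])
    # Egocentric bearing = allocentric wall direction minus head direction,
    # wrapped to [-pi, pi); positive means the wall lies to the animal's left.
    ego_bearing = (allo_dir - head_dir + np.pi) % (2 * np.pi) - np.pi
    return dist, ego_bearing

# Animal 10 cm from the east wall, facing north: wall at bearing -pi/2 (right).
print(nearest_wall_egocentric(x=90.0, y=50.0, head_dir=np.pi / 2))
```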
Project description: Sleep facilitates the consolidation (i.e., enhancement) of simple, explicit (i.e., conscious) motor sequence learning (MSL). MSL can be dissociated into egocentric (i.e., motor) and allocentric (i.e., spatial) frames of reference. For explicit MSL, consolidation of the allocentric memory representation is sleep-dependent, whereas the egocentric consolidation process proceeds independently of sleep or wake. However, the extent to which sleep contributes to the consolidation of implicit (i.e., unconscious) MSL remains unclear, nor is it known which aspects of the memory representation (egocentric, allocentric) are consolidated by sleep. Here, we investigated the extent to which sleep is involved in consolidating implicit MSL; specifically, whether the egocentric or the allocentric cognitive representation of a learned sequence is enhanced by sleep, and whether these changes support the development of explicit sequence knowledge across sleep but not wake. Our results indicate that egocentric and allocentric representations can be behaviorally dissociated for implicit MSL. Neither representation was preferentially enhanced across sleep, nor was the development of explicit awareness observed. However, after a 1-week interval, performance enhancement was observed in the egocentric representation. Taken together, these results suggest that, like explicit MSL, implicit MSL has dissociable allocentric and egocentric representations, but that unlike explicit sequence learning, implicit egocentric and allocentric memory consolidation is independent of sleep, and its time course differs significantly.
Project description: A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound-source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate-frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound-source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound-source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound-source location relative to ourselves, but that a minority of cells can represent sound location in the world independent of our own position.
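The freely moving design resolves the ambiguity because only one of the two frames keeps the stimulus coordinate constant as the head moves. A minimal sketch of the underlying geometry (all values hypothetical):

```python
import numpy as np

# Re-express a world-frame (allocentric) source azimuth relative to the head.
# An egocentric unit's tuning is stable in head-centred coordinates across
# head movements; an allocentric unit's tuning is stable in world coordinates.
def head_centred_azimuth(source_azimuth_world, head_direction_world):
    rel = source_azimuth_world - head_direction_world
    return (rel + 180.0) % 360.0 - 180.0  # wrap to [-180, 180) degrees

head_dirs = np.array([0.0, 45.0, 90.0])  # head direction on three trials (deg)
source = 30.0                            # fixed world-frame speaker azimuth (deg)

# The same world-frame source maps to different head-centred azimuths, so the
# two coordinate frames make different predictions once the head moves.
print([head_centred_azimuth(source, h) for h in head_dirs])  # [30.0, -15.0, -60.0]
```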
Project description: The way new spatial information is encoded seems to be crucial in disentangling the roles of the decisive regions within the spatial memory network (i.e., hippocampus, parahippocampal, parietal, retrosplenial,…). Several data sources converge to suggest that the hippocampus is not always involved in, or indeed necessary for, allocentric processing. Hippocampal involvement in spatial coding could instead reflect the integration of new information generated by "online" self-related changes. In this fMRI study, participants first encoded several object locations in a virtual reality environment and then performed a pointing task. Allocentric encoding was maximized by using a survey perspective and an object-to-object pointing task. Two egocentric encoding conditions were used, involving self-related changes processed under a first-person perspective and implicating a self-to-object pointing task: the Egocentric-updating condition involved navigation, whereas the Egocentric with rotation only condition involved orientation changes only. Conjunction analysis of the spatial encoding conditions revealed wide activation of the occipito-parieto-frontal network and several medial temporal structures. Interestingly, only the cuneal areas were recruited significantly more by allocentric encoding than by the other spatial conditions. Moreover, enhanced hippocampal activation was found during Egocentric-updating encoding, whereas retrosplenial activation was observed during the Egocentric with rotation only condition. Hence, in some circumstances, hippocampal and retrosplenial structures, known for their involvement in allocentric environmental coding, demonstrate preferential involvement in the egocentric coding of space. These results indicate that a raw differentiation between allocentric and egocentric representations no longer seems sufficient to capture the complexity of the mechanisms involved in spatial encoding.
Project description: Different reference frames are used in daily life to structure the environment. The two-choice Simon task has been used to investigate how task-irrelevant spatial information influences human cognitive control. In recent studies, a Go/NoGo Simon task was used to divide the Simon task between a pair of participants. Yet not only a human co-actor but even an attention-grabbing object can provide a sufficient reference to reintroduce a Simon effect (SE), indicating cognitive conflict in Go/NoGo task settings. Interestingly, the SE occurred only when a reference point outside of the stimulus setup was available. The current studies exploited the dependency between the different spatial reference frames (egocentric and allocentric) offered by the stimulus setup itself and the task setup (individual vs. joint Go/NoGo task setting). Two studies (Experiments 1 and 2) were carried out with a human co-actor; Experiment 3 used an attention-grabbing object instead. The egocentric and allocentric SEs triggered by different features of the stimulus setup (global vs. local) were modulated by the task setup. When interacting with a human co-actor, an egocentric SE was found for global features of the stimulus setup (i.e., stimulus position on the screen). In contrast, an allocentric SE emerged in the individual task setup, illustrating the relevance of more local features of the stimulus setup (i.e., the manikin's ball position). The results point toward salience shifts between different spatial reference frames depending on the nature of the task setup.
Project description: Prior studies have shown that spatial cognition is influenced by stress experienced before a task. The current study investigated the effects of real-time acute stress on allocentric and egocentric spatial processing. A virtual reality-based spatial reference rule learning (SRRL) task was designed in which participants were instructed to make a location selection by walking to one of three poles situated around a tower. A selection was reinforced by either an egocentric spatial reference rule (leftmost or rightmost pole relative to the participant) or an allocentric spatial reference rule (nearest or farthest pole relative to the tower). In Experiment 1, 32 participants (16 males, 16 females; aged 18 to 27) performed the SRRL task in a normal virtual reality environment (VRE). Hit rates and rule acquisition revealed no difference between allocentric and egocentric spatial reference rule learning. In Experiment 2, 64 participants (32 males, 32 females; aged 19 to 30) performed the SRRL task in both a low-stress VRE (a mini virtual arena) and a high-stress VRE (the same arena with a fire disaster). Allocentric references facilitated learning in the high-stress VRE. The results suggest that acute stress facilitates allocentric spatial processing.
Project description: The main goal of our study was to gain insight into the reference frames involved in three-dimensional haptic spatial processing. Previous research has shown that two-dimensional haptic spatial processing is prone to large systematic deviations. A weighted-average model, which locates the origin of the systematic error patterns in the biasing influence of an egocentric reference frame on the allocentric reference frame, was proposed to explain those results, with the egocentric reference frame anchored either to the hand or to the body. In the present study, participants had to construct a field of parallel bars that could be oriented in three dimensions. First, systematic error patterns were also found in this three-dimensional haptic parallelity task. Second, among the models tested for their accuracy in explaining the error patterns, the hand-centered weighted-average model most closely resembled the data. A participant-specific weighting factor determined the biasing influence of the hand-centered egocentric reference frame: a shift of approximately 20% from the allocentric towards the egocentric frame of reference was observed. These results support the hypothesis that haptic spatial processing is the product of the interplay of diverse but synergistically operating frames of reference.
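A weighted-average model of this kind has a direct algebraic form: the produced orientation is pulled from the allocentric target orientation towards the same orientation expressed in the hand-centered frame, with the observed ~20% shift as the weight. A minimal sketch, with variable names and the frame offset as our own illustrative assumptions:

```python
# Hand-centered weighted-average model: the produced bar orientation deviates
# from veridical (allocentric) parallelity towards the egocentric frame.
# w_ego = 0.20 reflects the ~20% shift reported above; other values are
# illustrative assumptions.
def produced_orientation(target_allo_deg, hand_frame_offset_deg, w_ego=0.20):
    # Target orientation as it appears in the hand-centered (egocentric) frame.
    target_ego_deg = target_allo_deg - hand_frame_offset_deg
    # Weighted average: w_ego = 0 gives veridical, purely allocentric matching.
    return (1.0 - w_ego) * target_allo_deg + w_ego * target_ego_deg

# A 40-degree rotation of the hand frame biases the response by
# w_ego * 40 = 8 degrees away from true parallelity (90 -> 82 degrees).
print(produced_orientation(target_allo_deg=90.0, hand_frame_offset_deg=40.0))
```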
Project description: Individuals with autism spectrum disorder (ASD) have difficulty forming relations among items and context. This capacity for relational binding is also involved in spatial navigation, yet research on this topic in ASD is scarce and inconclusive. In a computerised version of the Morris Water Maze task, participants with ASD showed particular difficulty with viewpoint-independent (allocentric) navigation, while viewpoint-dependent (egocentric) navigation remained intact. Further analyses showed that the navigation deficits were not related to poor visual short-term memory or mental rotation in the ASD group. The results further confirm the need of autistic individuals for support at retrieval and have important implications for the design of signposts and maps.