Real world navigation independence in the early blind correlates with differential brain activity associated with virtual navigation.
ABSTRACT: Navigating is a complex cognitive task that places high demands on spatial abilities, particularly in the absence of sight. Significant advances have been made in identifying the neural correlates associated with various aspects of this skill; however, how the brain is able to navigate in the absence of visual experience remains poorly understood. Furthermore, how neural network activity relates to the wide variability in navigational independence and skill in the blind population is also unknown. Using functional magnetic resonance imaging, we investigated the neural correlates of audio-based navigation within a large-scale indoor virtual environment in early profoundly blind participants with differing levels of spatial navigation independence (assessed by the Santa Barbara Sense of Direction scale). Performing path integration tasks in the virtual environment was associated with activation within areas of a core network implicated in navigation. Furthermore, we found a positive relationship between Santa Barbara Sense of Direction scores and activation within the right temporal parietal junction during the planning and execution phases of the task. These findings suggest that differential navigational ability in the blind may be related to the utilization of different brain network structures. Further characterization of the factors that influence network activity may have important implications regarding how this skill is taught in the blind community.
Project description:'Turn slightly to the left,' the navigation system announces, with the aim of directing a blind user to merge into a corridor. Yet, due to a long reaction time, the user turns too late and proceeds into the wrong hallway. Observations of such user behavior in real-world navigation settings motivate us to study the manner in which blind users react to the instructional feedback of a turn-by-turn guidance system. We found little previous work analyzing the extent of the variability among blind users in reaction to different instructional guidance during assisted navigation. To gain insight into how navigational interfaces can be better designed to accommodate the information needs of different users, we conduct a data-driven analysis of reaction variability as defined by motion and timing measures. Based on continuously tracked user motion during real-world navigation with a deployed system, we find significant variability between users in their reaction characteristics. Specifically, the statistical analysis reveals significant variability during the crucial elements of the navigation (e.g., turning and encountering obstacles). With the end-user experience in mind, we identify the need to not only adjust interface timing and content to each user's personal walking pace, but also to their individual navigation skill and style. The design implications of our study inform the development of assistive systems which consider such user-specific behavior to ensure successful navigation.
Project description:Successful navigation involves finding the way, planning routes, and avoiding collisions. Whilst previous research has shown that people can navigate using non-visual cues, it is not clear to what degree learned non-visual navigational abilities generalise to 'new' environments. Furthermore, the ability to successfully avoid collisions has not been investigated separately from the ability to perceive spatial layout or to orient oneself in space. Here, we address these important questions using a virtual echolocation paradigm in sighted people. Fourteen sighted blindfolded participants completed 20 virtual navigation training sessions over the course of 10 weeks. In separate sessions, before and after training, we also tested their ability to perceive the spatial layout of virtual echo-acoustic space. Furthermore, three blind echolocation experts completed the tasks without training, thus validating our virtual echo-acoustic paradigm. We found that over the course of 10 weeks sighted people became better at navigating, i.e. they reduced collisions and the time needed to complete the route, and increased success rates. This also generalised to 'new' (i.e. untrained) virtual spaces. In addition, after training, their ability to judge spatial layout was better than before training. The data suggest that participants acquired a 'true' sensory-driven navigational ability using echo-acoustics. In addition, we show that people not only developed navigational skills related to avoidance of collisions and finding safe passage, but also processes related to spatial perception and orienting. In sum, our results provide strong support for the idea that navigation is a skill which people can achieve via various modalities, in this case echolocation.
Project description:<h4>Introduction</h4>Spatial navigation is a complex cognitive skill that varies between individuals, and the mechanisms underlying this variability are not clear. Studying simpler components of spatial navigation may help illuminate factors that contribute to variation in this complex skill; path integration is one such component. Optic flow provides self-motion information while moving through an environment and is sufficient for path integration. This study aims to investigate whether self-reported navigation ability is related to information transfer between optic flow-sensitive (OF-sensitive) cortical regions and regions important to navigation during environmental spatial tasks.<h4>Methods</h4>Functional magnetic resonance imaging was used to define OF-sensitive regions and map their functional connectivity (FC) with the retrosplenial cortex and hippocampus during visual path integration (VPI) and turn counting (TC) tasks. Both tasks presented visual self-motion through a real-world environment. Correlation analyses testing the predicted positive association between self-reported navigation ability (measured with the Santa Barbara Sense of Direction scale) and FC strength between OF-sensitive regions and the retrosplenial cortex, and between OF-sensitive regions and the hippocampus, were performed.<h4>Results</h4>During VPI, FC strength between the left cingulate sulcus visual area (L CSv) and right retrosplenial cortex, and between L CSv and right hippocampus, was positively associated with self-reported navigation ability. FC strength between the right cingulate sulcus visual area (R CSv) and right retrosplenial cortex during VPI was also positively associated with self-reported navigation ability.
These relationships were specific to VPI, and whole-brain exploratory analyses corroborated these results.<h4>Conclusions</h4>These findings support the hypothesis that perceived spatial navigation ability is associated with communication strength between OF-sensitive and navigationally relevant regions during visual path integration, which may represent the transformation accuracy of visual motion information into internal spatial representations. More broadly, these results illuminate underlying mechanisms that may explain some variability in spatial navigation ability.
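The group-level analysis described above, correlating per-participant FC strength with Santa Barbara Sense of Direction scores, can be sketched as follows. This is an illustrative toy example with simulated data; the sample size, effect size, and variable names are our assumptions, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: per-participant SBSOD scores and FC strength
# (e.g., a Fisher z-transformed correlation between L CSv and right
# retrosplenial time series during the VPI task). Values are simulated
# with a built-in positive association for illustration only.
n = 30
sbsod = rng.uniform(1, 7, n)                       # SBSOD scale scores
fc_strength = 0.1 * sbsod + rng.normal(0, 0.2, n)  # simulated FC values

# Test the predicted positive association (one-tailed Pearson correlation)
r, p_two_tailed = stats.pearsonr(sbsod, fc_strength)
p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2
print(f"r = {r:.2f}, one-tailed p = {p_one_tailed:.4f}")
```

In the actual study such a correlation would be computed per region pair and corrected for multiple comparisons; the sketch shows only the core test for one pair.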
Project description:In the past 20 years, many studies in the cognitive neurosciences have analyzed human ability to navigate in recently learned and familiar environments by investigating the cognitive processes involved in successful navigation. In this study, we reviewed the main experimental paradigms and conducted a cognitively oriented meta-analysis of fMRI studies of human navigation to underline the importance of the experimental designs and cognitive tasks used to assess navigational skills. We performed a general activation likelihood estimation (ALE) meta-analysis of 66 fMRI experiments to identify the neural substrates underpinning general aspects of human navigation. Four individual ALE analyses were performed to identify the neural substrates of different experimental paradigms (i.e., familiar vs. recently learned environments) and different navigational strategies (allocentric vs. egocentric). Results of the general ALE analysis highlighted a wide network of areas with clusters in the occipital, parietal, frontal and temporal lobes, especially in the parahippocampal cortex. Familiar environments seem to be processed by an extended temporal-frontal network, whereas recently learned environments require activation in the parahippocampal cortex and the parietal and occipital lobes. The allocentric strategy is subtended by the same areas as the egocentric strategy, but the latter elicits greater activation in the right precuneus, middle occipital lobe and angular gyrus. Our results suggest that different neural correlates are involved in recalling a well-learned or recently acquired environment and that different networks of areas subtend egocentric and allocentric strategies.
Project description:Older adults have difficulties in navigating unfamiliar environments and updating their wayfinding behavior when faced with blocked routes. This decline in navigational capabilities has traditionally been ascribed to memory impairments and dysexecutive function, whereas the impact of visual aging has often been overlooked. The ability to perceive visuospatial information such as salient landmarks is essential to navigating efficiently. To date, the functional and neurobiological factors underpinning landmark processing in aging remain insufficiently characterized. To address this issue, functional magnetic resonance imaging (fMRI) was used to investigate the brain activity associated with landmark-based navigation in young and healthy older participants. The performances of 25 young adults (mean age = 25.4 years, SD = 2.7; seven females) and 17 older adults (mean age = 73.0 years, SD = 3.9; 10 females) were assessed in a virtual-navigation task in which they had to orient using salient landmarks. The underlying whole-brain patterns of activity as well as the functional roles of specific cerebral regions involved in landmark processing, namely the parahippocampal place area (PPA), the occipital place area (OPA), and the retrosplenial cortex (RSC), were analyzed. Older adults' navigational abilities were overall diminished compared to young adults. Also, the two age groups relied on distinct navigational strategies to solve the task. Better performances during landmark-based navigation were associated with increased neural activity in an extended neural network comprising several cortical and cerebellar regions. Direct comparisons between age groups revealed that young participants had greater anterior temporal activity. Also, only young adults showed significant activity in occipital areas corresponding to the cortical projection of the central visual field during landmark-based navigation.
The region-of-interest analysis revealed increased OPA activation in older adult participants during the landmark condition. There were no significant between-group differences in PPA and RSC activations. These preliminary results hint at the possibility that aging diminishes fine-grained information processing in occipital and temporal regions, thus hindering the capacity to use landmarks adequately for navigation. Bearing in mind its exploratory nature, this work contributes to a better comprehension of the neural dynamics subtending landmark-based navigation and provides new insights into the impact of age-related visuospatial processing differences on navigation capabilities.
Project description:A computational neural model that describes the competing roles of the basal ganglia and hippocampus in spatial navigation is presented. Model performance is evaluated on a simulated Morris water maze explored by a model rat. Cue-based and place-based navigational strategies, thought to be subserved by the basal ganglia and hippocampus respectively, are described. In cue-based navigation, the model rat learns to head directly towards a visible target, while in place-based navigation the target position is represented in terms of spatial context provided by an array of poles placed around the pool. Learning is formulated within the framework of reinforcement learning, with the nigrostriatal dopamine signal playing the role of the temporal difference error. Navigation inherently involves two apparently contradictory movements: goal-oriented movements vs. random, wandering movements. The model hypothesizes that while goal-directedness is determined by the gradient in the value function, randomness is driven by the complex activity of the subthalamic nucleus (STN)-globus pallidus externa (GPe) system. Each navigational system is associated with a Critic, prescribing actions that maximize value gradients for the corresponding system. In the integrated system, which incorporates both cue-based and place-based forms of navigation, navigation at a given position is determined by the system whose value function is greater at that position. The proposed model describes the experimental results of a lesion study that investigates the competition between cue-based and place-based navigational systems. The present study also examines impaired navigational performance under Parkinsonian-like conditions. The integrated navigational system, operated under dopamine-deficient conditions, exhibits increased escape latency, as was observed in the experimental literature describing MPTP model rats navigating a water maze.
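The value-gradient mechanism can be illustrated with a minimal TD(0) sketch on a 1-D track. This is our toy reduction, not the authors' model: the epsilon-random steps merely stand in for the STN-GPe-driven exploration, and the reward stands in for the dopamine signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "water maze": states 0..9, escape platform at state 9.
n_states, goal = 10, 9
V = np.zeros(n_states)          # learned state values
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s):
    """Move along the value gradient; explore randomly with probability eps."""
    up, down = min(s + 1, goal), max(s - 1, 0)
    if rng.random() < eps or V[up] == V[down]:
        return int(rng.choice([up, down]))   # wandering (STN-GPe stand-in)
    return up if V[up] > V[down] else down   # goal-directed (value gradient)

latencies = []
for episode in range(200):
    s, t = 0, 0
    while s != goal:
        s2 = step(s)
        r = 1.0 if s2 == goal else 0.0       # reward on reaching the platform
        V[s] += alpha * (r + gamma * V[s2] - V[s])  # TD-error update
        s, t = s2, t + 1
    latencies.append(t)

print(latencies[0], latencies[-1])  # escape latency tends to shrink
```

As the value gradient forms from the goal backwards, greedy steps dominate and the escape latency drops, mirroring the intact-system behavior the model describes; weakening the TD update (the dopamine-deficient condition) would slow this convergence.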
Project description:<h4>Introduction</h4>Navigation is a fundamental and multidimensional cognitive function that individuals rely on to move around the environment. In this study, we investigated the neural basis of human spatial navigation ability.<h4>Methods</h4>A large cohort of participants (<i>N </i>> 200) was examined behaviorally on navigation ability, and structural and functional magnetic resonance imaging (MRI) was then used to explore the corresponding neural basis of spatial navigation.<h4>Results</h4>The gray matter volume (GMV) of the bilateral parahippocampus (PHG), retrosplenial complex (RSC), entorhinal cortex (EC), hippocampus (HPC), and thalamus (THAL) was correlated with the participants' self-reported navigational ability in general, and their sense of direction in particular. Further fMRI studies showed that the PHG, RSC, and EC selectively responded to visually presented scenes, whereas the HPC and THAL showed no selectivity, suggesting a functional division of labor among these regions in spatial navigation. The resting-state functional connectivity analysis further revealed a hierarchical neural network for navigation constituted by these regions, which can be further categorized into three relatively independent components (i.e., a scene recognition component, a cognitive map component, and a component of heading direction for locomotion, respectively).<h4>Conclusions</h4>Our study combined multi-modality imaging data to illustrate that multiple brain regions may work collaboratively to extract, integrate, store, and orient spatial information to guide navigation behaviors.
Project description:The navigational ability of birds has been a focus of popular and scientific interest for centuries, but relatively little is known about the neuronal networks that support avian navigation. Regions such as the piriform cortex, olfactory bulbs, hippocampal formation, vestibular nuclei, and the wulst are among the brain regions often discussed as involved in avian navigation. However, despite a large literature showing a prominent role of some anterior and dorsal thalamic nuclei in mammalian spatial navigation, little is known about the role of the thalamus in avian navigation. Here, we analyzed a possible role of the dorsal anterior thalamic nuclei in avian navigation by combining olfactory manipulations during the transport of young homing pigeons to a release site with c-Fos immunohistochemistry for mapping brain activity. The results reveal that odor-modulated neurons in the avian dorsolateral lateral (DLL) subdivision of the anterior thalamic nuclei are actively involved in processing outward-journey navigational information. Outward-journey information is used by pigeons to correctly determine the homeward direction. DLL participation in acquiring path-based information, and its modulation by olfactory exposure, broadens our understanding of the neural pathways underlying avian navigation.
Project description:Spatial navigation is a universal behavior that varies depending on goals, experience and available sensory stimuli. Spatial navigational tasks are routinely used to study learning, memory and goal-directed behavior, in both animals and humans. One popular paradigm for testing spatial memory is the Morris water maze, where subjects learn the location of a hidden platform that offers escape from a pool of water. Researchers typically express learning as a function of the latency to escape, though this reveals little about the underlying navigational strategies. Recently, a number of studies have begun to classify water maze search strategies in order to clarify the precise spatial and mnemonic functions of different brain regions, and to identify which aspects of spatial memory are disrupted in disease models. However, despite their usefulness, strategy analyses have not been widely adopted due to the lack of software to automate analyses. To address this need we developed Pathfinder, an open source application for analyzing spatial navigation behaviors. In a representative dataset, we show that Pathfinder effectively characterizes the development of highly-specific spatial search strategies as male and female mice learn a standard spatial water maze. Pathfinder can read data files from commercially- and freely-available software packages, is optimized for classifying search strategies in water maze paradigms, and can also be used to analyze 2D navigation by other species, and in other tasks, as long as timestamped xy coordinates are available. Pathfinder is simple to use, can automatically determine pool and platform geometry, generates heat maps, analyzes navigation with respect to multiple goal locations, and can be updated to accommodate future developments in spatial behavioral analyses. 
Given these features, Pathfinder may be a useful tool for studying how navigational strategies are regulated by the environment, depend on specific neural circuits, and are altered by pathology.
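A strategy analysis of the kind Pathfinder automates starts from timestamped x-y coordinates. Below is a minimal sketch of two measures such a classifier might use, path length and the fraction of samples inside a goal-directed corridor; the function, names, and threshold are our illustration, not Pathfinder's actual algorithm:

```python
import math

def path_metrics(track, platform, corridor_halfwidth=0.3):
    """track: list of (t, x, y) samples; platform: (x, y) goal location.

    Returns (total path length, fraction of samples whose bearing from the
    start lies within +/- corridor_halfwidth radians of the start->platform
    axis). A high corridor fraction suggests a direct, spatially guided swim.
    """
    x0, y0 = track[0][1], track[0][2]
    heading = math.atan2(platform[1] - y0, platform[0] - x0)

    # Total distance travelled between consecutive samples
    length = 0.0
    for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]):
        length += math.hypot(x2 - x1, y2 - y1)

    # Count samples inside the goal-directed corridor
    inside = 0
    for _, x, y in track:
        ang = math.atan2(y - y0, x - x0) - heading
        ang = math.atan2(math.sin(ang), math.cos(ang))  # wrap to [-pi, pi]
        if abs(ang) <= corridor_halfwidth:
            inside += 1
    return length, inside / len(track)

# A straight swim from (0, 0) toward a platform at (1, 1), sampled at 0.5 s
track = [(i * 0.5, i * 0.1, i * 0.1) for i in range(11)]
length, frac = path_metrics(track, platform=(1.0, 1.0))
print(f"path length = {length:.2f}, corridor fraction = {frac:.2f}")
```

Thresholding such measures per trial (e.g., short path and high corridor fraction implies a direct-swim strategy) is the general idea behind automated water-maze strategy classification.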
Project description:Human navigation is generally believed to rely on two types of strategy adoption, route-based and map-based strategies. Both types of navigation require making spatial decisions along the traversed way, although formal computational and neural links between navigational strategies and mechanisms of value-based decision making have so far been underexplored in humans. Here we employed functional magnetic resonance imaging (fMRI) while subjects located different objects in a virtual environment. We then modelled their paths using reinforcement learning (RL) algorithms, which successfully explained decision behavior and its neural correlates. Our results show that subjects used a mixture of route- and map-based navigation and their paths could be well explained by model-free and model-based RL algorithms. Furthermore, the value signals of model-free choices during route-based navigation modulated the BOLD signals in the ventromedial prefrontal cortex (vmPFC), whereas the BOLD signals in parahippocampal and hippocampal regions pertained to model-based value signals during map-based navigation. Our findings suggest that the brain might share computational mechanisms and neural substrates for navigation and value-based decisions, such that model-free choice guides route-based navigation and model-based choice directs map-based navigation. These findings open new avenues for computational modelling of wayfinding by directing attention to value-based decision making, in contrast to the common distance-and-direction approaches.
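The mixture of model-free (route-based) and model-based (map-based) control can be sketched as a weighted combination of two value estimates with softmax action selection. The symbols, weights, and example values here are our illustration, not the paper's fitted model:

```python
import numpy as np

def hybrid_values(q_mf, q_mb, w):
    """w = 1 -> purely model-free (route-like); w = 0 -> purely model-based (map-like)."""
    return w * q_mf + (1.0 - w) * q_mb

def softmax_policy(q, beta=3.0):
    """Softmax choice probabilities with inverse temperature beta."""
    z = beta * (q - q.max())        # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Two actions at a junction: action 0 = planned map route, action 1 = habitual turn
q_mf = np.array([0.2, 0.8])   # cached (model-free) action values
q_mb = np.array([0.9, 0.1])   # planned (model-based) action values

for w in (0.0, 0.5, 1.0):
    print(w, softmax_policy(hybrid_values(q_mf, q_mb, w)).round(2))
```

Fitting the weight w (and beta) to each subject's observed paths is, in outline, how a mixture of route- and map-based control can be quantified from behavior.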