The Action Constraints of an Object Increase Distance Estimation in Extrapersonal Space.
ABSTRACT: This study investigated the role of action constraints related to an object in allocentric distance estimation in extrapersonal space. In two experiments, conducted in real and virtual environments, participants who intended to push a trolley had to estimate its distance from a target situated in front of them. The trolley was either empty (i.e., light) or loaded with books (i.e., heavy). Estimated distances were larger for the heavy trolley than for the light one, and the actual distance between the participants and the trolley moderated this effect. These data suggest that the potential mobility of an object used as a reference affects distance estimation in extrapersonal space. Consistent with embodied perception theories, our results indicate that people perceive space in terms of the constraints on their potential actions.
Project description: Learning the spatial organization of the environment is essential for most animals' survival. This requires the animal to derive allocentric spatial information from egocentric sensory and motor experience. The neural mechanisms underlying this transformation are mostly unknown. We addressed this problem in electric fish, which can navigate precisely in complete darkness and whose brain circuitry is relatively simple. We conducted the first neural recordings in the preglomerular complex, the thalamic region exclusively connecting the optic tectum with the spatial learning circuits in the dorsolateral pallium. While tectal topographic information was mostly eliminated in preglomerular neurons, the time intervals between object encounters were precisely encoded. We show that this reliable temporal information, combined with a speed signal, can support accurate estimation of the distance between encounters, a necessary component of path integration that enables computing allocentric spatial relations. Our results suggest that similar mechanisms are involved in sequential spatial learning in all vertebrates.
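The proposed computation amounts to integrating a speed signal over the time interval separating successive encounters. A minimal sketch in Python, assuming a uniformly sampled speed signal (the function name, sampling scheme, and example values are illustrative, not taken from the recordings):

```python
import numpy as np

def distance_between_encounters(speed, dt, encounter_times):
    """Estimate the distance travelled between successive object encounters
    by integrating a speed signal over the intervening time intervals."""
    path = np.cumsum(speed) * dt                      # cumulative path length
    idx = (np.asarray(encounter_times) / dt).astype(int)
    return np.diff(path[idx])                         # distance per interval

# Example: a fish swimming at 0.2 m/s encounters objects at t = 1 s and t = 4 s
dt = 0.01                                             # 100 Hz speed signal
speed = np.full(1000, 0.2)                            # 10 s of speed samples
print(distance_between_encounters(speed, dt, [1.0, 4.0]))  # ~[0.6] m
```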
Project description: An essential difference between pictorial space, displayed in paintings, photographs, or on computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about the distance and direction of objects, in the latter but not in the former. Egocentric information should therefore be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of studies have relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor standing on that table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Project description: Hemispatial neglect (HN) is a failure to allocate attention to the region of space opposite the side of the brain where damage has occurred, usually the left side of space. It is widely documented that there are two types of neglect: egocentric neglect (neglect of information falling on the individual's left side) and allocentric neglect (neglect of the left side of each object, regardless of that object's position relative to the individual). We set out to address whether the presentation of neglect could be shifted from egocentric to allocentric by manipulating task demands while keeping the physical stimulus constant. To do so, we measured the eye movement behaviour of a single group of neglect patients engaged in two different tasks (copying and tracing). Eye movement and behavioural data demonstrated that patients exhibited symptoms consistent with egocentric neglect in one task (tracing) and with allocentric neglect in the other (copying), suggesting that task requirements may influence the nature of the neglect symptoms produced by the same individual. Differing task demands may thus explain the differential neglect symptoms observed in some individuals.
Project description: Human spatial representations are shaped by the affordances for action offered by the environment. A prototypical example is the organization of space into peripersonal (within reach) and extrapersonal (outside reach) regions, mirrored by proximal (this/here) and distal (that/there) linguistic expressions. The peripersonal/extrapersonal distinction has been widely investigated in individual contexts, but little is known about how spatial representations are modulated by interaction with other people. Is near/far coding of space dynamically adapted to the position of a partner when space, objects, and action goals are shared? Across two preregistered experiments based on a novel interactive paradigm, we show that, in individual and social contexts involving no direct collaboration, the linguistic coding of locations as proximal or distal depends on their distance from the speaker's hand. In contrast, in collaborative interactions involving turn-taking and role reversal, proximal space is shifted towards the partner, and the linguistic coding of near space ('this'/'here') is remapped onto the partner's action space.
Project description: To act on the environment, organisms must perceive object locations in relation to their body. Several neuroscientific studies provide evidence of neural circuits that selectively represent space within reach (i.e., peripersonal) and space outside of reach (i.e., extrapersonal). However, the developmental emergence of these space representations remains largely unexplored. We investigated the development of space coding in infant macaques and found that they exhibit different motor strategies and hand configurations depending on an object's size and location. Reaching-grasping improved from 2 to 4 weeks of age, suggesting a broadly defined perceptual body schema at birth that is modified by the acquisition and refinement of motor skills through early sensorimotor experience, enabling the development of a mature capacity for coding space.
Project description: The way new spatial information is encoded seems crucial for disentangling the roles of the decisive regions within the spatial memory network (e.g., hippocampal, parahippocampal, parietal, and retrosplenial areas). Several data sources converge to suggest that the hippocampus is not always involved in, or indeed necessary for, allocentric processing. Hippocampal involvement in spatial coding could instead reflect the integration of new information generated by "online" self-related changes. In this fMRI study, participants first encoded several object locations in a virtual reality environment and then performed a pointing task. Allocentric encoding was maximized by using a survey perspective and an object-to-object pointing task. Two egocentric encoding conditions were used, involving self-related changes processed under a first-person perspective and a self-to-object pointing task: the Egocentric-updating condition involved navigation, whereas the Egocentric with rotation only condition involved orientation changes alone. Conjunction analysis of the spatial encoding conditions revealed wide activation of the occipito-parieto-frontal network and several medio-temporal structures. Interestingly, only the cuneal areas were recruited significantly more by allocentric encoding than by the other spatial conditions. Moreover, enhanced hippocampal activation was found during Egocentric-updating encoding, whereas retrosplenial activation was observed during the Egocentric with rotation only condition. Hence, in some circumstances, hippocampal and retrosplenial structures, known to be involved in allocentric environmental coding, demonstrate preferential involvement in the egocentric coding of space. These results indicate that a raw differentiation between allocentric and egocentric representations no longer seems sufficient for understanding the complexity of the mechanisms involved in spatial encoding.
Project description: The posterior parahippocampal gyrus (PPHG) is strongly engaged during scene recognition and spatial cognition. How PPHG electrophysiological activity could underlie these functions, and whether they share similar timing mechanisms, is unknown. We addressed this question in two intracerebral experiments, which revealed that PPHG neural activity dissociated an early stimulus-driven effect (200-500 ms) from a late task-related effect (600-800 ms). The strongest PPHG gamma-band (50-150 Hz) activity was found early when subjects passively viewed scenes (scene-selectivity effect) and late when they had to estimate the position of an object relative to the environment (allocentric effect). Based on single-trial analyses, we were able to predict when patients viewed scenes (compared to other visual categories) and when they performed allocentric judgments (compared to other spatial judgments). The anatomical location of the strongest effects lay in the depth of the collateral sulcus. Our findings directly inform current theories of visual scene processing and spatial orientation by providing new timing constraints and by demonstrating the existence of separable information processing stages in the functionally defined parahippocampal place area.
Project description: A growing body of evidence suggests that people with Alzheimer's disease (AD) show compromised spatial abilities. In addition, from the earliest stages of AD there is a specific impairment in "mental frame syncing": the ability to synchronize an allocentric, viewpoint-independent representation (including object-to-object information) with an egocentric one by computing the bearing of each relevant object in the environment relative to the stored heading in space (i.e., the information about one's viewpoint contained in the allocentric viewpoint-dependent representation). The main objective of this development-of-concept trial was to evaluate the efficacy of a novel VR-based training protocol focused on enhancing the "mental frame syncing" of these different spatial representations in individuals with AD. We recruited 20 individuals with AD, who were randomly assigned to either the VR-based training or a control group. In addition, eight cognitively healthy elderly individuals were recruited to undergo the VR-based training as a further comparison group. Based on a neuropsychological assessment, our results indicated a significant improvement in long-term spatial memory after the VR-based training for patients with AD, showing that the improvements transferred from the VR-based training to more general aspects of spatial cognition. Interestingly, there was also a significant effect of the VR-based training on executive functioning in the cognitively healthy elderly individuals. In sum, VR can be considered an advanced embodied tool suitable for treating spatial recall impairments.
Project description: Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, whether due to occlusion, our own movements, or the transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. The landmark did not need to be visible throughout the memory delay period, but it was essential that it be visible during both encoding and response. We present a simple model that accurately captures human performance by treating relative (allocentric) spatial information as an independent localization estimate that degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue is likely to be critical for spatial localization in natural settings, which contain an abundance of visual landmarks.
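The integration rule described above corresponds to standard inverse-variance (reliability-weighted) cue combination. A minimal sketch, assuming independent Gaussian egocentric and allocentric estimates and an illustrative linear growth of allocentric noise with landmark distance (all names and parameter values are assumptions, not the authors' fitted model):

```python
def integrate_estimates(ego_est, ego_var, allo_est, allo_var):
    """Fuse egocentric and allocentric location estimates by inverse-variance
    weighting, the statistically optimal rule for independent Gaussian cues."""
    w_ego = (1 / ego_var) / (1 / ego_var + 1 / allo_var)
    fused = w_ego * ego_est + (1 - w_ego) * allo_est
    fused_var = 1 / (1 / ego_var + 1 / allo_var)  # never worse than either cue
    return fused, fused_var

def allo_variance(d, base=0.5, slope=0.3):
    """Allocentric precision degrades with distance d to the landmark
    (illustrative linear form)."""
    return base + slope * d

# A stimulus 2 units from the landmark: the fused estimate lies between the
# two cues and is more precise than either one alone
fused, var = integrate_estimates(ego_est=10.2, ego_var=1.0,
                                 allo_est=9.8, allo_var=allo_variance(2.0))
print(round(fused, 2), round(var, 2))  # 10.01 0.52
```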
Project description: The sensor localization problem can be formalized using distance and orientation constraints, typically in 3D. Local methods can be used to refine an initial location estimate, but in many cases such an estimate is not available, and a method able to determine all the feasible solutions from scratch is necessary. Unfortunately, existing methods that find all the solutions in distance space cannot take orientations into account, or they can only deal with one- or two-dimensional problems and their extension to 3D is troublesome. This paper presents a method that addresses these issues. The proposed approach iteratively projects the problem to decrease its dimension, reduces the ranges of the variable distances, and back-projects the result to the original dimension, obtaining a tighter approximation of the feasible sensor locations. This paper extends previous work by introducing accurate range-reduction procedures that effectively integrate the orientation constraints. The mutual localization of a fleet of robots carrying sensors and the position analysis of a sensor moved by a parallel manipulator are used to validate the approach.
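To make the range-reduction idea concrete, here is a toy 2D version that prunes an axis-aligned box of candidate sensor positions against a single distance constraint (a heavily simplified sketch; the paper's actual method works in 3D, integrates orientation constraints, and iterates projection and back-projection):

```python
def reduce_range(box, anchor, dist_hi):
    """Shrink an axis-aligned box of candidate positions so it remains
    consistent with the constraint ||p - anchor|| <= dist_hi. Lower distance
    bounds cannot shrink a box this way and require splitting (branching),
    which is omitted here."""
    return [(max(lo, a - dist_hi), min(hi, a + dist_hi))
            for (lo, hi), a in zip(box, anchor)]

# Candidate region [0,10] x [0,10]; the sensor lies within 3 units of (1, 1)
box = [(0.0, 10.0), (0.0, 10.0)]
print(reduce_range(box, anchor=(1.0, 1.0), dist_hi=3.0))
# -> [(0.0, 4.0), (0.0, 4.0)]; iterating over all constraints tightens the box
```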