Virtually the same? How impaired sensory information in virtual reality may disrupt vision for action.
ABSTRACT: Virtual reality (VR) is a promising tool for expanding the possibilities of psychological experimentation and implementing immersive training applications. Despite a recent surge in interest, there remains an inadequate understanding of how VR impacts basic cognitive processes. Because egocentric distance cues are presented artificially in virtual environments, a number of cues to depth in the optic array are impaired or placed in conflict with one another. Moreover, realistic haptic information is all but absent from current VR systems. The resulting conflicts could not only impact the execution of motor skills in VR but also raise deeper concerns about basic visual processing, and about the extent to which virtual objects elicit neural and behavioural responses representative of real objects. In this brief review, we outline how the novel perceptual environment of VR may affect vision for action by shifting users away from a dorsal mode of control. Fewer binocular cues to depth, conflicting depth information, and limited haptic feedback may all impair the specialised, efficient, online control of action characteristic of the dorsal stream. A shift from dorsal to ventral control of action may create a fundamental disparity between virtual and real-world skills, with important consequences for how we understand perception and action in the virtual world.
Project description: BACKGROUND: Virtual reality (VR) is an emerging modality for laparoscopic skills training; however, most simulators lack realistic haptic feedback. Augmented reality (AR) is a newer laparoscopic simulation approach that combines physical objects with VR simulation: laparoscopic instruments are used on tissue or physical objects within a hybrid mannequin while being video-tracked. This study was designed to assess the difference in realism, haptic feedback, and didactic value between AR and VR laparoscopic simulation. METHODS: The ProMIS AR and LapSim VR simulators were used in this study. Participants performed a basic skills task and a suturing task on both simulators, after which they completed a questionnaire about their demographics and their opinion of both simulators, scored on a 5-point Likert scale. Participants were allotted to three groups depending on their experience: experts, intermediates, and novices. Significance was assessed with the paired t-test. RESULTS: There was general consensus in all groups that the ProMIS AR laparoscopic simulator is more realistic than the LapSim VR laparoscopic simulator in both the basic skills task (mean 4.22 vs. 2.18, P < 0.001) and the suturing task (mean 4.15 vs. 1.85, P < 0.001). The ProMIS was regarded as having better haptic feedback (mean 3.92 vs. 1.92, P < 0.001) and as being more useful for training surgical residents (mean 4.51 vs. 2.94, P < 0.001). CONCLUSIONS: In comparison with the VR simulator, the AR laparoscopic simulator was regarded by all participants as the better simulator for laparoscopic skills training on all tested features.
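A minimal sketch of the core analysis described above, a paired t-test on per-participant Likert ratings for the two simulators, might look as follows in Python (the rating values are hypothetical placeholders, not the study's data):

    # Paired t-test on 5-point Likert realism ratings for the two simulators.
    # Rating values are hypothetical placeholders, not the study's data.
    from scipy import stats

    promis_ar = [4, 5, 4, 4, 5, 4, 4, 5]  # ProMIS AR ratings, one per participant
    lapsim_vr = [2, 2, 3, 2, 1, 2, 3, 2]  # LapSim VR ratings, same participants

    t, p = stats.ttest_rel(promis_ar, lapsim_vr)  # paired comparison
    print(f"t = {t:.2f}, p = {p:.4f}")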
Project description: In perceptual psychology, estimates of visual depth and size under different spatial layouts have been studied extensively. However, comparable evidence in virtual environments (VE) is relatively scarce. The emergence of human-computer interaction (HCI) and virtual reality (VR) has raised the question of how human operators perform actions based on estimates of visual properties in VR, especially when the sensory cues associated with the same object conflict. We report an experiment in which participants compared the size of a visual sphere with that of a haptic sphere, both belonging to the same object in a VE. The sizes from the visual and haptic modalities were either identical or conflicting (with the visual size larger than the haptic size, or vice versa). We used three standard haptic references (small, medium, and large) and asked participants to compare the visual sizes with the given reference using the method of constant stimuli. Results show a dominant functional priority of visual size perception. Moreover, observers demonstrated a central tendency effect: over-estimation of smaller haptic sizes but under-estimation of larger ones. These results are in line with previous studies in real environments (RE). We discuss the findings in the framework of adaptation-level theory for the haptic size reference. This work has important implications for the optimal design of human-computer interactions that integrate 3D visual-haptic information in a VE.
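The method of constant stimuli used above is typically analysed by fitting a psychometric function to the proportion of "visual sphere judged larger" responses at each comparison size; a minimal sketch under that standard assumption (sizes and proportions are illustrative, not the study's data):

    # Fit a cumulative-Gaussian psychometric function to size-comparison data.
    # The point of subjective equality (PSE) shows whether the haptic reference
    # is over- or under-estimated, as in the central tendency effect above.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    visual_size = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])  # cm (illustrative)
    p_larger = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.97])

    def psychometric(x, pse, sigma):
        return norm.cdf(x, loc=pse, scale=sigma)

    (pse, sigma), _ = curve_fit(psychometric, visual_size, p_larger, p0=[6.5, 0.5])
    print(f"PSE = {pse:.2f} cm, slope sigma = {sigma:.2f} cm")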
Project description: Identifying an object's material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (shinier). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced only a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction: slippery objects were rated as glossier and vice versa. The sensory system thus treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception, a sensible strategy given the ambiguity of visual cues to gloss.
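The reported integration is consistent with standard reliability-weighted cue combination; the sketch below shows that textbook scheme rather than the authors' stated model, with illustrative cue estimates and variances:

    # Reliability-weighted (maximum-likelihood) combination of two cues.
    # Estimates and variances are illustrative assumptions.
    def integrate_cues(est_visual, var_visual, est_haptic, var_haptic):
        w_v = (1 / var_visual) / (1 / var_visual + 1 / var_haptic)
        combined = w_v * est_visual + (1 - w_v) * est_haptic
        combined_var = 1 / (1 / var_visual + 1 / var_haptic)  # below either cue alone
        return combined, combined_var

    # A slippery feel (here coded as a higher haptic "glossiness" estimate)
    # pulls the combined gloss percept above the visual estimate alone.
    print(integrate_cues(est_visual=0.6, var_visual=0.04, est_haptic=0.8, var_haptic=0.09))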
Project description: The visual system exploits multiple signals, including monocular and binocular cues, to determine the motion of objects through depth. In the laboratory, sensitivity to different three-dimensional (3D) motion cues varies across observers and is often weak for binocular cues. However, laboratory assessments may reflect factors beyond inherent perceptual sensitivity. For example, the appearance of weak binocular sensitivity may relate to extensive prior experience with two-dimensional (2D) displays in which binocular cues are not informative. Here we evaluated the impact of experience on motion-in-depth (MID) sensitivity in a virtual reality (VR) environment. We tested a large cohort of observers who reported having no prior VR experience and found that binocular cue sensitivity was substantially weaker than monocular cue sensitivity. As expected, sensitivity was greater when monocular and binocular cues were presented together than in isolation. Surprisingly, the addition of motion parallax signals appeared to cause observers to rely almost exclusively on monocular cues. As observers gained experience in the VR task, sensitivity to monocular and binocular cues increased. Notably, most observers were unable to distinguish the direction of MID based on binocular cues above chance level when tested early in the experiment, whereas most showed statistically significant sensitivity to binocular cues when tested late in the experiment. This result suggests that observers may discount binocular cues when they are first encountered in a VR environment. Laboratory assessments may thus underestimate the sensitivity of inexperienced observers to MID, especially for binocular cues.
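The above-chance criterion mentioned for binocular direction judgements can be formalised as a one-sided binomial test; a minimal sketch with hypothetical trial counts:

    # Is the proportion of correct toward/away judgements above chance (50%)?
    # Trial counts are hypothetical, not the study's data.
    from scipy.stats import binomtest

    result = binomtest(68, 100, p=0.5, alternative='greater')
    print(f"proportion correct = 0.68, p = {result.pvalue:.4f}")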
Project description: Advances in virtual reality (VR) technologies allow the investigation of simulated moral actions in visually immersive environments. Using a robotic manipulandum and an interactive sculpture, we now also incorporate realistic haptic feedback into virtual moral simulations. In two experiments, we found that participants made more utilitarian choices in virtual and haptic environments than in traditional questionnaire assessments of moral judgments. In Experiment 1, which incorporated a robotic manipulandum, the physical power of simulated utilitarian responses (calculated as the product of force and speed) was predicted by individual levels of psychopathy. In Experiment 2, which integrated an interactive, life-like sculpture of a human into a VR simulation, greater utilitarian actions continued to be observed. Together, these results support a disparity between simulated moral action and moral judgment. Overall, this research combines state-of-the-art virtual reality, robotic movement simulation, and realistic human sculptures to enhance moral paradigms that are often contextually impoverished. This combination provides a better assessment of simulated moral action and illustrates the embodied nature of morally relevant actions.
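The power measure in Experiment 1 is defined above as the product of force and speed; a minimal sketch with illustrative manipulandum samples:

    # Instantaneous power = force x speed, per sample; values are illustrative.
    import numpy as np

    force = np.array([2.0, 3.5, 5.1, 4.2])      # N
    speed = np.array([0.10, 0.22, 0.35, 0.28])  # m/s

    power = force * speed  # W
    print(f"peak power = {power.max():.2f} W")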
Project description: When used in educational settings, simulations utilizing virtual reality (VR) technologies can reduce training costs while providing a safe and effective learning environment. Tasks can be easily modified to match the learning objectives of trainees at different levels (e.g., novice, intermediate, expert) and can be repeated for the development of psychomotor skills. VR offers a multisensory experience, providing visual, auditory, and haptic sensations with varying levels of fidelity. While simulating visual and auditory stimuli is relatively easy and cost-effective, comparable representations of haptic sensation still require further development. Evidence suggests that mixing high- and low-fidelity realistic sensations (e.g., auditory and haptic) can improve the overall perception of realism; however, whether this also leads to improved performance has not been examined. The current study examined whether audiohaptic stimuli presented in a virtual drilling task lead to improved motor performance and subjective realism compared with auditory stimuli alone. Right-handed participants (n = 16) completed 100 drilling trials of each stimulus type. Performance measures indicated that participants overshot the target during auditory trials and undershot it during audiohaptic trials. Undershooting is thought to be indicative of improved performance, as it optimizes both time and energy requirements.
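The over- and undershoot measure can be expressed as the signed error between achieved and target drilling depth; a minimal sketch with hypothetical depths and units:

    # Signed drilling error per trial (positive = overshoot, negative = undershoot).
    # Target depth and trial values are illustrative assumptions.
    import numpy as np

    target_depth = 10.0                                # mm
    achieved = np.array([10.6, 10.3, 9.7, 9.5, 10.8])  # mm, one value per trial

    signed_error = achieved - target_depth
    print(f"mean signed error = {signed_error.mean():+.2f} mm")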
Project description: Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (e.g., colour or sound) are easier to present than others (e.g., object weight or vestibular cues), so substitute cues can be used to enhance the informational content of a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality with alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic, and visual cues signalling critical events in the simulation were manipulated in a factorial design. Subjective ratings were recorded via questionnaires, and the time taken to complete the task was used as an objective performance measure. The results show that participants performed best, and felt an increased sense of immersion and involvement (collectively referred to as 'presence'), when substitute multimodal sensory feedback was provided. Significant main effects of auditory and tactile cues on task performance and on participants' subjective ratings were found, along with a significant negative relationship between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and users' overall experience. On this basis, we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.
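The reported negative objective-subjective relationship amounts to a correlation between completion times and presence ratings; a minimal sketch with placeholder values:

    # Correlate task completion time with presence ratings; a negative r mirrors
    # the direction of the reported effect. Values are illustrative placeholders.
    import numpy as np
    from scipy.stats import pearsonr

    completion_time = np.array([182, 140, 210, 165, 130, 195])  # seconds
    presence = np.array([3.1, 4.2, 2.8, 3.6, 4.5, 3.0])         # questionnaire score

    r, p = pearsonr(completion_time, presence)
    print(f"r = {r:.2f}, p = {p:.3f}")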
Project description: The aim of this study was to develop a peg transfer training module and to assess its face, content, and construct validity on box, virtual reality (VR), cognitive virtual reality (CVR), augmented reality (AR), and mixed reality (MR) trainers, thereby comparing the advantages and disadvantages of these simulators. The training system (VatsSim-XR) design includes customized haptic-enabled thoracoscopic instruments, a virtual reality headset, an endoscope kit with navigation, and the corresponding patient-specific training environment. A cohort of 32 trainees, comprising 24 novices and 8 experts, used the real and virtual simulators in the department of thoracic surgery of Yunnan First People's Hospital. Both subjective and objective evaluations were developed to explore the potential of visual and haptic enhancements in peg transfer education. Experiments and evaluations conducted with both expert and novice thoracic surgeons show that experts' surgical skills were better than novices' overall; that the AR trainer provided the most balanced training environment in terms of visuohaptic fidelity and accuracy; that the box trainer and the MR trainer demonstrated the best realism of 3D perception and the most immersive surgical performance, respectively; and that the CVR trainer showed a better clinical effect than the traditional VR trainer. By combining these elements in a systematic approach, tuned to specific fidelity requirements, medical simulation systems could provide a more immersive and effective training environment.
Project description: When we use virtual and augmented reality (VR/AR) environments to investigate behaviour or train motor skills, we expect that the insights or skills acquired in VR/AR transfer to real-world settings. Motor behaviour is strongly influenced by perceptual uncertainty and the expected consequences of actions, and VR/AR differ from natural environments in both of these respects. Perceptual information in VR/AR is less reliable than in natural environments, and the knowledge that one is acting in a virtual environment might modulate expectations about action consequences. Using mirror reflections to create a virtual environment free of perceptual artefacts, we show that hand movements in an obstacle avoidance task systematically differed between real and virtual obstacles, and that these behavioural differences occurred independently of the quality of the available perceptual information. This suggests that even when perceptual correspondence between natural and virtual environments is achieved, action correspondence does not necessarily follow, owing to the disparity in the expected consequences of actions in the two environments.
Project description: BACKGROUND: Virtual reality (VR) offers unprecedented opportunities as a scientific tool to study visuomotor interactions, training, and rehabilitation applications. However, it remains unclear whether haptic-free hand-object interactions in a virtual environment (VE) differ from those performed in the physical environment (PE). We therefore sought to establish whether the coordination structure between the transport and grasp components remains similar when a reach-to-grasp movement is performed in PE versus VE. METHOD: Reach-to-grasp kinematics were examined in 13 healthy right-handed young adults. Subjects were instructed to reach, grasp, and lift three differently sized rectangular objects located at three different distances from the starting position. Object size and location were matched between the two environments. Contact with the virtual objects was based on a custom collision detection algorithm. Differences between the environments were evaluated by comparing movement kinematics of the transport and grasp components. RESULTS: Correlation coefficients, and the slopes of the regression lines, between the reach and grasp components were similar for the two environments. Likewise, the kinematic profiles of transport velocity and grasp aperture were strongly correlated across the two environments. An rmANOVA further identified some similarities and differences in movement kinematics between the two environments, most prominently that the closure phase of the reach-to-grasp movement was prolonged when movements were performed in VE. CONCLUSIONS: Reach-to-grasp movements performed in a VE showed both similarities to and specific differences from those performed in PE. Additionally, we demonstrate a novel approach for parsing the reach-to-grasp movement into three phases (initiation, shaping, closure) based on established kinematic variables, and show that the differences in performance between the environments are attributable to the closure phase. We discuss this in the context of how collision detection parameters may modify hand-object interactions in VE. Our study shows that haptic-free VE may be a useful platform for studying reach-to-grasp movements, with potential implications for haptic-free VR in neurorehabilitation.
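A hypothetical sketch of the three-phase parsing described in the conclusions, using a transport-velocity onset threshold and the moment of peak grip aperture as phase boundaries (these specific criteria are illustrative assumptions, not the study's published rules):

    # Split a reach-to-grasp trial into initiation, shaping, and closure phases.
    # The onset threshold and peak-aperture criterion are illustrative choices.
    import numpy as np

    def parse_phases(transport_vel, grip_aperture, vel_onset=0.05):
        """transport_vel: wrist speed (m/s); grip_aperture: thumb-index distance (m)."""
        onset = int(np.argmax(transport_vel > vel_onset))  # first sample above threshold
        peak = int(np.argmax(grip_aperture))               # peak aperture starts closure
        n = len(transport_vel)
        return {"initiation": (0, onset), "shaping": (onset, peak), "closure": (peak, n - 1)}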