Project description:Background: Educators often face difficulties in explaining abstract concepts such as vectors. During the ongoing coronavirus disease 2019 (COVID-19) pandemic, fully online classes have also posed additional challenges to conventional teaching methods. Visualizing vector concepts in more than 2 dimensions is particularly difficult: although Microsoft PowerPoint can integrate animation, its illustrations remain 2-dimensional. Augmented reality (AR) technology is recommended to aid educators and students in teaching and learning vectors, namely via a vector personal computer augmented reality system (VPCAR), to fulfil the demand for tools that support the learning and teaching of vectors. Methods: A PC learning module for vectors in a 3-dimensional coordinate system was developed using AR technology. Purposive sampling was applied to obtain feedback from educators and students in Malaysia through an online survey. The supportiveness of VPCAR was recorded on 5-point Likert-type scales across six items (attractiveness, easiness, visualization, conceptual understanding, inspiration and helpfulness). Findings are presented descriptively and graphically. Results: Surprisingly, both students and educators adapted to the new technology easily and provided markedly positive feedback, showing a left-skewed, J-shaped distribution for each measurement item. The distributions differed significantly between students and educators, with educators reporting a higher supportive level than students. This study introduced a PC learning module rather than a mobile app, as students mostly use laptops to attend online classes and educators also employ other IT tools in their teaching. Conclusions: Based on these findings, VPCAR shows good prospects for supporting educators and students in their online teaching-learning process. However, the findings may not be generalizable to all students and educators in Malaysia, as purposive sampling was applied. Further studies may focus on government-funded schools using the newly developed VPCAR system, which constitutes the novelty of this study.
Project description:Background: Virtual reality exposure therapy (VRET) is an evidence-based treatment for phobias, and recent research suggests that this also applies to self-contained, automated interventions requiring no therapist guidance. With the advent and growing adoption of consumer VR technology, automated VR interventions have the potential to close the considerable treatment gap for specific phobias through dissemination as consumer applications, self-help at clinics, or blended treatment. There is, however, a lack of translational effectiveness studies on VRET treatment effects under real-world conditions. Methods: We conducted a single-arm (n = 25), single-subject study of automated, gamified VRET for fear of spiders under simulated real-world conditions. After setup and reading instructions, participants completed the automated, single-session treatment by themselves. Self-rated fear of spiders and quality of life served as outcome measures, assessed twice before treatment, one and two weeks after treatment, and at a six-month follow-up. Session characteristics and user-experience measures were collected at the end of the session. Results: Mixed-effects modeling revealed a significant and large (d = 1.26) effect of treatment onset on phobia symptoms (p < .001) and a small (d = 0.49) effect on quality of life (p = .025). Results were maintained at the six-month follow-up (p > .053). The intervention was tolerable and practical. There were no significant correlations between any user-experience measure and the decrease in phobia symptoms (p > .209). Conclusions: An automated VRET intervention for fear of spiders showed effects on phobia symptoms under effectiveness conditions equivalent to those previously reported under efficacy conditions. These results suggest that automated VRET applications are promising self-help treatments also when provided under real-world conditions. Pre-registration: Open Science Framework, https://doi.org/10.17605/OSF.IO/78GUB.
Project description:BACKGROUND: Virtual reality (VR) is an emerging modality for laparoscopic skills training; however, most simulators lack realistic haptic feedback. Augmented reality (AR) is a newer laparoscopic simulation approach that combines physical objects with VR simulation: laparoscopic instruments are used on tissue or objects within a hybrid mannequin while video tracking is applied. This study was designed to assess the difference in realism, haptic feedback, and didactic value between AR and VR laparoscopic simulation. METHODS: The ProMIS AR and LapSim VR simulators were used in this study. The participants performed a basic skills task and a suturing task on both simulators, after which they completed a questionnaire on their demographics and their opinion of both simulators, scored on a 5-point Likert scale. The participants were allocated to 3 groups depending on their experience: experts, intermediates, and novices. Significant differences were calculated with the paired t-test. RESULTS: There was general consensus in all groups that the ProMIS AR laparoscopic simulator is more realistic than the LapSim VR laparoscopic simulator in both the basic skills task (mean 4.22 vs. 2.18, P < 0.001) and the suturing task (mean 4.15 vs. 1.85, P < 0.001). The ProMIS was regarded as having better haptic feedback (mean 3.92 vs. 1.92, P < 0.001) and as being more useful for training surgical residents (mean 4.51 vs. 2.94, P < 0.001). CONCLUSIONS: In comparison with the VR simulator, the AR laparoscopic simulator was regarded by all participants as a better simulator for laparoscopic skills training on all tested features.
Project description:There have been decades of research on the usability and educational value of augmented reality. However, less is known about how augmented reality affects social interactions. The current paper presents three studies that test the social psychological effects of augmented reality. Study 1 examined participants' task performance in the presence of embodied agents and replicated the typical pattern of social facilitation and inhibition: participants performed a simple task better, but a hard task worse, in the presence of an agent than when they completed the tasks alone. Study 2 examined nonverbal behavior. Participants met an agent sitting in one of two chairs and were asked to choose one of the chairs to sit on. Participants wearing the headset never sat directly on the agent when given the choice of two seats, and while approaching, most participants chose a rotation direction that avoided turning their heads away from the agent. A separate group of participants chose a seat after removing the augmented reality headset, and the majority still avoided the seat previously occupied by the agent. Study 3 examined the social costs of using an augmented reality headset with others who are not using one. Participants talked in dyads, and augmented reality users reported less social connection to their partner than those not using augmented reality. Overall, these studies provide evidence that task performance, nonverbal behavior, and social connectedness are significantly affected by the presence or absence of virtual content.
Project description:Recently, metasurfaces composed of artificially fabricated subwavelength structures have shown remarkable potential for manipulating light with unprecedented functionality. Here, we demonstrate, for the first time, a metasurface application that realizes a compact near-eye display system for augmented reality with a wide field of view. The key component is a see-through metalens with an anisotropic response, a high numerical aperture over a large aperture, and broadband characteristics. By virtue of these high-performance features, the metalens can overcome the bottleneck imposed by the narrow field of view and bulkiness of current systems, which hinders their usability and further development. Experimental demonstrations with a nanoimprinted, large-area see-through metalens are reported, showing full-color imaging with a wide field of view and the feasibility of mass production. This work on novel metasurface applications shows great potential for the development of optical display systems for future consumer electronics and computer-vision applications.
Project description:Purpose: Most published systematic reviews have focused on the use of virtual reality (VR)/augmented reality (AR) technology in ophthalmology as it relates to surgical training. To date, this is the first review that investigates the current state of VR/AR technology applied more broadly to the entire field of ophthalmology. Methods: PubMed, Embase, and CINAHL databases were searched systematically from January 2014 through December 1, 2020. Studies that discussed VR and/or AR as it relates to the field of ophthalmology and provided information on the technology used were considered. Abstracts, non-peer-reviewed literature, review articles, studies that reported only qualitative data, and studies without English translations were excluded. Results: A total of 77 studies were included in this review. Of these, 28 evaluated the use of VR/AR in ophthalmic surgical training/assessment and guidance, 7 in clinical training, 23 in diagnosis/screening, and 19 in treatment/therapy. Fifteen studies used AR, 61 used VR, and 1 used both. Most studies focused on the validity and usability of novel technologies. Conclusions: Ophthalmology is a field of medicine that is well suited to the use of VR/AR. However, further longitudinal studies are still needed to examine the practical feasibility, efficacy, and safety of such novel technologies, as well as their cost-effectiveness and medical/legal considerations. We believe that time will foster further technological advances and lead to widespread use of VR/AR in routine ophthalmic practice.
Project description:Study design: A prospective, case-based, observational study. Objectives: To investigate how microscope-based augmented reality (AR) support can be utilized in various types of spine surgery. Methods: In 42 spinal procedures (12 intradural and 8 extradural tumors, 7 other intradural lesions, 11 degenerative cases, 2 infections, and 2 deformities), AR was implemented using operating-microscope head-up displays (HUDs). Intraoperative low-dose computed tomography was used for automatic registration. Nonlinear image registration was applied to integrate multimodality preoperative images. Target and risk structures displayed by AR were defined in the preoperative images by automatic anatomical mapping and additional manual segmentation. Results: AR was applied successfully in all 42 cases. Low-dose protocols ensured low radiation exposure for registration scanning (effective dose: cervical 0.29 ± 0.17 mSv, thoracic 3.40 ± 2.38 mSv, lumbar 3.05 ± 0.89 mSv). A low registration error (0.87 ± 0.28 mm) yielded a reliable AR representation, with visualized objects closely matching reality and distinctly supporting anatomical orientation in the surgical field. Flexible AR visualization, applying either the microscope HUD or video superimposition, with the ability to selectively activate objects of interest and to switch display modes, allowed smooth integration into the surgical workflow without disturbing the actual procedure. On average, 7.1 ± 4.6 objects were displayed, visualizing target and risk structures reliably. Conclusions: Microscope-based AR can be applied successfully to various kinds of spinal procedures. AR improves anatomical orientation in the surgical field, supporting the surgeon, and offers a potential tool for education.
Project description:In video surgery, and more specifically in arthroscopy, one of the major problems is positioning the camera and instruments within the anatomic environment. The concept of computer-guided video surgery has already been used in ear, nose, and throat (ENT) surgery, gynecology, and even hip arthroscopy. These systems, however, rely on optical or mechanical sensors, which turn out to be restrictive and cumbersome. The aim of our study was to develop and evaluate the accuracy of a navigation system based on electromagnetic sensors in video surgery. We used an electromagnetic localization device (Aurora, Northern Digital Inc., Ontario, Canada) to track the movements in space of both the camera and the instruments. We developed a dedicated application in Python, using the VTK library for graphic display and the OpenCV library for camera calibration. A prototype was designed and evaluated for wrist arthroscopy; it displays the theoretical position of instruments on the arthroscopic view with useful accuracy. The augmented reality view represents valuable assistance when surgeons want to position the arthroscope or locate their instruments: it makes the maneuver more intuitive, increases comfort, saves time, and enhances concentration.
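The overlay step such a system performs, drawing a tracked instrument's 3-D position onto the camera image, ultimately reduces to pinhole projection with the intrinsic parameters recovered by camera calibration. Below is a minimal sketch of that projection step; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the instrument-tip coordinates are hypothetical stand-ins for values an OpenCV calibration and the electromagnetic tracker would supply, and lens distortion and the tracker-to-camera transform are omitted for brevity:

```python
def project_point(point_3d, fx, fy, cx, cy):
    """Project a 3-D point, given in camera coordinates (meters),
    onto the image plane using the pinhole camera model.

    fx, fy -- focal lengths in pixels; cx, cy -- principal point.
    Returns the (u, v) pixel coordinates where the virtual
    instrument marker would be drawn on the arthroscopic view.
    """
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v


# Hypothetical example: an instrument tip 10 cm in front of the
# arthroscope, 2 cm right and 1 cm above the optical axis.
u, v = project_point((0.02, -0.01, 0.10), fx=800, fy=800, cx=320, cy=240)
print(round(u), round(v))  # -> 480 160
```

A full pipeline would additionally apply the rigid transform from the electromagnetic sensor frame to the camera frame and correct for lens distortion before drawing the overlay with the rendering library.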
Project description:Computing, as an integral part of continual advancement in medicine, has developed tremendously to minimize risks and improve the precision of surgery. Our review included multi-disciplinary publications in English from 2014 to 2017, retrieved via the Springer, Oxford library, Elsevier, PubMed, and Google Scholar search engines using the terms "augmented reality (AR)," "plastic surgery," "surgery," and "augmented reality ethics and challenges." It was shown that AR has been successfully applied in different branches of surgery, but with concerns and challenges such as acceptance, privacy, and various physical, security, and behavioral threats. To partially address these, a methodological approach for proactive exploration of the cyber-threat landscape has been suggested.