Project description:Near-eye displays are a fundamental technology in next-generation computing platforms for augmented reality and virtual reality. However, challenges remain in delivering immersive and comfortable visual experiences to users, such as achieving a compact form factor, resolving the vergence-accommodation conflict, and attaining high resolution with a large eyebox. Here we show a compact holographic near-eye display concept that combines the advantages of waveguide displays and holographic displays to overcome these challenges on the path towards true 3D holographic augmented reality glasses. By modeling the coherent light interactions and propagation through the waveguide combiner, we demonstrate control of the output wavefront using a spatial light modulator located at the input coupler side. The proposed method enables 3D holographic displays via exit-pupil-expanding waveguide combiners, providing a large software-steerable eyebox. It also offers additional advantages, such as resolution enhancement, by suppressing the phase discontinuities caused by the pupil replication process. We build prototypes to verify the concept experimentally and conclude the paper with a discussion.
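As an illustrative aside, the coherent propagation modeling mentioned above is commonly built on the angular spectrum method. The sketch below is a minimal, hypothetical Python implementation of free-space angular spectrum propagation; it is not the paper's waveguide model (which additionally handles the coupler and pupil replication), and all function names and parameters are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` (all lengths in metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)  # spatial frequencies along x (cycles/m)
    fy = np.fft.fftfreq(ny, d=pitch)  # spatial frequencies along y (cycles/m)
    FX, FY = np.meshgrid(fx, fy)
    # Squared longitudinal spatial frequency; negative values are evanescent.
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    # Transfer function: phase delay for propagating waves, zero for evanescent.
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```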
Project description:The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must first be segmented. Currently, manual segmentation, the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated, which can be 3D printed or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, and 3D visualization and manipulation using augmented and virtual reality. Many studies have investigated the application of augmented and virtual reality to 3D visualization of medical images; however, they are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. After applying the platform to more than 1000 DICOM images and reviewing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
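As a rough illustration of the DICOM-to-mesh portion of such a pipeline, the hypothetical sketch below uses pydicom and scikit-image, with a simple Hounsfield-unit threshold standing in for the platform's trained lung segmentation models; all function names are illustrative.

```python
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure

def load_volume(dicom_dir):
    """Read a DICOM series and convert pixel values to Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

def segment_lung(volume, threshold=-320.0):
    """Crude lung mask: air-filled tissue is strongly negative in HU."""
    return (volume < threshold).astype(np.uint8)

def mask_to_mesh(mask):
    """Extract a triangle mesh from the binary mask via marching cubes."""
    verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5)
    return verts, faces
```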
Project description:The use of augmented reality (AR) in teaching and studying neuroanatomy has been well researched. Previous research showed that AR-based learning of neuroanatomy both alleviated cognitive load and was attractive to young learners. However, how the attractiveness of AR affects student motivation has not been established. Therefore, the motivational effects of AR were investigated in this research using quantitative and qualitative methods. Motivation elicited by GreyMapp-AR, an AR application, was investigated in medical and biomedical sciences students (n = 222; mean age: 19.7 ± 1.4 years) using the Instructional Materials Motivation Survey (IMMS). Its component subscales (i.e., attention, relevance, confidence, and satisfaction) were evaluated alongside overall motivation as measured by the IMMS. Additionally, 19 students underwent audio-recorded individual interviews, which were transcribed for qualitative analysis. Males rated the relevance of AR significantly higher than females (P = 0.024). Appreciation of the GreyMapp-AR program was significantly higher in students studying biomedical sciences than in students studying medicine (P = 0.011). Other components and scores did not show significant differences between student groups. Students expressed that AR was beneficial in increasing their motivation to study subcortical structures and that AR could be helpful and motivating when preparing for an anatomy examination. This study suggests that students are motivated to study neuroanatomy by the use of AR, although the components that make up their individual motivation can differ significantly between groups of students.
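The abstract reports between-group comparisons of IMMS scores but does not state which statistical test produced the quoted P-values. Purely as an illustration, the sketch below compares two hypothetical subscore samples with a Mann-Whitney U test via SciPy; the data values are invented for demonstration only.

```python
from scipy import stats

# Hypothetical IMMS 'relevance' subscores for two groups (invented values,
# for demonstration only; not data from the study).
males = [4.1, 3.8, 4.5, 3.9, 4.2, 4.4, 3.7]
females = [3.6, 3.9, 3.4, 4.0, 3.5, 3.8, 3.3]

# Non-parametric comparison of the two independent samples.
u, p = stats.mannwhitneyu(males, females, alternative="two-sided")
print(f"U = {u:.1f}, P = {p:.3f}")
```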
Project description:Spatially resolved transcriptomics (SRT) produces complex, multi-dimensional gene expression datasets at up to subcellular spatial resolution. While SRT provides powerful datasets for probing biological processes, well-designed computational tools are the key to extracting value from SRT technology. Currently, no single piece of software facilitates the combined automated analysis, visualisation, and subsequent interactive exploration of single- or multi-section SRT data as a desktop application or in an immersive environment. Here we present VR-Omics, a freely available, SRT-platform-agnostic, stand-alone programme that incorporates an in-built, automated workflow to pre-process and spatially mine SRT data within a user-friendly graphical interface. Benchmarking demonstrates that VR-Omics has superior capabilities for seamless end-to-end analyses of SRT data, thereby making SRT data processing and mining accessible to users regardless of their computational and data handling skills. Importantly, VR-Omics supports comparisons between datasets generated using different spatial technologies; together with the processing and analysis of multiple 2D or 3D SRT datasets, this provides a unique environment for biological discovery. Finally, we utilise VR-Omics to uncover the molecular mechanisms that drive the growth of rare paediatric cardiac rhabdomyomas.
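VR-Omics itself is a graphical, stand-alone application; for readers who prefer scripting, the hypothetical sketch below shows the kind of standard SRT pre-processing and spatial mining workflow such a tool automates, using the Scanpy library on a 10x Visium dataset (the path is a placeholder).

```python
import scanpy as sc

# Load a 10x Visium dataset (placeholder path to a Space Ranger output folder).
adata = sc.read_visium("path/to/spaceranger_output")
adata.var_names_make_unique()

# Standard pre-processing: filtering, normalisation, log transform.
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# Dimensionality reduction and spatial-domain clustering.
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)

# Overlay cluster assignments on the tissue image.
sc.pl.spatial(adata, color="leiden")
```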
Project description:A typical optics-based gait analysis laboratory uses expensive stereophotogrammetric motion capture systems. This study aims to propose and validate an affordable gait analysis method using augmented reality (AR) markers with a single action camera. Image processing software calculates the position and orientation of the AR markers. Anatomical landmark calibration is applied to the subject to calibrate each anatomical point with respect to its corresponding AR marker. In this way, anatomical points are tracked through AR markers using homogeneous coordinate transformations, and the further processing of the gait analysis is identical to conventional solutions. The proposed system was validated on nine participants of varying ages against a conventional motion capture system, with simultaneously measured treadmill gait trials at walking speeds of 2, 3, and 4.5 km/h. Coordinates of the virtual anatomical points were compared using Bland-Altman analysis. Spatiotemporal gait parameters (step length, stride length, walking base, cadence, pelvis range of motion) and angular gait parameters (range of motion of the knee, hip, and pelvis angles) were compared between measurement systems by RMS error and Bland-Altman analysis. The proposed method shows some differences in the raw coordinates of the virtually tracked anatomical landmarks and in the gait parameters compared to the reference system. RMS errors of the spatial parameters were below 23 mm, while the angular range-of-motion RMS errors varied from 2.55° to 6.73°. Some of these differences (e.g. knee angle range of motion) are comparable to previously reported differences between commercial motion capture systems and to gait variability. The proposed method can be a very low-cost gait analysis solution, but precision is not guaranteed for every aspect of gait analysis using the currently exemplified implementation of the AR marker tracking approach.
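The core of the described method, expressing each anatomical landmark once in its marker's local frame and then re-projecting it from every tracked marker pose, reduces to homogeneous coordinate transformations. A minimal NumPy sketch, with illustrative function names, is given below.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def calibrate_landmark(T_world_marker, p_world):
    """One-off calibration: express a world-space anatomical point in the
    marker's local frame (returns a homogeneous 4-vector)."""
    return np.linalg.inv(T_world_marker) @ np.append(p_world, 1.0)

def track_landmark(T_world_marker, p_local):
    """Per-frame tracking: re-project the stored local point using the
    marker pose estimated for the current camera frame."""
    return (T_world_marker @ p_local)[:3]
```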
Project description:In this paper we introduce MoSART, a novel approach for Mobile Spatial Augmented Reality on Tangible objects. MoSART is dedicated to mobile interaction with tangible objects in single-user or collaborative situations. It is based on a novel "all-in-one" head-mounted display (HMD) including a projector (for the SAR display) and cameras (for the scene registration). Equipped with the HMD, the user is able to move freely around tangible objects and manipulate them at will. The system tracks the position and orientation of the tangible 3D objects and projects virtual content over them. The tracking is a feature-based stereo optical tracking providing high accuracy and low latency. A projection mapping technique is used to project onto the tangible objects, which can have complex 3D geometry. Several interaction tools have also been designed for interacting with the tangible and augmented content, such as a control panel and a pointer metaphor, which also benefit from the MoSART projection mapping and tracking features. The possibilities offered by our novel approach are illustrated in several use cases, in single-user or collaborative situations, such as virtual prototyping, training, or medical visualization.
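Projection mapping of virtual content onto a tracked tangible object amounts to chaining the tracked object pose, the projector pose, and the projector intrinsics. The sketch below is a minimal, hypothetical NumPy version of that mapping; matrix names are illustrative and lens distortion is ignored.

```python
import numpy as np

def project_vertices(verts_obj, T_world_obj, T_proj_world, K_proj):
    """Map object-space vertices (N, 3) to projector pixel coordinates (N, 2).

    T_world_obj  : 4x4 tracked pose of the tangible object in the world frame
    T_proj_world : 4x4 pose of the world frame in the projector frame
    K_proj       : 3x3 projector intrinsics (calibrated like a camera)
    """
    verts_h = np.c_[verts_obj, np.ones(len(verts_obj))]  # homogeneous coords
    cam = (T_proj_world @ T_world_obj @ verts_h.T)[:3]   # projector frame
    pix = K_proj @ cam
    return (pix[:2] / pix[2]).T                          # perspective divide
```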
Project description:Immersive technologies like stereo rendering, virtual reality, or augmented reality (AR) are often used in the field of molecular visualisation. Modern, comparably lightweight and affordable AR headsets like Microsoft's HoloLens open up new possibilities for immersive analytics in molecular visualisation. A crucial factor for a comprehensive analysis of molecular data in AR is rendering speed. The HoloLens, however, has limited hardware capabilities due to requirements like battery life, fanless cooling, and weight. Consequently, insights from best practices for powerful desktop hardware may not be transferable. Therefore, we evaluate the capabilities of the HoloLens hardware for modern, GPU-enabled, high-quality rendering methods for the space-filling model commonly used in molecular visualisation. We also assess the scalability to large molecular data sets. Based on the results, we discuss ideas and possibilities for immersive molecular analytics. Besides more obvious benefits like the stereoscopic rendering offered by the device, this specifically includes natural user interfaces that use physical navigation instead of the traditional virtual one. Furthermore, we consider different scenarios for such an immersive system, ranging from educational use to collaborative scenarios.
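GPU-accelerated space-filling rendering typically draws each atom as a sphere impostor, with a per-fragment analytic ray-sphere intersection replacing tessellated geometry. On the HoloLens this would run in a shader; the hypothetical NumPy sketch below only illustrates the underlying intersection math.

```python
import numpy as np

def ray_sphere_depth(ray_o, ray_d, center, radius):
    """Analytic ray-sphere intersection, the core of sphere impostor rendering.

    ray_o  : ray origin (3,)
    ray_d  : unit ray direction (3,)
    Returns the distance to the nearest hit, or None if the ray misses
    (a shader would discard the fragment in that case).
    """
    oc = ray_o - center
    b = np.dot(ray_d, oc)               # half the linear coefficient
    c = np.dot(oc, oc) - radius ** 2    # constant term of the quadratic
    disc = b * b - c
    if disc < 0:
        return None                     # ray misses the sphere
    return -b - np.sqrt(disc)           # nearest intersection distance
```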
Project description:The rise of virtual and online education in recent years has led to the development and popularization of many online tools, notably three-dimensional (3D) models and augmented reality (AR), for visualizing various structures in the chemical sciences. The majority of the developed tools focus on either small molecules or biological systems, as information regarding their structure can be easily accessed from online databases or obtained through relatively quick calculations. Because little crystallographic and theoretical data is available for nonbiological macromolecules, there is a noticeable shortage of accessible online tools for the visualization of polymers in 3D. Herein, using a few sample polymers, we showcase a workflow for the generation of 3D models using molecular dynamics and Blender. The 3D structures can then be hosted on p3d.in, where AR models can be generated automatically. Furthermore, the hosted 3D models can be shared via quick response (QR) codes and used in various settings without the need to download any applications.
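Generating the QR code for a hosted model is straightforward; a minimal sketch using the Python qrcode package is shown below. The p3d.in short link is a made-up placeholder, not a real model URL.

```python
import qrcode

# Placeholder short link to a hosted 3D polymer model; substitute the
# real URL of the uploaded model.
url = "https://p3d.in/abc12"

# Encode the link as a QR code image that can be embedded in slides or handouts.
img = qrcode.make(url)
img.save("polymer_model_qr.png")
```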
Project description:Background: Educators often face difficulties in explaining abstract concepts such as vectors. During the ongoing coronavirus disease 2019 (COVID-19) pandemic, fully online classes have also posed additional challenges to conventional teaching methods. Visualization becomes a problem when explaining vector concepts in more than two dimensions. Although Microsoft PowerPoint can integrate animation, the illustrations remain two-dimensional. Augmented reality (AR) technology is recommended to aid educators and students in the teaching and learning of vectors, namely via a vector personal computer augmented reality system (VPCAR), to fulfil the demand for tools that support the learning and teaching of vectors. Methods: A PC learning module for vectors was developed in a 3-dimensional coordinate system using AR technology. Purposive sampling was applied to obtain feedback from educators and students in Malaysia through an online survey. The supportiveness of VPCAR was rated on six items (attractiveness, easiness, visualization, conceptual understanding, inspiration, and helpfulness) using 5-point Likert-type scales. Findings are presented descriptively and graphically. Results: Surprisingly, both students and educators adapted to the new technology easily and provided markedly positive feedback, with the students' and educators' responses showing left-skewed and J-shaped distributions, respectively, for each measurement item. The distributions differed significantly between students and educators, with educators reporting higher levels of support than students. This study introduced a PC learning module rather than a mobile app, as students mostly use laptops to attend online classes and educators also engage other IT tools in their teaching. Conclusions: Based on these findings, VPCAR shows good prospects for supporting educators and students during online teaching and learning. However, the findings may not be generalizable to all students and educators in Malaysia, as purposive sampling was applied. Further studies may focus on government-funded schools using the newly developed VPCAR system, which is the novelty of this study.
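For comparison with the AR module, even a scripted 3D rendering makes the pedagogical point that vectors beyond two dimensions need spatial visualization. The sketch below plots two hypothetical vectors and their sum with Matplotlib's 3D quiver; all values are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

# Two illustrative vectors and their sum, all drawn from the origin.
a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
for v, colour, name in [(a, "r", "a"), (b, "g", "b"), (a + b, "k", "a+b")]:
    ax.quiver(0, 0, 0, v[0], v[1], v[2], color=colour)
    ax.text(*v, name)  # label each arrow at its tip

ax.set_xlim(0, 4); ax.set_ylim(0, 4); ax.set_zlim(0, 4)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```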
Project description:Molecular case studies (MCSs) are open educational resources that use a storytelling approach to engage students in biomolecular structure-function explorations at the interface of biology and chemistry. Although MCSs are developed for a particular target audience with specific learning goals, they are suitable for implementation in multiple disciplinary course contexts. Detailed teaching notes included in each case study help instructors plan and prepare for implementation in diverse contexts. A newly developed MCS was simultaneously implemented in a biochemistry course and a molecular parasitology course at two different institutions. The instructors participating in this cross-institutional and multidisciplinary implementation collaboratively identified the need for quick and effective ways to bridge the gap between the MCS authors' vision and the implementing instructor's interpretation of the case-related molecular structure-function discussions. Augmented reality (AR) offers an interactive and engaging experience that has been used effectively in teaching the molecular sciences. Its accessibility and ease of use with smart devices (e.g., phones and tablets) make it an attractive option for expediting and improving both instructor preparation and classroom implementation of MCSs. In this work, we report the incorporation of ready-to-use AR objects as checkpoints in the MCS. Interacting with these AR objects facilitated instructor preparation, reduced students' cognitive load, and provided clear expectations for their learning. Based on our classroom observations, we propose that incorporating AR into MCSs can facilitate their successful implementation, improve the classroom experience for educators and students, and make MCSs more broadly accessible in diverse curricular settings.