Project description: The Rhode Island IDeA Network of Biomedical Research Excellence Molecular Informatics Core at the University of Rhode Island Information Technology Services Innovative Learning Technologies developed virtual and augmented reality applications to teach concepts in biomedical science, including pharmacology, medicinal chemistry, cell culture and nanotechnology. The apps were developed both as full virtual reality/augmented reality versions and as 3D gaming versions that do not require virtual reality headsets. Development challenges included creating intuitive user interfaces, text-to-voice functionality, visualization of molecules and implementing complex science concepts. In-app quizzes are used to assess the user's understanding of topics, and user feedback was collected for several apps to improve the experience. The apps were positively reviewed by users and are being implemented into the curriculum at the University of Rhode Island.
Project description: The COVID-19 pandemic has negatively affected every aspect of human life. In these challenging times, nursing students facing academic and psychological difficulties are advised to use augmented reality applications in the health sciences to increase their motivation and academic performance. The main aim of the study was to examine nursing students' acceptance of augmented reality technology in their education and training. The study is a quantitative study using the causal-comparative screening method. The data were collected online from 419 nursing students. A hybrid analytical method was used: first, hypotheses describing linear relationships between the variables were defined and tested with structural equation modeling; second, artificial neural networks were used to identify non-linear relationships between the variables. The results show that the nursing students have a high intention of using augmented reality technology for self-learning. The most emphasized motive behind this intention is the expectation that augmented reality technology will improve their academic performance. They also think that AR technology has many potential benefits to offer in the future. A considerable number of students already use augmented reality technology for its usefulness and out of hedonic motivation. In conclusion, nursing students show high acceptance of augmented reality technology in their education and training. In a world where e-learning and self-directed learning have become widespread, it is expected that students will demand augmented reality applications as part of a holistic education and as an alternative to traditional textbooks.
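The hybrid design above pairs a linear technique (structural equation modeling) with a non-linear one (artificial neural networks). As a toy illustration of why a non-linear second stage can add value, the sketch below uses Pearson correlation as a stand-in for a linear path coefficient; the data are synthetic and the example is illustrative, not the study's actual analysis:

```python
# Illustrative sketch: a purely linear measure can score a perfectly
# deterministic but non-linear relationship as zero, which is the gap a
# non-linear model (e.g., an artificial neural network) is meant to close.
# Synthetic data only; not the study's analysis pipeline.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [-3, -2, -1, 0, 1, 2, 3]
y = [v * v for v in x]  # y is fully determined by x, but non-linearly

r_linear = pearson(x, y)                      # 0.0: the linear measure sees no link
r_nonlinear = pearson([v * v for v in x], y)  # 1.0: a non-linear feature captures it
```

A linear-paths-only analysis would report no relationship between x and y here, while even a small neural network (or, as above, an explicit non-linear feature) recovers the dependence, which mirrors the rationale for combining the two methods.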
Project description: Introduction: The field of augmented reality (AR) is rapidly growing, with many new potential applications in medical education. This systematic review investigated the current state of augmented reality applications (ARAs) and developed an analytical model to guide future research in assessing ARAs as teaching tools in medical education. Methods: A literature search was conducted using PubMed, Embase, Web of Science, Cochrane Library, and Google Scholar. This review followed PRISMA guidelines and included publications from January 1, 2000 to June 18, 2018. Inclusion criteria were experimental studies evaluating ARAs implemented in healthcare education and published in English. Our review evaluated study quality and determined whether studies assessed ARA validity, using criteria established by the GRADE Working Group and Gallagher et al., respectively. These findings were used to formulate an analytical model to assess the readiness of ARAs for implementation in medical education. Results: We identified 100,807 articles in the initial literature search; 36 met the inclusion criteria for final review and were grouped into three categories: Surgery (23), Anatomy (9), and Other (4). The overall quality of the studies was poor, and no ARA was tested for all five stages of validity. Our analytical model evaluates the research quality, application content, outcomes, and feasibility of an ARA to gauge its readiness for implementation. Conclusion: While AR technology is growing at a rapid rate, the current quality and breadth of AR research in medical training are insufficient to recommend its adoption into educational curricula. We hope our analytical model will help standardize AR assessment methods and define the role of AR technology in medical education.
Project description: Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to providing learning opportunities is the use of augmented reality (AR), where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed, and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review, allowing the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported up to 2012. Results. 2529 research papers were found in ERIC, CINAHL, Medline, PubMed, Web of Science and SpringerLink. Three qualitative, 20 quantitative and 2 mixed-methods studies were included. Using a thematic analysis, we describe three aspects related to research, technology and education. The study showed that AR was applied to a wide range of topics in healthcare education. Furthermore, learners reported acceptance of AR as a learning technology, as well as its potential for improving different types of competencies. Discussion. AR is still considered a novelty in the literature. Most of the studies reported early prototypes, and the AR applications designed lacked an explicit pedagogical theoretical framework. Finally, the learning strategies adopted were of the traditional 'see one, do one, teach one' style and did not integrate clinical competencies to ensure patient safety.
Project description: Background: The COVID-19 pandemic drastically reduced opportunities for surgical skill sharing between high-income and low- to middle-income countries. Augmented reality (AR) technology allows mentors in one country to virtually train a mentee in another country during surgical cases without international travel. We hypothesize that AR technology is an effective live surgical training and mentorship modality. Methods: Three senior urologic surgeons in the US and UK worked with four urologic surgeon trainees across the continent of Africa using AR systems. Trainers and trainees individually completed post-operative questionnaires evaluating their experience. Results: Trainees rated the quality of virtual training as equivalent to in-person training in 83% of cases (N = 5 of 6 responses). Trainers reported the technology's visual quality as "acceptable" in 67% of cases (N = 12 of 18 responses). The audiovisual capabilities of the technology had a "high" impact in the majority of cases. Conclusion: AR technology can effectively facilitate surgical training when in-person training is limited or unavailable.
Project description: Objectives: This study aimed to review and synthesize the current research and state of augmented reality (AR) and mixed reality (MR) and the applications developed for healthcare education beyond surgery. Methods: An integrative review was conducted on all relevant material, drawing on different data sources, including the PubMed, PsycINFO, and ERIC databases, from January 2013 to September 2018. Inductive content analysis and qualitative synthesis were performed. Additionally, the quality of the studies was assessed with different structured tools. Results: Twenty-six studies were included. Studies based on both AR and MR involved established applications in 27% of all cases (n=6), the rest being prototypes. The most frequently studied subjects were related to anatomy and anesthesia (n=13). All studies showed several healthcare educational benefits of AR and MR, which significantly outperformed traditional learning approaches in 11 studies examining various outcomes. Studies had low-to-medium quality overall, with a MERSQI mean of 12.26 (SD=2.63), while the single qualitative study had high quality. Conclusions: This review suggests progress in AR- and MR-based learning approaches for various medical subjects, with the research base moving away from feasibility studies on prototypes. Yet the lacking validity of study conclusions, the heterogeneity of research designs, and widely varied reporting challenge the transferability of the findings of the included studies. Future studies should examine suitable research designs and instructional objectives achievable by AR- and MR-based applications to strengthen the evidence base, making it relevant for medical educators and institutions to apply these technologies.
Project description: The field of radiation oncology is rapidly advancing through technological and biomedical innovation backed by robust research evidence. However, cancer professionals are notoriously time-poor, so there is a need for high-quality, accessible and tailored oncologic education programs. While traditional teaching methods, including lectures and other in-person delivery formats, remain important, digital learning (DL) has provided additional teaching options that can be delivered flexibly and on-demand from anywhere in the world. While evidence of this digital migration has been apparent for some time, it has not always been met with the same enthusiasm by the teaching community, in part due to questions about its pedagogical effectiveness. Many of these reservations have been driven by rudimentary utilisation of the medium and inexperience with digital best practice. With increasing familiarity and understanding of the medium, increasingly sophisticated and pedagogically driven learning solutions can be produced. This article reviews the application of immersive digital learning tools in radiation oncology education, including first- and second-generation Virtual Reality (VR) environments and Augmented Reality (AR). It explores the data behind, and best-practice application of, each of these tools, and gives practical tips for educators who are looking to implement (or refine) their use of these learning methods. It includes a discussion of how to match digital learning methods to the content being taught and ends with a horizon scan of where the digital medium may take us in the future. This article is the second in a two-part series; the companion piece covers Screen-Based Digital Learning Methods in Radiation Oncology. Overall, the digital space is well placed to cater to the evolving educational needs of oncology learners. Further uptake over the next decade is likely to be driven by the desire for flexible on-demand delivery, high-yield products, engaging delivery methods and programs tailored to individual learning needs. Educational programs that embrace these principles will have unique opportunities to thrive in this space.
Project description: Gross anatomy knowledge is an essential element of medical students' education, and nowadays cadaver-based instruction remains the main instructional tool able to provide three-dimensional (3D) and topographical comprehension. The aim of the study was to develop and test a prototype of an innovative tool for medical education in human anatomy based on the combination of augmented reality (AR) technology and a tangible 3D-printed model that can be explored and manipulated by trainees, thus favoring a three-dimensional and topographical learning approach. After development, the tool, called AEducaAR (Anatomical Education with Augmented Reality), was tested and evaluated by 62 second-year medical students attending the human anatomy course at the International School of Medicine and Surgery of the University of Bologna. Students were divided into two groups: AEducaAR-based learning ("AEducaAR group") was compared to standard learning using a human anatomy atlas ("Control group"). Both groups performed an objective test and completed an anonymous questionnaire. In the objective test, the results showed no significant difference between the two learning methods; in the questionnaire, however, students showed enthusiasm and interest for the new tool and highlighted its training potential in open-ended comments. The presented AEducaAR tool, once implemented, may therefore contribute to enhancing students' motivation for learning and increasing long-term memory retention and 3D comprehension of anatomical structures. Moreover, this new tool might help medical students approach innovative medical devices and technologies useful in their future careers.
Project description: Human anatomical specimen museums are commonly used by medical, nursing, and paramedical students. Through dissection and prosection, the specimens housed in these museums allow students to appreciate the complex relationships of organs and structures in more detail than textbooks could provide. However, it may be difficult for students, particularly novices, to identify the various parts of these anatomical structures without additional explanations from a docent or supplemental illustrations. Recently, augmented reality (AR) has been used in many museum exhibits to display virtual objects in videos captured from the real world, a technology that can significantly enhance the learning experience. In this study, three AR-based support systems for tours of medical specimen museums were developed, and their usability and effectiveness for learning were examined. The first system was built around an AR marker: it displayed virtual label information for specimens when AR markers were captured by a tablet camera. Individual AR markers were required for all specimens, and their presence in and on the prosected specimens could be obtrusive. The second system instead used the specimen image itself as an image marker, as most specimens were displayed in cross section; visitors could then obtain the AR label information without any markers intruding on the display or the anatomical specimens. The third system consisted of a head-mounted display combined with a natural click interface, providing visitors with an environment for natural manipulation of virtual objects and offering future scalability.
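The first system's core behavior — turning a detected marker into the virtual label information overlaid on a specimen — reduces to a lookup keyed by marker ID. A minimal sketch, with invented specimen entries and function names (actual marker detection would come from an AR library such as ARToolKit or ArUco, which is not modeled here):

```python
# Sketch of the marker-based system's lookup step. Detection of the AR marker
# in the camera frame is assumed to be handled by an AR library; this code
# only models the mapping from a recognized marker ID to label information.
# All specimen entries are invented placeholders.

SPECIMEN_LABELS = {
    101: {"specimen": "heart, coronal section",
          "labels": ["left ventricle", "mitral valve", "aortic arch"]},
    102: {"specimen": "brain, sagittal section",
          "labels": ["corpus callosum", "cerebellum", "brainstem"]},
}

def labels_for_marker(marker_id):
    """Return label info for a detected marker, or None if it is unregistered."""
    return SPECIMEN_LABELS.get(marker_id)

info = labels_for_marker(101)     # label info to overlay on the tablet view
unknown = labels_for_marker(999)  # unregistered marker -> None
```

The second system differs only in the key: the recognized cross-sectional image of the specimen itself serves as the lookup key instead of a physical marker ID, which is why no extra markers need to intrude on the display.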
Project description: Augmented reality (AR) enhances the user's environment by projecting virtual objects into the real world in real time. Brain-computer interfaces (BCIs) are systems that enable users to control external devices with their brain signals. BCIs can exploit AR technology to interact with the physical and virtual world and to explore new ways of displaying feedback, which is important for users who need to perceive and regulate their brain activity or shape their communication intentions while operating in the physical world. In this study, twelve healthy participants were introduced to and asked to choose between two motor-imagery tasks: mental drawing and interacting with a virtual cube. Participants first performed a functional localizer run, which was used to select a single fNIRS channel for decoding their intentions in eight subsequent choice-encoding runs. In each run, participants were asked to select one choice from a six-item list. A rotating AR cube was displayed on a computer screen as the main stimulus; each face of the cube was presented for 6 s and represented one choice from the six-item list. For five consecutive trials, participants were instructed to perform the motor-imagery task when the face of the cube that represented their choice was facing them, thereby temporally encoding the selected choice. At the end of each run, participants were provided with the decoded choice based on a joint analysis of all five trials. If the decoded choice was incorrect, the participant applied an active error-correction procedure. The choice list provided in each run was based on the decoded choice of the previous run. The experimental design allowed participants to navigate twice through a virtual menu consisting of four levels if all choices were correctly decoded. Here we demonstrate for the first time that by using AR feedback and flexible choice encoding in the form of search trees, we can increase the degrees of freedom of a BCI system. We also show that participants can successfully navigate through a nested menu and achieve a mean accuracy of 74% using a single motor-imagery task and a single fNIRS channel.
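The temporal choice-encoding scheme — perform motor imagery only while the cube face for your choice is shown, repeat over five trials, then pick the face whose time windows carry the strongest signal — can be sketched with a toy simulation. Everything here (the signal model, sampling rate, and noise level) is an illustrative assumption, not the study's actual fNIRS pipeline:

```python
import random

random.seed(42)

FACES = 6             # cube faces = items per menu level
FACE_SECONDS = 6      # each face is shown for 6 s (as in the study)
TRIALS = 5            # the choice is encoded over five cube rotations
SAMPLES_PER_SEC = 10  # assumed sampling rate, for illustration only

def simulate_run(chosen_face, activation=1.0, noise=0.5):
    """Synthetic single-channel signal: it rises whenever the chosen face is shown."""
    signal = []
    for _ in range(TRIALS):
        for face in range(FACES):
            for _ in range(FACE_SECONDS * SAMPLES_PER_SEC):
                level = activation if face == chosen_face else 0.0
                signal.append(level + random.gauss(0.0, noise))
    return signal

def decode_choice(signal):
    """Joint analysis of all trials: average per face; the highest mean wins."""
    window = FACE_SECONDS * SAMPLES_PER_SEC
    means = [0.0] * FACES
    for trial in range(TRIALS):
        for face in range(FACES):
            start = (trial * FACES + face) * window
            means[face] += sum(signal[start:start + window]) / window / TRIALS
    return max(range(FACES), key=lambda f: means[f])

# Navigate a four-level nested menu: each level is one choice-encoding run.
results = []
for _ in range(4):
    intended = random.randrange(FACES)
    decoded = decode_choice(simulate_run(intended))
    results.append((intended, decoded))
```

With six faces per level and four levels, correct decoding at every level selects one of 6^4 = 1296 leaves using a single motor-imagery task, illustrating the gain in degrees of freedom the abstract refers to; the active error-correction step would handle runs where the wrong face wins.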