Project description: The growing availability of computational hardware and advances in medical machine learning (MML) are increasing the clinical usability of medical mixed reality (MMR). Medical instruments have long played a vital role in surgery. To further accelerate the adoption of MML and MMR, three-dimensional (3D) datasets of such instruments should be publicly available. The proposed data collection consists of 103 medical instruments from clinical routine, digitized with structured-light 3D scanners. It includes, for example, retractors, forceps, and clamps. The collection can be augmented by generating similar models in 3D software, yielding an enlarged dataset for analysis. It can be used for general instrument detection and tracking in operating-room settings, for freeform markerless instrument registration for tool tracking in augmented reality, and for medical simulation or training scenarios in virtual reality as well as medical diminished reality in mixed reality. We hope to ease research in the fields of MMR and MML, and also to motivate the release of a wider variety of needed surgical instrument datasets.
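As a hedged illustration of the augmentation idea described above, the following Python sketch loads scanned instrument meshes and generates randomly rotated and scaled variants. The `instruments/*.stl` layout, the trimesh dependency, and the jitter parameters are assumptions for illustration, not details of the released collection.

```python
# Hypothetical sketch: augmenting a folder of 3D-scanned instrument meshes
# with random rigid rotations and scale jitter, assuming STL files on disk.
import glob
import numpy as np
import trimesh

def augment_mesh(mesh, n_variants=5, scale_jitter=0.05, rng=None):
    """Yield randomly rotated and scaled copies of a scanned instrument mesh."""
    rng = rng or np.random.default_rng()
    for _ in range(n_variants):
        variant = mesh.copy()
        # Random rotation about a random axis.
        angle = rng.uniform(0, 2 * np.pi)
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        variant.apply_transform(trimesh.transformations.rotation_matrix(angle, axis))
        # Small isotropic scale jitter to diversify instrument size.
        variant.apply_scale(1.0 + rng.uniform(-scale_jitter, scale_jitter))
        yield variant

for path in glob.glob("instruments/*.stl"):  # hypothetical dataset layout
    mesh = trimesh.load_mesh(path)
    for i, variant in enumerate(augment_mesh(mesh)):
        variant.export(f"{path[:-4]}_aug{i}.stl")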
Project description: Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies, with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex, and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which eases interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/.
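As a minimal, hedged sketch of how such OWL ontologies can be consumed programmatically, the snippet below loads a local copy of one ontology with rdflib and enumerates its classes. The file name is a placeholder, and the owl:sameAs query is illustrative of following cross-links, not a confirmed detail of how the SenseLab ontologies encode their mappings.

```python
# Minimal sketch: loading an OWL ontology with rdflib and listing its classes.
# "senselab_neurondb.owl" is a hypothetical local copy, not a real file name.
from rdflib import Graph, RDF, OWL

g = Graph()
g.parse("senselab_neurondb.owl", format="xml")

# Enumerate OWL classes defined in the ontology.
for cls in g.subjects(RDF.type, OWL.Class):
    print(cls)

# SPARQL works on the same graph, e.g. to follow cross-links to resources
# such as UniProt (the linking predicate here is illustrative, not verified).
results = g.query(
    "SELECT ?s ?o WHERE { ?s owl:sameAs ?o }",
    initNs={"owl": OWL},
)
for row in results:
    print(row.s, row.o)
```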
Project description: People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments, the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high-contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
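To make the described pipeline concrete, here is a hedged desktop sketch of its three steps (text detection, highlighting, text-to-speech) using pytesseract, OpenCV, and pyttsx3. The actual application runs on an AR headset with different APIs, so the libraries, thresholds, and function below are illustrative assumptions, not the prototype's implementation.

```python
# Desktop analogue of the sign-reading pipeline: OCR a camera frame,
# highlight detected text regions, and speak the recognized content aloud.
import cv2
import pytesseract
import pyttsx3

def read_signs(frame, min_confidence=60):
    data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
    spoken = []
    for i, text in enumerate(data["text"]):
        if text.strip() and float(data["conf"][i]) >= min_confidence:
            x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
            # Highlight the detected region (stand-in for the AR overlay).
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # Re-render the text in high contrast next to its location.
            cv2.putText(frame, text, (x, y - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            spoken.append(text)
    if spoken:
        engine = pyttsx3.init()  # offline text-to-speech
        engine.say(" ".join(spoken))
        engine.runAndWait()
    return frame
```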
Project description: Immersion in virtual environments is an important experimental analog for scientists. Situations that cannot be safely arranged in the real world are simulated virtually to observe, evaluate, and train aspects of human behavior for psychology, therapy, and assessment. However, creating an immersive environment using traditional graphics practices may conflict with a researcher's goal of evaluating user responses to well-defined visual stimuli. A standard computer monitor may display color-accurate stimuli, but it is generally viewed from a seated position, where the participant can see real-world visual context. In this article, we propose a novel means to allow vision scientists to exert finer control over participants' visual stimuli and context. We propose and verify a device-agnostic approach to color calibration by analyzing display properties such as luminance, spectral distribution, and chromaticity. We evaluated five head-mounted displays from different manufacturers and showed how our approach produces conforming visual outputs.
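The chromaticity analysis mentioned above rests on standard colorimetry: integrate the measured spectral power distribution against the CIE 1931 color-matching functions to obtain XYZ tristimulus values, then normalize to xy chromaticity. A minimal numpy sketch, assuming the wavelength grid, SPD, and tabulated color-matching functions are supplied by the user:

```python
# XYZ tristimulus values and CIE 1931 xy chromaticity from a measured
# spectral power distribution. `wavelengths`, `spd`, and the color-matching
# function arrays (xbar, ybar, zbar) stand in for real spectroradiometer
# measurements and tabulated CIE data on the same wavelength grid.
import numpy as np

def spd_to_xy(wavelengths, spd, xbar, ybar, zbar):
    dl = np.gradient(wavelengths)          # per-sample wavelength step (nm)
    X = np.sum(spd * xbar * dl)            # numerical integration of SPD
    Y = np.sum(spd * ybar * dl)            # against each matching function
    Z = np.sum(spd * zbar * dl)
    total = X + Y + Z
    return (X, Y, Z), (X / total, Y / total)
```

Since Y is proportional to luminance, the same integration underlies the luminance side of the calibration as well.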
Project description: Due to technical roadblocks, it is unclear how visual circuits represent multiple features or how behaviorally relevant representations are selected for long-term memory. Here we developed Moculus, a head-mounted virtual reality platform for mice that covers the entire visual field and allows binocular depth perception and full visual immersion. This controllable environment, with three-dimensional (3D) corridors and 3D objects, in combination with 3D acousto-optical imaging, affords rapid visual learning and the uncovering of circuit substrates in one measurement session. Both the control and reinforcement-associated visual cue coding neuronal assemblies are transiently expanded by reinforcement feedback to near-saturation levels. This increases computational capability and allows competition among assemblies that encode behaviorally relevant information. The coding assemblies form partially orthogonal and overlapping clusters centered around hub cells with higher and earlier ramp-like responses, as well as locally increased functional connectivity.
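As a hedged illustration of the functional-connectivity and hub-cell concepts, the sketch below estimates connectivity as pairwise correlations of activity traces and picks the most strongly connected cells as candidate hubs. This is a generic simplification on synthetic data, not the study's actual analysis pipeline.

```python
# Simple functional-connectivity estimate: correlate activity traces across
# cells and rank cells by total connection strength to find candidate hubs.
# `traces` is a hypothetical (n_cells, n_timepoints) array of dF/F signals.
import numpy as np

def functional_hubs(traces, top_k=10):
    corr = np.corrcoef(traces)        # cell-by-cell correlation matrix
    np.fill_diagonal(corr, 0.0)       # ignore self-correlation
    strength = corr.sum(axis=1)       # node strength per cell
    hubs = np.argsort(strength)[::-1][:top_k]
    return corr, hubs

traces = np.random.default_rng(0).standard_normal((200, 5000))  # placeholder data
corr, hubs = functional_hubs(traces)
print("candidate hub cells:", hubs)
```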
Project description: Shared governance is a concept that has been gaining popularity in the nursing field. It is a framework that allows nurses to have a greater role in clinical decision-making. This approach recognizes the expertise and knowledge that nurses possess and allows them to be active participants in the decision-making process. It is a way to empower nurses and to ensure that the best possible care is being provided to patients. By promoting shared governance, nurses are able to work collaboratively with other healthcare professionals and provide high-quality care that is evidence-based and patient-centered. This article presents data collected in an empirical study investigating the impact of implementing a shared governance model on the perceptions of professional governance among nurses working in a tertiary hospital in Saudi Arabia. Governance was measured from the lowest level, traditional governance (management and administration only), to the highest level, self-governance (staff only), through six dimensions of nursing professional governance: personnel, information, resources, participation, practice, and goals. The study was conducted over 8 months, between July 2022 and February 2023, with a random sample of 200 clinical nurses who completed a structured questionnaire before and after the study interventions as part of a quasi-experimental design. The interventions included designing and implementing a shared governance model and providing shared governance training to the clinical nurse participants. The pretest-posttest comparison showed improvements in the level of governance (to the shared governance level, i.e., primarily management/administration with some staff input), which indicates the effectiveness of the professional governance training among nurses working in a tertiary hospital in Saudi Arabia. The data used in this study can be utilized by future studies for benchmarking purposes.
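For readers reusing the data for benchmarking, a minimal sketch of the pretest-posttest comparison style follows, using a paired t-test on synthetic placeholder scores. The study's actual data and statistical analysis may differ.

```python
# Paired pre/post comparison of governance scores for n = 200 nurses.
# The scores below are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(loc=2.0, scale=0.5, size=200)          # baseline scores (assumed scale)
post = pre + rng.normal(loc=0.4, scale=0.3, size=200)   # hypothetical improvement

t, p = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.4f}")
```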
Project description: Purpose: The CAT-EyeQ is a computer adaptive test (CAT) which measures vision-related quality of life in patients with exudative retinal diseases. The aim of this study is to investigate the usability of the CAT-EyeQ in clinical practice and to identify potential barriers and facilitators for implementation (problem analysis). Methods: Patients and health care professionals participated in the usability study of the CAT-EyeQ, and clinic managers and health care professionals were included in the problem analysis for implementation. In total, we conducted 18 semi-structured interviews. The Consolidated Framework for Implementation Research (CFIR) was used to develop the interview guides and to structure the results. Results: Six themes were derived from the usability study and problem analysis: (1) quality of the CAT-EyeQ and its applicability to patients' needs and preferences, (2) embedding the CAT-EyeQ in current practice, (3) implementation climate of the eye hospitals, (4) attitude of professionals, (5) engaging and encouraging professionals, and (6) integration of the CAT-EyeQ in health care - needs after piloting. Conclusions: Patients and professionals mentioned that the CAT-EyeQ improved insight into the impact of eye diseases on a patient's daily life, allowed for more attention to the patient perspective, and enabled structured measurement of vision-related quality of life. The main barriers professionals perceived to using the CAT-EyeQ were lack of time and the integration of the patient-reported outcome measure (PROM) results within the electronic patient record (EPR). Translational relevance: The CAT-EyeQ, accompanied by an overview of stakeholder perspectives resulting from this implementation study, can now be used in clinical practice.
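As background on how a computer adaptive test operates, the sketch below illustrates one common item-selection rule: under a two-parameter logistic (2PL) IRT model, administer the unanswered item with maximum Fisher information at the current ability estimate. The item parameters are invented for illustration and are not the CAT-EyeQ's calibrated values.

```python
# Generic CAT item selection under a 2PL IRT model: pick the unanswered
# item that is most informative at the current ability estimate.
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

a = np.array([1.2, 0.8, 1.5, 1.0])    # discrimination parameters (hypothetical)
b = np.array([-1.0, 0.0, 0.5, 1.2])   # difficulty parameters (hypothetical)
answered = {0}                         # items already administered
theta_hat = 0.3                        # current ability estimate

info = item_information(theta_hat, a, b)
info[list(answered)] = -np.inf         # never re-administer an item
print("next item to administer:", int(np.argmax(info)))
```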
Project description: Virtual reality (VR) has emerged as a novel and effective non-pharmacologic therapy for pain, and there is growing interest in using VR in the acute hospital setting. We sought to explore the cost and effectiveness thresholds VR therapy must meet to be cost-saving as an inpatient pain management program. The result is a framework for hospital administrators to evaluate the return on investment of implementing inpatient VR programs of varying effectiveness and cost. Using decision analysis software, we compared adjuvant VR therapy for pain management vs. usual care among hospitalized patients. In the VR strategy, we analyzed potential cost-savings from reductions in opioid utilization and hospital length of stay (LOS), as well as increased reimbursements from higher patient satisfaction as measured by the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The average overall hospitalization cost-savings per patient for the VR program vs. usual care was $5.39 (95% confidence interval -$11.00 to $156.17). In a probabilistic sensitivity analysis across 1000 hypothetical hospitals of varying size and staffing, VR remained cost-saving in 89.2% of trials. The VR program was cost-saving so long as it reduced LOS by ≥14.6%; the model was not sensitive to differences in opioid use or HCAHPS. We conclude that inpatient VR therapy may be cost-saving for a hospital system primarily if it reduces LOS. In isolation, cost-savings from reductions in opioid utilization and increased HCAHPS-related reimbursements are not sufficient to overcome the costs of VR.
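A probabilistic sensitivity analysis of this kind can be sketched in a few lines: sample uncertain inputs for each hypothetical hospital, compute net per-patient savings, and report the fraction of trials that remain cost-saving. All distributions and dollar values below are invented placeholders, not the published model inputs.

```python
# Monte Carlo sensitivity analysis of an inpatient VR program: net savings
# per patient across 1000 hypothetical hospitals with assumed input ranges.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 1000

cost_per_bed_day = rng.normal(2500, 400, n_trials)      # $/bed-day (assumed)
los_reduction_days = rng.uniform(0.0, 0.5, n_trials)    # days saved per patient
opioid_savings = rng.uniform(0, 20, n_trials)           # $/patient (assumed)
hcahps_gain = rng.uniform(0, 30, n_trials)              # $/patient (assumed)
vr_cost_per_patient = rng.normal(650, 100, n_trials)    # program cost (assumed)

net_savings = (cost_per_bed_day * los_reduction_days
               + opioid_savings + hcahps_gain - vr_cost_per_patient)
print(f"cost-saving in {np.mean(net_savings > 0):.1%} of trials")
```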