An augmented reality system for image guidance of transcatheter procedures for structural heart disease.
ABSTRACT: The primary mode of visualization during transcatheter procedures for structural heart disease is fluoroscopy, which suffers from low contrast and provides no depth perception, limiting the interventionalist's ability to position a catheter accurately. This paper describes a new image guidance system that uses augmented reality to provide a 3D visual environment and quantitative feedback on the catheter's position within the patient's heart. The real-time 3D position of the catheter is acquired from two fluoroscopic images taken at different angles, and a patient-specific 3D heart rendering is produced pre-operatively from a CT scan. The spine acts as a fiducial landmark, allowing the position and orientation of the catheter within the heart to be fully registered. The automated registration method is based on the Fourier transform and has a high success rate (100%), low registration error (0.42 mm), and clinically acceptable computational cost (1.22 seconds). The 3D renderings are displayed and updated on the augmented reality device (i.e., Microsoft HoloLens), which can present pre-set views of the heart from various angles via voice command. This new augmented reality image-guidance system provides better visualization to interventionalists and can potentially assist their understanding of complicated cases. Furthermore, coupled with the developed 3D printed models, the system can serve as a training tool for the next generation of cardiac interventionalists.
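The abstract does not detail its Fourier-based registration; a standard Fourier-domain technique for recovering a translational offset between two images (e.g., aligning the spine landmark across frames) is phase correlation. The sketch below is illustrative, with synthetic images, and is not the authors' implementation:

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer (row, col) shift of `mov` relative to `ref`
    via the normalized cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # delta peak marks the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# synthetic "fluoroscopy" frame and a circularly shifted copy
rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128))
mov = np.roll(ref, shift=(5, -9), axis=(0, 1))
print(phase_correlation(ref, mov))  # → (5, -9)
```

Because only the phase of the cross-power spectrum is kept, the estimate is insensitive to global brightness and contrast differences, which suits low-contrast fluoroscopic images.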
Project description:To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of the system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the positions of the patient and the surgical instrument. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. A change in viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in the three-dimensional position of each measuring point between the solid model and the augmented reality navigation was almost negligible (<1 mm), indicating that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
Project description:BACKGROUND:Freehand ventricular catheter placement may offer limited accuracy in achieving the surgeon's intended optimal catheter position. OBJECTIVE:To investigate the accuracy of a ventricular catheter guide assisted by a simple mobile health application (mHealth app) in a multicenter, randomized, controlled, single-blinded study (GAVCA study). METHODS:In total, 139 eligible patients were enrolled in 9 centers. Catheter placement was evaluated by 3 different components: the number of ventricular cannulation attempts, a grading scale, and the anatomical position of the catheter tip. The primary endpoint was the rate of primary cannulation achieving grade I catheter position in the ipsilateral ventricle. The secondary endpoints were the rate of intraventricular position of the catheter's perforations, early ventricular catheter failure, and complications. RESULTS:The primary endpoint was reached in 70% of the guided group vs 56.5% of the freehand group (odds ratio 1.79, 95% confidence interval 0.89-3.61). The primary successful puncture rate was 100% vs 91.3% (P = .012). Catheter perforations were located completely inside the ventricle in 81.4% (guided group) and 65.2% (freehand group; odds ratio 2.34, 95% confidence interval 1.07-5.1). No differences occurred in early ventricular catheter failure, complication rate, duration of surgery, or hospital stay. CONCLUSION:The guided ventricular catheter application proved to be a safe and simple method. The primary endpoint showed a nonsignificant improvement in optimal catheter placement between the groups. Long-term follow-up is necessary to evaluate differences in catheter survival among shunted patients.
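The odds ratios reported above follow the standard 2x2-table computation with a Wald confidence interval. The sketch below reproduces the primary-endpoint figures using counts back-calculated from the reported percentages under an assumed near-even randomization (49/70 guided, 39/69 freehand); the study's actual arm sizes are not given in the abstract:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = events in group 1, b = non-events in group 1,
    c = events in group 2, d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# assumed counts: guided 49/70 grade-I placements, freehand 39/69
or_, lo, hi = odds_ratio_ci(49, 70 - 49, 39, 69 - 39)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # → OR = 1.79, 95% CI 0.89-3.61
```

That the interval spans 1.0 is why the abstract calls the improvement nonsignificant despite the higher point estimate.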
Project description:We present the case of a 59-year-old patient with persistent atrial fibrillation, referred for atrial fibrillation ablation. The procedure was performed with the help of the NAVX 3D mapping system (Saint Jude Medical) and the iLAB Ultra ICE Plus ultrasound imaging catheter (Boston Scientific). The catheter provides cross-sectional images perpendicular to its long axis. From the inside, the left atrial appendage (LAA) looks trabeculated, owing to pectinate muscles running parallel to each other. The presence of a thrombus in the appendage was excluded. The contractility of the LAA was also assessed using multiple frames recorded on videotape. Our case demonstrates that the LAA's morphology and function can be directly assessed by intracardiac ultrasound with the probe inserted inside the appendage.
Project description:The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more routinely with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be printed in 3D or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is substantial research on applying augmented and virtual reality to 3D medical image visualization; however, those systems are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study.
Analyzing the application of the platform to more than 1000 DICOM images and reviewing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
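The abstract does not describe Nextmed's learned segmentation algorithms. As a minimal classical baseline for the same step, extracting a structure from a CT volume can be sketched as intensity thresholding followed by largest-connected-component selection; the volume and thresholds below are synthetic, not the platform's method:

```python
import numpy as np
from scipy import ndimage

def largest_component_mask(volume, lo, hi):
    """Binary mask of the largest connected region whose voxel
    intensities fall within [lo, hi] — a crude classical stand-in
    for a learned anatomical segmentation."""
    mask = (volume >= lo) & (volume <= hi)
    labels, n = ndimage.label(mask)          # 26/6-connectivity labeling
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# toy "CT": one large bright structure and one small distractor
vol = np.zeros((40, 40, 40))
vol[5:25, 5:25, 5:25] = 300      # 20x20x20 target structure
vol[30:34, 30:34, 30:34] = 300   # small disconnected blob
seg = largest_component_mask(vol, 200, 400)
print(int(seg.sum()))  # → 8000
```

From such a binary mask, a surface mesh for AR/VR display or 3D printing is typically extracted with marching cubes.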
Project description:Introduction:Endovascular aortic repair (EVAR) is a minimally invasive technique that prevents life-threatening rupture in patients with aortic pathologies by implantation of an endoluminal stent graft. During the endovascular procedure, device navigation is currently performed by fluoroscopy in combination with digital subtraction angiography. This study presents the current iterative process of biomedical engineering within the disruptive interdisciplinary project Nav EVAR, which includes advanced navigation, imaging techniques and augmented reality with the aim of reducing side effects (namely radiation exposure and contrast agent administration) and optimising visualisation during EVAR procedures. This article describes the current prototype developed in this project and the experiments conducted to evaluate it. Methods:The current approach of the Nav EVAR project is guiding EVAR interventions in real-time with an electromagnetic tracking system after attaching a sensor on the catheter tip and displaying this information on Microsoft HoloLens glasses. This augmented reality technology enables the visualisation of virtual objects superimposed on the real environment. These virtual objects include three-dimensional (3D) objects (namely 3D models of the skin and vascular structures) and two-dimensional (2D) objects [namely orthogonal views of computed tomography (CT) angiograms, 2D images of 3D vascular models, and 2D images of a new virtual angioscopy whose appearance of the vessel wall follows that shown in ex vivo and in vivo angioscopies]. Specific external markers were designed to be used as landmarks in the registration process to map the tracking data and radiological data into a common space. In addition, the use of real-time 3D ultrasound (US) is also under evaluation in the Nav EVAR project for guiding endovascular tools and updating navigation with intraoperative imaging.
US volumes are streamed from the US system to HoloLens and visualised at a certain distance from the probe by tracking augmented reality markers. A human model torso that includes a 3D printed patient-specific aortic model was built to provide a realistic test environment for evaluation of technical components in the Nav EVAR project. The solutions presented in this study were tested by using an US training model and the aortic-aneurysm phantom. Results:During the navigation of the catheter tip in the US training model, the 3D models of the phantom surface and vessels were visualised on HoloLens. In addition, a virtual angioscopy was also built from a CT scan of the aortic-aneurysm phantom. The external markers designed for this study were visible in the CT scan and the electromagnetically tracked pointer fitted in each marker hole. US volumes of the US training model were sent from the US system to HoloLens in order to display them, showing a latency of 259±86 ms (mean±standard deviation). Conclusion:The Nav EVAR project tackles the problem of radiation exposure and contrast agent administration during EVAR interventions by using a multidisciplinary approach to guide the endovascular tools. Its current state presents several limitations such as the rigid alignment between preoperative data and the simulated patient. Nevertheless, the techniques shown in this study in combination with fibre Bragg gratings and optical coherence tomography are a promising approach to overcome the problems of EVAR interventions.
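Mapping electromagnetically tracked marker positions into CT space, as done with the external markers above, is a paired-point rigid registration problem; a standard closed-form solution is the Kabsch algorithm via SVD. The sketch below uses synthetic marker coordinates and is not the project's actual implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch algorithm): returns R, t
    such that dst ≈ src @ R.T + t, for paired Nx3 marker coordinates."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)    # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# synthetic: 4 tracked markers, rotated/translated into "CT space"
rng = np.random.default_rng(1)
markers = rng.uniform(-50, 50, size=(4, 3))      # tracker coordinates (mm)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ct_points = markers @ R_true.T + np.array([10.0, -5.0, 2.0])
R, t = rigid_register(markers, ct_points)
print(np.allclose(markers @ R.T + t, ct_points))  # → True
```

With noisy marker localizations the same formula gives the least-squares fit, and the residual per marker is the fiducial registration error.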
Project description:Once a science-fiction fantasy, virtual reality (VR) technology has evolved and come a long way. Together with augmented reality (AR) technology, these simulations of an alternative environment have been incorporated into rehabilitation treatments. The introduction of head-mounted displays has made VR/AR devices more intuitive and compact, and no longer limited to upper-limb rehabilitation. However, there is still limited evidence supporting the use of VR and AR technology during locomotion, especially regarding safety and efficacy in relation to walking biomechanics. Therefore, the objective of this study was to explore the limitations of such technology through gait analysis. In this study, thirteen participants walked on a treadmill in normal, virtual and augmented versions of the laboratory environment. A series of spatiotemporal parameters and lower-limb joint angles were compared between conditions. The center of pressure (CoP) ellipse area (95% confidence ellipse) differed significantly between conditions (p = 0.002). Pairwise comparisons indicated a significantly greater CoP ellipse area for both the AR (p = 0.002) and VR (p = 0.005) conditions compared with the normal laboratory condition. Furthermore, there was a significant difference in stride length (p<0.001) and cadence (p<0.001) between conditions. No statistically significant difference was found in hip, knee and ankle joint kinematics between the three conditions (p>0.082), except for maximum ankle plantarflexion (p = 0.001). The differences in CoP ellipse area indicate that users of head-mounted VR/AR devices had difficulty maintaining a stable position on the treadmill. The differences in gait parameters also suggest that users walked with an unusual gait pattern, which could potentially affect the effectiveness of gait rehabilitation treatments.
Based on these results, position guidance in the form of feedback and the use of specialized treadmills should be considered when using head-mounted VR/AR devices.
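The 95% confidence ellipse used above is commonly computed from the eigenvalues of the covariance matrix of the 2D CoP samples; one common convention scales by the chi-square 95th percentile with 2 degrees of freedom. The CoP traces below are synthetic, and this may not match the study's exact formulation:

```python
import numpy as np

CHI2_95_2DOF = 5.991  # 95th percentile of chi-square with 2 d.o.f.

def cop_ellipse_area(cop_xy):
    """Area of the 95% confidence ellipse of 2D CoP samples (Nx2),
    pi * chi2 * sqrt(lambda1 * lambda2) from the covariance eigenvalues."""
    cov = np.cov(cop_xy, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)        # principal-axis variances
    return np.pi * CHI2_95_2DOF * np.sqrt(np.prod(eigvals))

# synthetic CoP traces (mm): the "AR" trace sways more than "normal"
rng = np.random.default_rng(2)
normal = rng.normal(scale=[5.0, 8.0], size=(2000, 2))
ar = rng.normal(scale=[9.0, 14.0], size=(2000, 2))
print(cop_ellipse_area(ar) > cop_ellipse_area(normal))  # → True
```

A larger ellipse area means the CoP wandered over a wider region, which is the instability signature the study reports for the head-mounted-display conditions.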
Project description:Advanced visualization of medical image data in the form of three-dimensional (3D) printing continues to expand in clinical settings, and many hospitals have started to adopt 3D technologies to aid in patient care. It is imperative that radiologists and other medical professionals understand the multi-step process of converting medical imaging data into 3D-printable model files. To educate health care professionals about the steps required to prepare DICOM data for 3D printing anatomical models, hands-on courses have been delivered at the Radiological Society of North America (RSNA) annual meeting since 2014. In this paper, a supplement to the RSNA 2018 hands-on 3D printing course, we review methods to create cranio-maxillofacial (CMF), orthopedic, and renal cancer models which can be 3D printed or visualized in augmented reality (AR) or virtual reality (VR).
Project description:Human-machine interfaces are essential components of various human-machine interactions such as entertainment, robotics control, smart homes, and virtual/augmented reality. Recently, various triboelectric-based interfaces have been developed toward flexible wearable and battery-less applications. However, most of them exhibit complicated structures and require a large number of electrodes for multidirectional control. Herein, a bio-inspired spider-net-coding (BISNC) interface with great flexibility, scalability, and single-electrode output is proposed, created by connecting information-coding electrodes into a single triboelectric electrode. Two types of coding designs are investigated, i.e., information coding by large/small electrode width (L/S coding) and information coding with/without an electrode at a predefined position (0/1 coding). The BISNC interface shows high scalability, with a single electrode detecting and/or controlling multiple directions by distinguishing different output signal patterns. In addition, it has excellent reliability and robustness in actual usage scenarios, since recognition of signal patterns is independent of absolute amplitude and is thereby not affected by sliding speed/force, humidity, etc. Based on the spider-net-coding concept, single-electrode interfaces for multidirectional 3D control, security code systems, and flexible wearable electronics are successfully developed, indicating the great potential of this technology in diversified applications such as human-machine interaction, virtual/augmented reality, security, robotics, and the Internet of Things.
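The amplitude-independent recognition described above can be illustrated for L/S coding: pulses in the single-electrode signal are segmented as runs above a threshold and classified by their duration relative to the mean pulse duration, so absolute signal amplitude drops out. This is an illustrative decoder on a synthetic trace, not the authors' algorithm:

```python
def decode_ls_pattern(signal, threshold=0.5):
    """Decode an L/S (large/small electrode-width) pattern from a
    single-electrode trace: each above-threshold run is a pulse, and
    each pulse is classified against the mean pulse duration, making
    the decoding independent of absolute amplitude."""
    widths, run = [], 0
    for v in signal:
        if v > threshold:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    mean_w = sum(widths) / len(widths)
    return "".join("L" if w > mean_w else "S" for w in widths)

# synthetic sliding trace: wide, narrow, narrow, wide pulses
sig = [0]*5 + [1]*12 + [0]*5 + [1]*4 + [0]*5 + [1]*4 + [0]*5 + [1]*12 + [0]*5
print(decode_ls_pattern(sig))  # → LSSL
```

Classifying by the ratio of widths rather than absolute durations also gives partial robustness to sliding speed, provided the speed is roughly constant within one swipe.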
Project description:BACKGROUND:Three-dimensional intracardiac echocardiography (3D ICE) with wide azimuthal elevation is a novel technique for assessing cardiac anatomy and guiding intracardiac procedures, able to provide unique views with good spatial and temporal resolution. Complications arising from this invasive procedure and the value of 3D ICE in the detection and diagnosis of acute cardiovascular pathology are not comprehensively described. This case illustrates a previously unreported iatrogenic complication, namely displacement of a clot from the intravascular sheath upon insertion of a 3D ICE catheter, and the value of 3D ICE in the immediate diagnosis of a clot in transit through the heart with pulmonary embolism. CASE PRESENTATION:We conducted a translational study of 3D ICE with wide azimuthal elevation to guide implantation of a left ventricular assist device (Impella CP®) in eight adult sheep. A large-bore 14 Fr central venous sheath was used to enable right atrial and right ventricular access for the intracardiac catheter. Insertion of the 3D ICE catheter was accompanied by sudden severe cardiorespiratory deterioration in one animal. 3D ICE revealed a large, highly mobile mass within the right heart chambers, determined to be a clot in transit. A diagnosis was made of pulmonary clot embolism resulting from retrograde blood entry into the large-bore sheath introducer, rapid clot formation, and consequent displacement into the venous circulation by the ICE catheter. The sheep survived this life-threatening event following institution of cardiovascular support, allowing completion of the primary research protocol. CONCLUSION:This report serves as a serious warning to researchers and clinicians utilizing long large-bore sheath introducers for 3D ICE and illustrates the value of 3D ICE in detecting a clot in transit within the right heart chambers.
Project description:Commercial availability of three-dimensional (3D) augmented reality (AR) devices has increased interest in using this novel technology for visualizing neuroimaging data. Here, a technical workflow and algorithm for importing 3D surface-based segmentations derived from magnetic resonance imaging data into a head-mounted AR device is presented and illustrated on selected examples: the pial cortical surface of the human brain, fMRI BOLD maps, reconstructed white matter tracts, and a brain network of functional connectivity.
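A common interchange step in workflows like the one above is exporting the surface-based segmentation as a triangle mesh in a standard file format that AR toolchains (e.g., Unity-based HoloLens pipelines) can import. The minimal Wavefront OBJ writer below is an illustrative sketch, not the paper's actual pipeline:

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh as Wavefront OBJ: one 'v x y z' line per
    vertex and one 'f i j k' line per face (OBJ indices are 1-based)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# single triangle as a smoke test (vertices in mm, 0-based face indices)
write_obj("tri.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(open("tri.obj").read().splitlines()[0])  # → v 0 0 0
```

Real neuroimaging surfaces (e.g., a FreeSurfer pial surface) are just much larger vertex/face arrays fed through the same kind of writer, after any coordinate-system conversion the target engine requires.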