Use of Artificial Intelligence-Based Analytics From Live Colonoscopies to Optimize the Quality of the Colonoscopy Examination in Real Time: Proof of Concept.
Project description: Purpose: Automation and computer assistance can support quality assurance tasks in radiotherapy. Retrospective image review requires significant human resources, and automation of image review remains a notable missing element in previous work. Here, we present initial findings from a proof-of-concept clinical implementation of an AI-assisted review of CBCT registrations used for patient setup. Methods: An automated pipeline was developed and executed nightly, using Python scripts to interact with the clinical database through the DICOM networking protocol and automate data retrieval and analysis. A previously developed artificial intelligence (AI) algorithm scored CBCT setup registrations by misalignment likelihood on a scale from 0 (most unlikely) to 1 (most likely). Over a 45-day period, 1357 pre-treatment CBCT registrations from 197 patients were retrieved and analyzed by the pipeline. Daily summary reports of the previous day's registrations were produced. Initial action levels targeted 10% of cases to highlight for in-depth physics review. A validation subset of 100 cases was scored by three independent observers to characterize AI-model performance. Results: Following an ROC analysis, a global threshold of 0.87 for model predictions was determined, with a sensitivity of 100% and a specificity of 82%. Inspection of the observer scores for the stratified validation dataset showed a statistically significant correlation between observer scores and model predictions. Conclusion: In this work, we describe the implementation of an automated AI-analysis pipeline for daily quantitative analysis of CBCT-guided patient setup registrations. The AI model was validated against independent expert observers, and appropriate action levels were determined to minimize false positives without sacrificing sensitivity. Case studies demonstrate the potential benefits of such a pipeline in bolstering quality and safety programs in radiotherapy. To the authors' knowledge, no previous work has performed AI-assisted assessment of pre-treatment CBCT-based patient alignment.
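The flagging step of such a pipeline can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes registrations have already been retrieved over DICOM and scored by the model, the Registration structure and report layout are invented for the example, and only the 0.87 action threshold is taken from the abstract.

```python
# Minimal illustration of the nightly flagging step (not the authors' code).
# Registration retrieval over DICOM and the full report layout are omitted;
# only the 0.87 action threshold comes from the reported ROC analysis.
from dataclasses import dataclass
from datetime import date
from typing import List

ACTION_THRESHOLD = 0.87  # global threshold determined from the reported ROC analysis

@dataclass
class Registration:
    patient_id: str
    treatment_date: date
    misalignment_score: float  # AI model output in [0, 1]; 1 = misalignment most likely

def daily_report(registrations: List[Registration], report_date: date) -> str:
    """Summarize one day's CBCT setup registrations and flag cases for physics review."""
    todays = [r for r in registrations if r.treatment_date == report_date]
    flagged = [r for r in todays if r.misalignment_score >= ACTION_THRESHOLD]
    lines = [f"CBCT setup review for {report_date}: {len(todays)} registrations, "
             f"{len(flagged)} flagged for in-depth physics review"]
    for r in sorted(flagged, key=lambda r: r.misalignment_score, reverse=True):
        lines.append(f"  patient {r.patient_id}: score {r.misalignment_score:.2f}")
    return "\n".join(lines)

print(daily_report([Registration("anon-001", date(2024, 1, 15), 0.93)], date(2024, 1, 15)))
```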
Project description: Several scoring systems have been devised to objectively predict survival for patients with intrahepatic cholangiocellular carcinoma (ICC) and to support treatment stratification, but they have failed external validation. The aim of the present study was to improve prognostication using an artificial intelligence-based approach. We retrospectively identified 417 patients with ICC who were referred to our tertiary care center between 1997 and 2018. Of these, 293 met the inclusion criteria. Established risk factors served as input nodes for an artificial neural network (ANN). We compared the performance of the trained model to the most widely used conventional scoring system, the Fudan score. For the prediction of 1-year survival, the ANN reached an area under the ROC curve (AUC) of 0.89 in the training set and 0.80 in the validation set. The AUC of the Fudan score was significantly lower in the validation set (0.77, p < 0.001). In the training set, the Fudan score also yielded a lower AUC (0.74), although this difference did not reach significance (p = 0.24). Thus, ANNs incorporating a multitude of known risk factors can outperform conventional risk scores, which typically consist of a limited number of parameters. In the future, such artificial intelligence-based approaches have the potential to improve treatment stratification once models trained on large multicenter data are openly available.
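A minimal sketch of this kind of tabular ANN-versus-score comparison is shown below, using entirely synthetic data; the actual risk factors, network architecture, and Fudan score calculation are not reproduced here, and the simple reference column merely stands in for a conventional score.

```python
# Illustrative sketch (not the authors' model): a small feed-forward network on
# tabular risk factors to predict 1-year survival, compared with a simple
# reference score via ROC AUC. Features, labels, and the reference score are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(293, 8))          # 8 established risk factors (placeholder values)
y = rng.integers(0, 2, size=293)       # 1-year survival (placeholder labels)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)

ann_auc = roc_auc_score(y_val, ann.predict_proba(X_val)[:, 1])
reference_auc = roc_auc_score(y_val, X_val[:, 0])  # stand-in for a conventional risk score
print(f"ANN AUC: {ann_auc:.2f}  reference score AUC: {reference_auc:.2f}")
```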
Project description: Efforts in the pharmaceutical market have been aimed at ensuring that the benefits obtained from the introduction of new therapies justify the associated costs. In recent years, drug payment models in healthcare have undergone a dramatic shift from focusing on volume (i.e., the size of the target clinical population) to focusing on value (i.e., drug performance in real-world settings). In this context, value-based contracts (VBCs) were designed to align the payment of a drug with its clinical performance outside clinical trials by evaluating effectiveness using real-world evidence (RWE). Despite their widespread implementation, several factors jeopardize the application of VBCs to most marketed drugs in the near future, including the need for easily measurable and relevant outcomes associated with clinical improvements, and access to a large patient population to assess those outcomes. Here, we argue that the extraction and analysis of massive amounts of RWE captured in patients' electronic health records (EHRs) will circumvent these issues and optimize negotiations in VBCs. In particular, the use of Natural Language Processing (NLP) has proven successful in the analysis of structured and unstructured clinical information in EHRs in multicenter research studies. Thus, applying NLP to patient-centered information in EHRs in the context of innovative contracting can be highly beneficial, as it enables the real-time evaluation of treatment response and financial impact in real-world settings.
Project description: Background: Indocyanine green fluorescence angiography (ICGFA) is gaining popularity as an intraoperative tool to assess flap perfusion. However, it requires interpretation, and there is concern regarding a potential for over-debridement with its use. Here we describe an artificial intelligence (AI) method that indicates the extent of flap trimming required. Methods: Operative ICGFA recordings from ten consenting patients undergoing flap reconstruction without subsequent partial/total necrosis, as part of an approved prospective study (NCT04220242, Institutional Review Board Ref: 1/378/2092), provided the training and testing datasets. Drawing from prior similar experience with ICGFA intestinal perfusion signal analysis, five fluorescence intensity- and time-related features were analysed (MATLAB R2024a) from stabilised ICGFA imagery. Machine learning model training (with ten-fold cross-validation) was grounded in the actual trimming performed by a consultant plastic surgeon (S.P.) experienced in ICGFA. The MATLAB Classification Learner app was used to identify the most important feature and to generate partial dependence plots for interpretability during training. Testing involved post hoc application to unseen videos, blinded to surgeon ICGFA interpretation. Results: The training and testing datasets comprised 7 and 3 ICGFA videos, with 28 and 3 sampled lines, respectively. Validation and testing accuracies were 99.9% and 99.3%, respectively. Maximum fluorescence intensity was identified as the most important predictive curve feature. Partial dependence plotting revealed a threshold of 22.1 grayscale units, with regions whose maximum intensity fell below this threshold being more likely to be predicted as "excise". Conclusion: The AI method proved discriminative in indicating whether to retain or excise peripheral flap portions. Additional prospective patients and expert references are needed to validate generalisability.
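A minimal Python analogue of this workflow follows (the study itself used MATLAB's Classification Learner): train a classifier on fluorescence-curve features, rank feature importance, and inspect partial dependence on maximum intensity. The data, feature names, and toy labeling rule are hypothetical; only the 22.1-grayscale-unit threshold and the five-feature setup are taken from the abstract.

```python
# Illustrative Python analogue of the MATLAB workflow described above.
# Data and the toy "excise" rule are placeholders for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
feature_names = ["max_intensity", "time_to_max", "ingress_slope", "egress_slope", "curve_auc"]
X = rng.uniform(0, 60, size=(200, 5))        # 5 intensity/time curve features (synthetic)
y = (X[:, 0] < 22.1).astype(int)             # 1 = "excise" (toy rule mimicking the reported threshold)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
print("feature importances:", dict(zip(feature_names, clf.feature_importances_.round(3))))

# Partial dependence of the predicted class on maximum intensity (feature index 0).
pd = partial_dependence(clf, X, features=[0])
print("partial dependence curve (first 5 points):", pd["average"].ravel()[:5])
```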
Project description: Aim: Pre-injury frailty has been investigated as a tool to predict outcomes of older trauma patients. Using artificial intelligence principles of machine learning, we aimed to identify a "signature" (combination of clinical variables) that could predict which older adults are at risk of fall-related hospital admission. We hypothesized that frailty, measured using the 5-item modified Frailty Index, could be utilized in combination with other factors as a predictor of admission for fall-related injuries. Methods: The National Readmission Database was mined to identify factors associated with admission of older adults for fall-related injuries. Older adults admitted for trauma-related injuries from 2010 to 2014 were included. Age, sex, number of chronic conditions, past fall-related admission, comorbidities, 5-item modified Frailty Index, and medical insurance status were included in the analysis. Two machine learning models (logistic regression and random forest) were selected from six tested models. Using a decision tree as a surrogate model for the random forest (see the sketch below), we extracted high-risk combinations of factors associated with admission for fall-related injury. Results: Our approach yielded 18 models. Being a woman was one of the factors most often associated with admission for fall-related injuries. Frailty appeared in four of the 18 combinations. Being a woman, aged 65-74 years, and presenting a 5-item modified Frailty Index score >3 predicted admission for fall-related injuries in 80.3% of this population. Conclusion: Using artificial intelligence principles of machine learning, we were able to develop 18 signatures allowing us to identify older adults at risk of admission for fall-related injuries. Future studies using other databases, such as TQIP, are warranted to validate our high-risk combination models. Geriatr Gerontol Int 2025; 25: 232-242.
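A minimal sketch of the surrogate-tree idea on entirely synthetic data: fit a random forest, then fit a shallow decision tree to the forest's predictions and print the resulting rules. The feature coding, cohort, and extracted rules are placeholders, not the published signatures.

```python
# Illustrative surrogate-model sketch (hypothetical data and feature coding):
# a shallow decision tree is fit to the random forest's predictions so its
# branches can be read as human-interpretable high-risk combinations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["age_group", "female", "mFI5_score", "n_chronic_conditions", "prior_fall_admission"]
X = np.column_stack([
    rng.integers(0, 3, 2000),   # age group: 0 = 65-74, 1 = 75-84, 2 = 85+
    rng.integers(0, 2, 2000),   # female sex
    rng.integers(0, 6, 2000),   # 5-item modified Frailty Index
    rng.integers(0, 10, 2000),  # number of chronic conditions
    rng.integers(0, 2, 2000),   # previous fall-related admission
])
y = rng.integers(0, 2, 2000)    # admission for fall-related injury (placeholder labels)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, forest.predict(X))
print(export_text(surrogate, feature_names=features))  # readable rules approximating the forest
```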
Project description: We present a structured approach to combining explainability of artificial intelligence (AI) with the scientific method for scientific discovery. We demonstrate the utility of this approach in a proof-of-concept study where we uncover biomarkers from a convolutional neural network (CNN) model trained to classify patient sex in retinal images. This is a trait that diagnosticians cannot currently recognize in retinal images, yet one that CNNs classify successfully. Our methodology consists of four phases. In Phase 1, CNN development, we train a visual geometry group (VGG) model to recognize patient sex in retinal images. In Phase 2, Inspiration, we review visualizations obtained from post hoc interpretability tools to make observations and articulate exploratory hypotheses; here, we listed 14 hypotheses about retinal sex differences. In Phase 3, Exploration, we test all exploratory hypotheses on an independent dataset. Of the 14 exploratory hypotheses, nine revealed significant differences. In Phase 4, Verification, we re-tested the nine flagged hypotheses on a new dataset. Five were verified, revealing (i) significantly greater length, (ii) more nodes, and (iii) more branches of retinal vasculature, (iv) greater retinal area covered by the vessels in the superior temporal quadrant, and (v) a darker peripapillary region in male eyes. Finally, we trained a group of ophthalmologists (N=26) to recognize the novel retinal features for sex classification. While their pretraining performance was not different from chance level or from the performance of a nonexpert group (N=31), after training their performance increased significantly (p<0.001, d=2.63). These findings showcase the potential for retinal biomarker discovery through CNN applications, with the added utility of empowering medical practitioners with new diagnostic capabilities to enhance their clinical toolkit.
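A minimal sketch of the kind of post hoc visualization reviewed in Phase 2 follows. The abstract does not name a specific interpretability tool, so a Grad-CAM-style saliency map is used here purely as an illustrative example; the untrained VGG16 and random input stand in for the actual trained sex classifier and real fundus images.

```python
# Illustrative Grad-CAM-style saliency map for a VGG-based binary classifier.
# The model is an untrained placeholder and the input is random noise; in the
# study, such a step would be applied to the trained classifier and fundus images.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(num_classes=2)                     # binary (male/female) classifier head
model.eval()

image = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed fundus image

conv_part = model.features[:30]                  # up to and including the last conv block's ReLU
A = conv_part(image)                             # feature maps, shape (1, 512, 14, 14)
A.retain_grad()                                  # keep gradients for this non-leaf activation

head = torch.nn.Sequential(model.features[30], model.avgpool, torch.nn.Flatten(1), model.classifier)
logits = head(A)
logits[0, logits.argmax()].backward()            # gradient of the predicted class score

weights = A.grad.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients per channel
cam = F.relu((weights * A).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                                 # (1, 1, 224, 224) saliency map reviewed for hypotheses
```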
Project description: Background: Artificial intelligence (AI) using deep learning methods for polyp detection (CADe) and characterization (CADx) is on the verge of clinical application. CADe has already demonstrated its potential in randomized controlled trials. Further efforts are needed to take CADx to the next level of development. Aim: This work aims to give an overview of the current status of AI in colonoscopy without going into too much technical detail. Methods: A literature search was performed to identify important studies exploring the use of AI in colonoscopy. Results: This review focuses on AI performance in screening colonoscopy, summarizing the first prospective trials for CADe and the state of research in CADx, as well as current limitations of these systems and legal issues.
Project description: Objective: To develop and evaluate a data-driven process for generating suggestions to improve alert criteria using explainable artificial intelligence (XAI) approaches. Methods: We extracted data on alerts generated from January 1, 2019 to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts. We applied XAI techniques to generate global and local explanations. We evaluated the generated suggestions by comparing them with the alerts' historical change logs and through stakeholder interviews. Suggestions that either matched (or partially matched) changes already made to the alert or were considered clinically correct were classified as helpful. Results: The final dataset included 2 991 823 firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest area under the ROC curve: 0.919 [0.918, 0.920]. We identified 96 helpful suggestions. A total of 278 807 firings (9.3%) could have been eliminated. Some of the suggestions also revealed workflow and education issues. Conclusion: We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach could identify improvements to clinical decision support (CDS) that might be overlooked or delayed in manual reviews. It also unveils a secondary purpose for XAI: to improve quality by discovering scenarios where CDS alerts are not accepted due to workflow, education, or staffing issues.
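A minimal sketch of pairing a LightGBM alert-response model with global and local explanations: the abstract does not specify the explanation technique, so LightGBM's built-in per-prediction feature contributions are used here only as one example, and the features, data, and outcome coding are hypothetical.

```python
# Illustrative sketch (hypothetical features and data): predict user responses
# to alerts with LightGBM, then derive local and global explanations from the
# model's per-prediction feature contributions.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
feature_names = ["patient_age", "alert_hour", "provider_role", "n_prior_overrides", "lab_value"]
X = rng.normal(size=(5000, 5))
y = rng.integers(0, 2, size=5000)                # 1 = alert accepted, 0 = overridden (placeholder)

model = lgb.LGBMClassifier(n_estimators=200, random_state=0)
model.fit(X, y, feature_name=feature_names)

# Local explanation: contribution of each feature to one alert firing's prediction.
contribs = model.predict(X, pred_contrib=True)   # shape (n, n_features + 1); last column is the bias term
print("contributions for the first firing:", dict(zip(feature_names, contribs[0, :-1].round(3))))

# Global explanation: mean absolute contribution per feature across all firings,
# the kind of summary that can suggest changes to alert criteria.
global_importance = np.abs(contribs[:, :-1]).mean(axis=0)
print("global importance:", dict(zip(feature_names, global_importance.round(3))))
```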
Project description: Background and aims: Colorectal cancer is the third most common cancer in the United States, with colonoscopy being the preferred screening method. Up to 25% of colonoscopies are associated with poor preparation, which leads to prolonged procedure time, repeat colonoscopies, and decreased adenoma detection. Artificial intelligence (AI) is being increasingly used in medicine, from assessing medical school exam questions to writing medical reports. In gastroenterology, it has been used to educate patients with cirrhosis and hepatocellular carcinoma, answer patient questions about colonoscopy, and provide correct colonoscopy screening intervals, with the potential to augment the patient-provider relationship. This study aims to assess the accuracy of a ChatGPT-generated precolonoscopy bowel preparation prompt. Methods: A nonrandomized cross-sectional study assessing perceptions of an AI-generated colonoscopy preparation prompt was conducted in a large multisite quaternary health-care institution in the northeast United States. All practicing gastroenterologists in the health system were surveyed; 208 had a valid email address and were included in the study. A Research Electronic Data Capture survey was then distributed to all participants and analyzed using descriptive statistics. Results: Overall, 91% of gastroenterologists found the prompt easy to understand, 95% considered it scientifically accurate, and 66% were comfortable giving it to their patients. Sixty-four percent of reviewers correctly identified the ChatGPT-generated prompt, but only 32% were confident in their answer. Conclusion: The ability of ChatGPT to create a sufficient bowel preparation prompt highlights how physicians can incorporate AI into clinical practice to improve the ease and efficiency of communication with patients regarding bowel preparation.
Project description: Reviews have traditionally been based on extensive searches of the available bibliography on the topic of interest. However, this approach is frequently influenced by the authors' background, leading to possible selection bias. Artificial intelligence applied to natural language processing (NLP) is a powerful tool that can be used for systematic reviews by speeding up the process and providing more objective results, but its use in scientific literature reviews is still scarce. This manuscript addresses this challenge by developing a reproducible tool that can be used to produce objective reviews on almost any topic. As a proof of concept, this tool has been used to review the antibacterial activity of Cistus genus plant extracts, providing a comprehensive and objective state of the art on this topic based on the analysis of 1601 research manuscripts and 136 patents. Data were processed using a publicly available Jupyter Notebook in Google Colaboratory. NLP, when applied to the study of the antibacterial activity of Cistus plants, is able to recover the main scientific manuscripts and patents related to the topic while avoiding such biases. The NLP-assisted literature review reveals that C. creticus and C. monspeliensis are the first and second most studied Cistus species, respectively. Leaves and fruits are the most commonly used plant parts, and methanol, followed by butanol and water, is the most widely used solvent to prepare plant extracts. Furthermore, Staphylococcus aureus, followed by Bacillus cereus, is the most studied bacterial species; these are also the most susceptible bacteria in all studied assays. This new tool aims to change the current paradigm of scientific literature review to make the process more efficient, reliable, and reproducible, in accordance with Open Science standards.
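A minimal sketch of the kind of term-frequency analysis such an NLP-assisted review can perform: tally mentions of species, plant parts, and solvents across a corpus of abstracts. The corpus, term lists, and matching strategy below are placeholders and do not reproduce the published notebook.

```python
# Illustrative term-counting sketch (not the authors' notebook): tally mentions
# of species, plant parts, and solvents across a small placeholder corpus.
import re
from collections import Counter

abstracts = [
    "Methanol extracts of Cistus creticus leaves inhibited Staphylococcus aureus.",
    "Aqueous and butanol extracts of Cistus monspeliensis fruits were tested against Bacillus cereus.",
]

term_groups = {
    "species": ["cistus creticus", "cistus monspeliensis"],
    "plant part": ["leaves", "fruits"],
    "solvent": ["methanol", "butanol", "water", "aqueous"],
}

counts = {group: Counter() for group in term_groups}
for text in abstracts:
    lowered = text.lower()
    for group, terms in term_groups.items():
        for term in terms:
            counts[group][term] += len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))

for group, counter in counts.items():
    print(group, counter.most_common())
```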