Association between surgical disease burden and research productivity in surgery across the globe: a big data comparative analysis using artificial intelligence.
Project description: Study design: Retrospective cohort study. Objectives: Leveraging electronic health records (EHRs) for spine surgery research is impeded by concerns regarding patient privacy and data ownership. Synthetic data derivatives may help overcome these limitations. This study's objective was to validate the use of synthetic data for spine surgery research. Methods: Data came from the EHRs of 15 hospitals. Patients who underwent anterior cervical or posterior lumbar fusion (2010-2020) were included. Real data were obtained from the EHR; synthetic data were generated to simulate the properties of the real data without maintaining a one-to-one correspondence with real patients. Within each cohort, the ability to predict 30-day readmissions and 30-day complications was evaluated using logistic regression and extreme gradient boosting machines (XGBoost). Results: We identified 9,072 real and 9,088 synthetic cervical fusion patients. Descriptive characteristics were nearly identical between the 2 datasets. When predicting readmission, models built using real and synthetic data both had c-statistics of 0.69-0.71 using logistic regression and XGBoost. Among 12,111 real and 12,126 synthetic lumbar fusion patients, descriptive characteristics were nearly the same for most variables. Using logistic regression and XGBoost to predict readmission, discrimination was similar for models built using real and synthetic data (c-statistics 0.66-0.69). When predicting complications, models derived using real and synthetic data showed similar discrimination in both cohorts. Despite some differences, the most influential predictors were similar in the real and synthetic datasets. Conclusion: Synthetic data replicate most descriptive and predictive properties of real data and may therefore expand EHR research in spine surgery.
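A minimal sketch of the kind of real-versus-synthetic comparison described above, assuming two tabular cohorts with numeric features and a binary 30-day readmission outcome; the file names, column names, and model settings are hypothetical illustrations, not the study's actual schema or pipeline.

```python
# Sketch: compare discrimination (c-statistic) of models trained on
# real vs. synthetic fusion cohorts. Paths and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier


def c_statistics(df: pd.DataFrame, outcome: str = "readmit_30d") -> dict:
    """Fit logistic regression and XGBoost; return held-out c-statistics."""
    X = df.drop(columns=[outcome])
    y = df[outcome]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    results = {}
    for name, model in {
        "logistic": LogisticRegression(max_iter=1000),
        "xgboost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
    }.items():
        model.fit(X_tr, y_tr)
        results[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return results


# Hypothetical extracts containing numeric feature columns plus the outcome.
real = pd.read_csv("cervical_fusion_real.csv")
synthetic = pd.read_csv("cervical_fusion_synthetic.csv")
print("real:", c_statistics(real))
print("synthetic:", c_statistics(synthetic))
```

Comparable c-statistics across the two runs would mirror the kind of agreement the abstract reports.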
Project description: Introduction: Global surgery has become a central starting point for addressing a myriad of problems in surgery today; it is therefore necessary to continually evaluate scientific productivity in surgery, including its behavior, validity, and impact. In Latin America, and specifically in Colombia, no studies have analyzed this production. Methods: A retrospective cross-sectional bibliometric study was carried out in which the Colombian Ministry of Science database was consulted, with results validated up to July 2021. In the research-group search section, the keyword "Surgery" was used, and all associated GrupLAC profiles (the platform where the information of the research groups can be found) and their registered products were reviewed. Results: 40 groups were included. Only 5 (12.5%) registered surgery as their main line of research. The great majority of the groups were in the medium-low categories: 50% in category C and 22.5% in category B. The largest share of surgical groups is located in Bogotá (19; 47.5%). The first surgery group in the country was created in 1994 and the most recent in 2017. Across 27 years of surgical research, Colombian surgical research groups registered a total of 4,121 scientific articles, 83 books, 713 book chapters, 2,891 products associated with participation in scientific events, 1,221 theses directed, and 1,670 projects. There was evidence of a high rate of underreporting of data, owing to duplication of products and incomplete registration of data. Conclusions: There is a high rate of underreporting of products and data in the GrupLAC profiles of Colombian surgical research groups. Most of the production is located in the Andean region (Antioquia, Valle del Cauca, and Bogotá) and is predominantly composed of scientific articles and products associated with participation in scientific events. Highlights: • There is a high rate of underreporting of data in the Colombian surgical research groups. • Most of the production is composed of articles and participation in scientific events. • Surgical production in Colombia is centralized in the Andean region.
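A minimal sketch of the kind of bibliometric aggregation described above, assuming the GrupLAC records were exported to a flat table; the file name and column names are hypothetical, not the actual platform export.

```python
# Sketch: count registered products per category and flag products
# registered more than once (duplication noted in the abstract).
import pandas as pd

records = pd.read_csv("gruplac_surgery_groups.csv")  # hypothetical export

# Products per category (e.g., article, book, book chapter, event, thesis).
per_category = records.groupby("product_type")["product_id"].nunique()

# Titles registered by more than one group, one source of the reporting
# inconsistencies described in the abstract.
duplicates = records[records.duplicated(subset=["product_type", "title"], keep=False)]

print(per_category.sort_values(ascending=False))
print(f"{duplicates['title'].nunique()} titles registered more than once")
```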
Project description: Importance: Surgeons make complex, high-stakes decisions under time constraints and uncertainty, with significant effects on patient outcomes. This review describes the weaknesses of traditional clinical decision-support systems and proposes that artificial intelligence should be used to augment surgical decision-making. Observations: Surgical decision-making is dominated by hypothetical-deductive reasoning, individual judgment, and heuristics. These factors can lead to bias, error, and preventable harm. Traditional predictive analytics and clinical decision-support systems are intended to augment surgical decision-making, but their clinical utility is compromised by time-consuming manual data management and suboptimal accuracy. These challenges can be overcome by automated artificial intelligence models fed by livestreamed electronic health record data and mobile device outputs. This approach would require data standardization, advances in model interpretability, careful implementation and monitoring, attention to ethical challenges involving algorithm bias and accountability for errors, and preservation of bedside assessment and human intuition in the decision-making process. Conclusions and relevance: Integration of artificial intelligence with surgical decision-making has the potential to transform care by augmenting the decision to operate, the informed consent process, identification and mitigation of modifiable risk factors, decisions regarding postoperative management, and shared decisions regarding resource use.
Project description: Background: Artificial intelligence (AI) methods and AI-enabled metrics hold tremendous potential to advance surgical education. Our objective was to generate consensus guidance on specific needs for AI methods and AI-enabled metrics for surgical education. Study design: The study included a systematic literature search, a virtual conference, and a 3-round Delphi survey of 40 representative multidisciplinary stakeholders with domain expertise selected through purposeful sampling. The accelerated Delphi process was completed within 10 days. The survey covered overall utility, the anticipated future (10-year time horizon), and applications for surgical training, assessment, and feedback. Consensus was defined as agreement among 80% or more of respondents. We coded survey questions into 11 themes and descriptively analyzed the responses. Results: The respondents included surgeons (40%), engineers (15%), affiliates of industry (27.5%), professional societies (7.5%), and regulatory agencies (7.5%), as well as a lawyer (2.5%). The survey included 155 questions; consensus was achieved on 136 (87.7%). The panel listed 6 deliverables each for AI-enhanced learning curve analytics and surgical skill assessment. For feedback, the panel identified 10 priority deliverables spanning 2-year (n = 2), 5-year (n = 4), and 10-year (n = 4) timeframes. Within 2 years, the panel expects development of methods to recognize anatomy in images of the surgical field and to provide surgeons with performance feedback immediately after an operation. The panel also identified 5 essential elements that should be included in operative performance reports for surgeons. Conclusions: The Delphi panel consensus provides a specific, bold, and forward-looking roadmap for AI methods and AI-enabled metrics for surgical education.
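A minimal sketch of the consensus rule reported above (agreement among 80% or more of respondents), assuming each survey question is stored as a list of binary agree/disagree votes; the question names and vote counts below are hypothetical stand-ins, not the study's data.

```python
# Sketch: flag Delphi consensus (>= 80% agreement) per survey question.
from typing import Dict, List


def consensus(responses: Dict[str, List[int]], threshold: float = 0.80) -> Dict[str, bool]:
    """For each question, report whether the share of 'agree' (1) votes
    meets or exceeds the consensus threshold."""
    return {
        question: (sum(votes) / len(votes)) >= threshold
        for question, votes in responses.items()
        if votes
    }


# Hypothetical panel responses (1 = agree, 0 = disagree).
panel = {
    "Q1_learning_curve_analytics": [1] * 36 + [0] * 4,      # 90% agree
    "Q2_intraop_anatomy_recognition": [1] * 30 + [0] * 10,  # 75% agree
}
flags = consensus(panel)
print(flags)
print(f"consensus reached on {sum(flags.values())}/{len(flags)} questions")
```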
Project description: Atopic dermatitis, food allergy, allergic rhinitis, and asthma are among the most common diseases in childhood. They are heterogeneous diseases, can co-exist in their development, and manifest complex associations with other disorders and with environmental and hereditary factors. Elucidating these intricacies by identifying clinically distinguishable groups and actionable risk factors will allow for a better understanding of the diseases, which will enhance clinical management and benefit society as well as affected individuals and families. Artificial intelligence (AI) is a promising tool in this context, enabling the discovery of meaningful patterns in complex data. Numerous studies within pediatric allergy have used and continue to use AI, primarily to characterize disease endotypes/phenotypes and to develop models that predict future disease outcomes. However, most implementations have used relatively simplistic data from a single source, such as questionnaires, and methodological approaches and reporting are often lacking. This review provides a practical hands-on guide for conducting AI-based studies in pediatric allergy, including (1) an introduction to essential AI concepts and techniques, (2) a blueprint for structuring analysis pipelines (from selection of variables to interpretation of results), and (3) an overview of common pitfalls and remedies. Furthermore, the state of the art in the implementation of AI in pediatric allergy research, as well as implications and future perspectives, are discussed. Conclusion: AI-based solutions will undoubtedly transform pediatric allergy research, as showcased by promising findings and innovative technical solutions, but to fully harness this potential, methodologically robust implementation of more advanced techniques on richer data will be needed. What is known: • Pediatric allergies are heterogeneous and common, inflicting substantial morbidity and societal costs. • The field of artificial intelligence is undergoing rapid development, with increasing implementation in various fields of medicine and research. What is new: • Promising applications of AI in pediatric allergy have been reported, but implementation largely lags behind other fields, particularly with regard to the use of advanced algorithms and non-tabular data; furthermore, inadequate reporting of computational approaches hampers evidence synthesis and critical appraisal. • Multi-center collaborations with multi-omics and rich unstructured data, together with the use of deep learning algorithms, are lacking and will likely provide the most impactful discoveries.
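A minimal sketch of the kind of analysis pipeline such a blueprint covers, from variable selection through cross-validated evaluation, assuming a tabular pediatric-allergy cohort with a binary outcome; the file name, column names, and choice of a random forest are hypothetical, not the review's prescribed workflow.

```python
# Sketch: a reproducible tabular ML pipeline with preprocessing kept
# inside each cross-validation fold. Dataset and columns are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("allergy_cohort.csv")  # hypothetical cohort table
numeric = ["age_months", "ige_total"]
categorical = ["sex", "family_history"]
X, y = df[numeric + categorical], df["asthma_by_age_8"]

pipeline = Pipeline([
    ("preprocess", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("model", RandomForestClassifier(n_estimators=300, random_state=0)),
])

# Fitting preprocessing within each fold avoids data leakage, a common
# pitfall in predictive studies of this kind.
auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```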
Project description: Debilitating hearing loss (HL) affects approximately 6% of the human population. Only 20% of the people in need of a hearing assistive device will eventually seek and acquire one, and the number of people who are satisfied with their hearing aids (HAids) and continue using them in the long term is even lower. Understanding the personal, behavioral, environmental, and other factors that correlate with optimal HAid fitting and with users' experience of HAids is a significant step toward improving patient satisfaction and quality of life while reducing societal and financial burden. In SMART BEAR we are addressing this need by making use of the capacity of modern HAids to provide dynamic logging of their operation and by combining this information with a large amount of information about the medical, environmental, and social context of each HAid user. We are studying hearing rehabilitation through 12-month continuous monitoring of HL patients, collecting data such as participants' demographics, audiometric and medical data, cognitive and mental status, and habits and preferences through a set of medical devices and wearables, as well as through face-to-face and remote clinical assessments and fitting/fine-tuning sessions. Descriptive, AI-based analysis and assessment of the relationships between heterogeneous data and HL-related parameters will help clinical researchers to better understand the overall health profiles of HL patients and to identify patterns or relations that may prove essential for future clinical trials. In addition, the future state and behavior of the patients (e.g., HAid satisfaction and HAid usage) will be predicted with time-dependent machine learning models to assist the clinical researchers in deciding on the nature of interventions. Explainable artificial intelligence (XAI) techniques will be leveraged to better understand the factors that play a significant role in the success of a hearing rehabilitation program, constructing patient profiles. This is a conceptual paper aiming to describe the upcoming data collection process and the proposed framework for providing a comprehensive profile of patients with HL in the context of the EU-funded SMART BEAR project. Such patient profiles can be invaluable in HL treatment, as they can help to identify the characteristics associated with patients dropping out and stopping use of their HAids, with using their HAids sufficiently long during the day, and with being more satisfied with their HAid experience. They can also help decrease the number of remote sessions needed with an audiologist for counseling and/or HAid fine-tuning, and the number of manual changes of the HAid program (an indication of poor sound quality and poor adaptation of the HAid configuration to patients' real needs and daily challenges), leading to reduced healthcare costs.
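A minimal sketch of the prediction-plus-explanation step described above, assuming aggregated monitoring features and a binary dropout label; the file name, feature names, and the use of permutation importance as a simple model-agnostic explanation technique are assumptions for illustration, not the SMART BEAR implementation.

```python
# Sketch: predict HAid dropout from monitoring-derived features and
# inspect which factors drive the prediction. Columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("smart_bear_monthly_features.csv")  # hypothetical export
features = ["daily_usage_hours", "program_changes", "age",
            "hearing_loss_db", "remote_sessions"]
X, y = df[features], df["dropout_within_12m"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic explanation: which monitoring signals most affect the
# dropout prediction on held-out data.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```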
Project description: Kidney diseases are among the major health burdens experienced all over the world and are linked to high economic burden, mortality, and morbidity. The importance of collecting large quantities of health-related data among human cohorts, what scholars refer to as "big data", has been increasingly recognized, with the establishment of large cohorts and the use of electronic health records (EHRs) in nephrology and transplantation. These data are valuable and can potentially be utilized by researchers to advance knowledge in the field. Furthermore, progress in big data is stimulating the flourishing of artificial intelligence (AI), which is an excellent tool for handling and subsequently processing great amounts of data and may be applied to extract more information on the effectiveness of medicine in kidney-related complications for the purpose of more precise phenotyping and outcome prediction. In this article, we discuss the advances and challenges in big data and the use of EHRs and AI, with particular emphasis on their application in nephrology and transplantation.
Project description: Several factors, including advances in computational algorithms, the availability of high-performance computing hardware, and the assembly of large community-based databases, have driven the extensive application of artificial intelligence (AI) in the biomedical domain for nearly 20 years. AI algorithms have attained expert-level performance in cancer research, yet only a few AI-based applications have been approved for use in the real world, and whether AI will eventually be capable of replacing medical experts has been a hot topic. In this article, we first summarize the status of cancer research using AI over the past two decades, including the consensus on an ideal AI workflow and current efforts to incorporate expertise and domain knowledge. Next, we survey the data available for AI in the biomedical domain. We then review the methods and applications of AI in cancer clinical research, categorized by data type: radiographic imaging, cancer genomics, medical records, drug information, and the biomedical literature. Finally, we discuss the challenges in moving AI from theoretical research to real-world cancer research applications and offer perspectives on the future realization of AI participation in cancer treatment.