Project description: Objective: To identify the most frequent obstacles preventing physicians from answering their patient-care questions and the most requested improvements to clinical information resources. Design: Qualitative analysis of questions asked by 48 randomly selected generalist physicians during ambulatory care. Measurements: Frequency of reported obstacles to answering patient-care questions and recommendations from physicians for improving clinical information resources. Results: The physicians asked 1,062 questions but pursued answers to only 585 (55%). The most commonly reported obstacle to the pursuit of an answer was the physician's doubt that an answer existed (52 questions, 11%). Among pursued questions, the most common obstacle was the failure of the selected resource to provide an answer (153 questions, 26%). During audiotaped interviews, physicians made 80 recommendations for improving clinical information resources. For example, they requested comprehensive resources that answer questions likely to occur in practice with emphasis on treatment and bottom-line advice. They asked for help in locating information quickly by using lists, tables, bolded subheadings, and algorithms and by avoiding lengthy, uninterrupted prose. Conclusion: Physicians do not seek answers to many of their questions, often suspecting a lack of usable information. When they do seek answers, they often cannot find the information they need. Clinical resource developers could use the recommendations made by practicing physicians to provide resources that are more useful for answering clinical questions.
Project description: Management of brain metastases is challenging, both because of the historically guarded prognosis and because of evolving, more efficacious treatment paradigms for metastatic cancer. This perspective addresses several of the important and difficult questions that practitioners treating patients with brain tumors face in the clinic. Successfully answering these questions requires knowledge of the clinical evidence, thoughtful discussion of the patient's goals of care, and collaboration in a multidisciplinary setting.
Project description: Electronic cigarettes (ECIGs) are a relatively new class of tobacco products and a subject of much debate for scientists and policymakers worldwide. Objective data that address the ECIG risk-benefit ratio for individual and public health are needed, and addressing this need requires a multidisciplinary approach that spans several areas of psychology as well as chemistry, toxicant inhalation, and physiology. This multidisciplinary approach would benefit from methods that are reliable, valid, and swift. For this reason, we formed a multidisciplinary team to develop methods that could answer questions about ECIGs and other potential modified risk tobacco products. Our team includes scientists with expertise in psychology (clinical, community, and experimental) and other disciplines, including aerosol research, analytical chemistry, biostatistics, engineering, internal medicine, and public health. The psychologists on our team keep other members focused on factors that influence individual behavior, and other team members keep the psychologists aware of other issues, such as product design. Critically, all team members are willing to extend their interests beyond the boundaries of their discipline to collaborate effectively with the shared goal of producing the rigorous science needed to inform empirically based tobacco policy. In addition, our trainees gain valuable knowledge from these collaborations and learn that other disciplines are accessible, exciting, and can enhance their own research. Multidisciplinary work presents challenges: learning other scientists' languages and staying focused on our core mission. Overall, our multidisciplinary team has led to several major findings that inform the scientific, regulatory, and public health communities about ECIGs and their effects.
Project description: Background and aims: Patients frequently have concerns about their disease and find it challenging to obtain accurate information. OpenAI's ChatGPT chatbot (ChatGPT) is a new large language model developed to provide answers to a wide range of questions in various fields. Our aim is to evaluate the performance of ChatGPT in answering patients' questions regarding gastrointestinal health. Methods: To evaluate the performance of ChatGPT in answering patients' questions, we used a representative sample of 110 real-life questions. The answers provided by ChatGPT were rated in consensus by three experienced gastroenterologists. The accuracy, clarity, and efficacy of the answers provided by ChatGPT were assessed. Results: ChatGPT was able to provide accurate and clear answers to patients' questions in some cases, but not in others. For questions about treatments, the average accuracy, clarity, and efficacy scores (on a scale of 1 to 5) were 3.9 ± 0.8, 3.9 ± 0.9, and 3.3 ± 0.9, respectively. For questions about symptoms, the average accuracy, clarity, and efficacy scores were 3.4 ± 0.8, 3.7 ± 0.7, and 3.2 ± 0.7, respectively. For questions about diagnostic tests, the average accuracy, clarity, and efficacy scores were 3.7 ± 1.7, 3.7 ± 1.8, and 3.5 ± 1.7, respectively. Conclusions: While ChatGPT has potential as a source of information, further development is needed. The quality of the information is contingent upon the quality of the online information provided. These findings may be useful for healthcare providers and patients alike in understanding the capabilities and limitations of ChatGPT.
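As an illustration of how per-category consensus ratings like those above can be aggregated into mean ± SD scores, the following is a minimal sketch. The data structure, category labels, and sample values are hypothetical and are not taken from the study; the snippet only demonstrates the arithmetic.

```python
# Minimal sketch: aggregating 1-5 consensus ratings per question category.
# The sample data below is hypothetical and only illustrates the computation.
from collections import defaultdict
from statistics import mean, stdev

# One tuple per question: (category, accuracy, clarity, efficacy)
ratings = [
    ("treatment", 4, 4, 3),
    ("treatment", 5, 4, 4),
    ("symptoms", 3, 4, 3),
    ("symptoms", 4, 3, 3),
    ("diagnostic_test", 4, 4, 4),
    ("diagnostic_test", 2, 2, 2),
]

by_category = defaultdict(lambda: {"accuracy": [], "clarity": [], "efficacy": []})
for category, acc, cla, eff in ratings:
    by_category[category]["accuracy"].append(acc)
    by_category[category]["clarity"].append(cla)
    by_category[category]["efficacy"].append(eff)

# Report mean ± SD per dimension within each category.
for category, scores in by_category.items():
    summary = ", ".join(
        f"{name} {mean(vals):.1f} ± {stdev(vals):.1f}" for name, vals in scores.items()
    )
    print(f"{category}: {summary}")
```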
Project description: Background: Large language models (LLMs), such as ChatGPT (OpenAI), are increasingly used in medicine and supplement standard search engines as information sources. This leads to more "consultations" of LLMs about personal medical symptoms. Objective: This study aims to evaluate ChatGPT's performance in answering clinical case-based questions in otorhinolaryngology (ORL) in comparison to ORL consultants' answers. Methods: We used 41 case-based questions from established ORL study books and past German state examinations for doctors. The questions were answered by both ORL consultants and ChatGPT 3. ORL consultants rated all responses, except their own, on medical adequacy, conciseness, coherence, and comprehensibility using a 6-point Likert scale. They also identified (in a blinded setting) whether the answer was created by an ORL consultant or ChatGPT. Additionally, the character count was compared. Given the rapidly evolving pace of the technology, a comparison between responses generated by ChatGPT 3 and ChatGPT 4 was included to give an insight into the evolving potential of LLMs. Results: Ratings in all categories were significantly higher for ORL consultants (P<.001). Although inferior to the scores of the ORL consultants, ChatGPT's scores were relatively higher in the semantic categories (conciseness, coherence, and comprehensibility) than in medical adequacy. ORL consultants correctly identified ChatGPT as the source in 98.4% (121/123) of cases. ChatGPT's answers had a significantly higher character count compared to ORL consultants (P<.001). Comparison between responses generated by ChatGPT 3 and ChatGPT 4 showed a slight improvement in medical accuracy as well as better coherence of the answers provided. In contrast, neither conciseness (P=.06) nor comprehensibility (P=.08) improved significantly, despite a significant 52.5% increase in the mean character count (from 964 to 1470 characters; P<.001). Conclusions: While ChatGPT provided longer answers to medical problems, medical adequacy and conciseness were significantly lower compared to ORL consultants' answers. LLMs have potential as augmentative tools for medical care, but their "consultation" for medical problems carries a high risk of misinformation, as their high semantic quality may mask contextual deficits.
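To make the character-count comparison above concrete, here is a small sketch that reproduces the reported 52.5% relative increase from the stated means (964 and 1470 characters) and shows one way such a comparison could be tested. The per-answer character counts and the choice of Welch's t-test are assumptions for illustration only; the abstract does not state which test the authors used.

```python
# Sketch: relative increase in mean answer length, ChatGPT 3 vs. ChatGPT 4,
# using the means reported in the abstract (964 vs. 1470 characters).
mean_chars_gpt3 = 964
mean_chars_gpt4 = 1470
relative_increase = (mean_chars_gpt4 - mean_chars_gpt3) / mean_chars_gpt3
print(f"Relative increase: {relative_increase:.1%}")  # ~52.5%

# Illustrative significance test on hypothetical per-answer character counts.
# Welch's t-test is shown only as one plausible choice of test.
from scipy import stats

chars_gpt3 = [890, 1020, 940, 1010, 960]     # hypothetical values
chars_gpt4 = [1400, 1550, 1380, 1520, 1500]  # hypothetical values
t_stat, p_value = stats.ttest_ind(chars_gpt3, chars_gpt4, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```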
Project description: Our team explored the utility of unpaid versions of 3 artificial intelligence chatbots in offering patient-facing responses to questions about 5 common dermatological diagnoses, and highlighted the strengths and limitations of the different chatbots, demonstrating that chatbots show the most potential when used in tandem with a dermatologist's diagnosis.
Project description: Perceived knowledge gaps in general practice are not well documented but must be understood to ensure relevant and timely evidence for busy general practitioners (GPs) that reflects their diverse and changing needs. The aim of this study was to classify the types of questions submitted by Australian GPs to an evidence-based practice information service using established and inductive coding systems. We analysed 126 clinical questions submitted by 53 Australian GPs over a 1.5-year period. Questions were coded by two independent coders using the International Classification of Primary Care (ICPC-2 PLUS) and Ely and colleagues' generic question taxonomy. Inductive qualitative content analysis was also used to identify perceived knowledge gaps. Treatment (71%), diagnosis (15%) and epidemiology (9%) were the most common categories of questions. Using the ICPC-2 classification, questions were most commonly coded to the endocrine/metabolic and nutritional chapter heading, followed by general and unspecified, digestive and musculoskeletal. Seventy per cent of all questions related to the need to stay up to date with the evidence or to be informed about new tests or treatments (including complementary and alternative therapies). These findings suggest that current guideline formats for common clinical problems may not meet the knowledge demands of GPs and that there is a gap in access to evidence updates on new tests, treatments and complementary and alternative therapies. Better systems for 'pulling' real-time questions from GPs could inform the 'push' of more relevant and timely evidence for use in the clinical encounter.
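As a sketch of how dual-coded questions like those described above can be tallied into category frequencies, the snippet below assumes each question has already been assigned a generic question type and an ICPC-2 chapter by the coders. The records and field names are hypothetical placeholders, not data from the study.

```python
# Sketch: tallying coded clinical questions into frequency tables.
# Sample records are hypothetical; the study applied ICPC-2 PLUS and the
# Ely et al. generic question taxonomy via two independent coders.
from collections import Counter

questions = [
    {"generic_type": "treatment", "icpc2_chapter": "Endocrine/metabolic and nutritional"},
    {"generic_type": "treatment", "icpc2_chapter": "General and unspecified"},
    {"generic_type": "diagnosis", "icpc2_chapter": "Digestive"},
    {"generic_type": "epidemiology", "icpc2_chapter": "Musculoskeletal"},
]

type_counts = Counter(q["generic_type"] for q in questions)
chapter_counts = Counter(q["icpc2_chapter"] for q in questions)

total = len(questions)
for question_type, count in type_counts.most_common():
    print(f"{question_type}: {count} ({count / total:.0%})")
for chapter, count in chapter_counts.most_common():
    print(f"{chapter}: {count}")
```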
Project description: Evidence suggests that research protocols often lack important information on study design, which hinders external review. The study protocol should provide an adequate explanation of why the proposed study methodology is appropriate for the question posed, why the study design is likely to answer the research question, and why it is the best approach. It is especially important that researchers explain why the treatment difference sought is worthwhile to patients, and they should reference consultations with the public and patient groups as well as the existing literature. Moreover, the study design should be underpinned by a systematic review of the existing evidence, which should be included in the research protocol. The Health Research Authority, in collaboration with partners, has published guidance entitled 'Specific questions that need answering when considering the design of clinical trials'. The guidance will help those designing research and those reviewing it to address key issues.
Project description: In December 2019, China reported the first cases of the coronavirus disease 2019 (COVID-19). This disease, caused by the severe acute respiratory syndrome-related coronavirus 2 (SARS-CoV-2), has developed into a pandemic. To date, it has resulted in ~9 million confirmed cases and caused almost 500 000 related deaths worldwide. Unequivocally, the COVID-19 pandemic is the gravest health and socioeconomic crisis of our time. In this context, numerous questions have emerged demanding basic scientific information and evidence-based medical advice on SARS-CoV-2 and COVID-19. Although the majority of patients show a very mild, self-limiting viral respiratory disease, many clinical manifestations in severely affected patients are unique to COVID-19, such as severe lymphopenia and eosinopenia, extensive pneumonia, a "cytokine storm" leading to acute respiratory distress syndrome, endothelitis, thromboembolic complications, and multiorgan failure. The epidemiologic features of COVID-19 are distinctive and have changed throughout the pandemic. Vaccine and drug development studies and clinical trials are growing rapidly at an unprecedented speed. However, basic and clinical research on COVID-19-related topics should be based on more coordinated, high-quality studies. This paper answers pressing questions, formulated by young clinicians and scientists, on SARS-CoV-2, COVID-19, and allergy, focusing on the following topics: virology, immunology, diagnosis, management of patients with allergic disease and asthma, treatment, clinical trials, drug discovery, vaccine development, and epidemiology. A total of 150 questions were answered by experts in the field, providing a comprehensive and practical overview of COVID-19 and allergic disease.
Project description: Background/aims: Patients with cirrhosis and hepatocellular carcinoma (HCC) require extensive and personalized care to improve outcomes. ChatGPT (Generative Pre-trained Transformer), a large language model, holds the potential to provide professional yet patient-friendly support. We aimed to examine the accuracy and reproducibility of ChatGPT in answering questions regarding knowledge, management, and emotional support for cirrhosis and HCC. Methods: ChatGPT's responses to 164 questions were independently graded by two transplant hepatologists, with disagreements resolved by a third reviewer. The performance of ChatGPT was also assessed using two published questionnaires and 26 questions formulated from the quality measures of cirrhosis management. Finally, its emotional support capacity was tested. Results: We showed that ChatGPT regurgitated extensive knowledge of cirrhosis (79.1% correct) and HCC (74.0% correct), but only small proportions of its answers (47.3% in cirrhosis, 41.1% in HCC) were labeled as comprehensive. Performance was better in the domains of basic knowledge, lifestyle, and treatment than in diagnosis and preventive medicine. For the quality measures, the model answered 76.9% of questions correctly but failed to specify decision-making cut-offs and treatment durations. ChatGPT lacked knowledge of regional variations in guidelines, such as HCC screening criteria. However, it provided practical and multifaceted advice to patients and caregivers regarding next steps and adjusting to a new diagnosis. Conclusion: We analyzed the areas of robustness and limitations of ChatGPT's responses on the management of cirrhosis and HCC and relevant emotional support. ChatGPT may have a role as an adjunct informational tool for patients and physicians to improve outcomes.
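A minimal sketch of the two-grader-plus-adjudicator workflow described above: each response is graded independently by two reviewers, disagreements go to a third, and the share of correct and comprehensive answers is then computed. The grade labels and sample grades are hypothetical and are used only to illustrate the adjudication logic.

```python
# Sketch: resolving two independent grades with a third adjudicator, then
# summarizing the proportions of correct and comprehensive answers.
# All grades below are hypothetical placeholders.

def final_grade(grader_a: str, grader_b: str, adjudicator: str) -> str:
    """Use the adjudicator's grade only when the two primary graders disagree."""
    return grader_a if grader_a == grader_b else adjudicator

# One tuple per response: (grader A, grader B, adjudicator)
responses = [
    ("comprehensive", "comprehensive", "comprehensive"),
    ("correct", "comprehensive", "correct"),
    ("incorrect", "correct", "correct"),
    ("correct", "correct", "correct"),
]

finals = [final_grade(a, b, c) for a, b, c in responses]
n = len(finals)
correct_or_better = sum(g in ("correct", "comprehensive") for g in finals)
comprehensive = sum(g == "comprehensive" for g in finals)
print(f"Correct (incl. comprehensive): {correct_or_better / n:.1%}")
print(f"Comprehensive: {comprehensive / n:.1%}")
```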