
Dataset Information


Large language models for generating medical examinations: systematic review.


ABSTRACT:

Background

Writing multiple choice questions (MCQs) for medical exams is challenging: it requires extensive medical knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating medical MCQs.

Methods

The authors searched MEDLINE for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. Risk of bias was evaluated using a tailored QUADAS-2 tool. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

Results

Overall, eight studies published between April 2023 and October 2023 were included. Six studies used ChatGPT-3.5, while two employed GPT-4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate their validity. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies presented some faulty questions that were deemed inappropriate for medical exams, and some questions required additional modification to qualify. Two studies were at high risk of bias.

Conclusions

LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored. Further study in this field is essential, and more conclusive evidence is needed. Until then, LLMs may serve as a supplementary tool for writing medical examinations.

SUBMITTER: Artsi Y 

PROVIDER: S-EPMC10981304 | biostudies-literature | 2024 Mar

REPOSITORIES: biostudies-literature


Publications

Large language models for generating medical examinations: systematic review.

Yaara Artsi, Vera Sorin, Eli Konen, Benjamin S. Glicksberg, Girish Nadkarni, Eyal Klang

BMC Medical Education, 2024 Mar 29, issue 1



Similar Datasets

| S-EPMC11806300 | biostudies-literature
| S-EPMC11866893 | biostudies-literature
| S-EPMC11669866 | biostudies-literature
| S-EPMC11745146 | biostudies-literature
| S-EPMC11530718 | biostudies-literature
| S-EPMC11659327 | biostudies-literature
| S-EPMC11228775 | biostudies-literature
| S-EPMC10950983 | biostudies-literature
| S-EPMC11848527 | biostudies-literature
| S-EPMC10449915 | biostudies-literature