Assessing ChatGPT's theoretical knowledge and prescriptive accuracy in bacterial infections: a comparative study with infectious diseases residents and specialists.

Details

Serval ID
serval:BIB_019BD30F4EC5
Type
Article: article from journal or magazine.
Collection
Publications
Title
Assessing ChatGPT's theoretical knowledge and prescriptive accuracy in bacterial infections: a comparative study with infectious diseases residents and specialists.
Journal
Infection
Author(s)
De Vito A., Geremia N., Marino A., Bavaro D.F., Caruana G., Meschiari M., Colpani A., Mazzitelli M., Scaglione V., Venanzi Rullo E., Fiore V., Fois M., Campanella E., Pistarà E., Faltoni M., Nunnari G., Cattelan A., Mussini C., Bartoletti M., Vaira L.A., Madeddu G.
ISSN
1439-0973 (Electronic)
ISSN-L
0300-8126
Publication state
In Press
Peer-reviewed
Yes
Language
English
Notes
Publication types: Journal Article
Publication Status: ahead of print
Abstract
Advancements in Artificial Intelligence (AI) have made platforms like ChatGPT increasingly relevant in medicine. This study assesses ChatGPT's utility in addressing bacterial infection-related questions and antibiogram-based clinical cases.
This study was a collaborative effort between infectious disease (ID) specialists and residents. A group of experts formulated six true/false questions, six open-ended questions, and six clinical cases with antibiograms for each of four types of infection (endocarditis, pneumonia, intra-abdominal infections, and bloodstream infections), for a total of 96 questions. The questions were submitted to four senior ID residents and four ID specialists and were inputted into ChatGPT-4 and a trained version of ChatGPT-4. A total of 720 responses were obtained and reviewed by a blinded panel of experts in antibiotic treatment, who evaluated the responses for accuracy and completeness, the ability to identify correct resistance mechanisms from antibiograms, and the appropriateness of antibiotic prescriptions.
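For illustration only, the following is a minimal sketch of how such a question battery could be submitted programmatically to a GPT-4 model, assuming the OpenAI Python client. The abstract does not specify whether the authors used the ChatGPT web interface or the API, so the model name, system prompt, and workflow below are hypothetical assumptions, not the study's actual protocol.

```python
# Hypothetical reconstruction: batch-submitting questions to a GPT-4 model
# through the OpenAI Python client (openai >= 1.0). The model name, system
# prompt, and client calls are illustrative assumptions, not the authors'
# documented method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder items standing in for the study's true/false, open-ended,
# and antibiogram-based clinical-case questions.
questions = [
    "True or false: ...",
    "Open-ended: ...",
    "Clinical case with antibiogram: ...",
]

responses = []
for question in questions:
    completion = client.chat.completions.create(
        model="gpt-4",  # assumed; the study compared ChatGPT-4 and a trained variant
        messages=[
            {"role": "system",
             "content": "You are an infectious diseases consultant."},
            {"role": "user", "content": question},
        ],
    )
    responses.append(completion.choices[0].message.content)

# In the study, the collected answers were then scored by a blinded
# expert panel for accuracy, completeness, and prescription appropriateness.
```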
No significant difference was noted among the four groups on the true/false questions, with approximately 70% correct answers in each. Both the trained ChatGPT-4 and standard ChatGPT-4 gave more accurate and complete answers to the open-ended questions than the residents and specialists. In the clinical cases, ChatGPT-4 showed lower accuracy in recognizing the correct resistance mechanisms. ChatGPT-4 tended not to prescribe newer antibiotics such as cefiderocol or imipenem/cilastatin/relebactam, favoring less recommended options such as colistin. Both the trained ChatGPT-4 and ChatGPT-4 recommended longer-than-necessary treatment durations (p-value = 0.022).
This study highlights ChatGPT's capabilities and limitations in medical decision-making, specifically regarding bacterial infections and antibiogram analysis. While ChatGPT demonstrated proficiency in answering theoretical questions, it did not consistently align with expert decisions in clinical case management. Despite these limitations, the potential of ChatGPT as a supportive tool in ID education and preliminary analysis is evident. However, it should not replace expert consultation, especially in complex clinical decision-making.
Keywords
Abdominal infection, Antibiotic resistance, Antimicrobial stewardship, Artificial intelligence, Bacterial infections, Bloodstream infection, ChatGPT, Endocarditis, Infectious diseases, Pneumonia
Open Access
Yes
Create date
19/07/2024 7:44
Last modification date
13/08/2024 6:48