AI in obstetrics: Evaluating residents' capabilities and interaction strategies with ChatGPT.
Details
Download: 39326228.pdf (415.18 KB)
Status: Public
Version: Final published version
Licence: CC BY 4.0
Serval ID
serval:BIB_E5FFE38A9EAB
Type
Article: journal or magazine article.
Collection
Publications
Institution
Title
AI in obstetrics: Evaluating residents' capabilities and interaction strategies with ChatGPT.
Journal
European journal of obstetrics, gynecology, and reproductive biology
ISSN
1872-7654 (Electronic)
ISSN-L
0301-2115
Editorial status
Published
Publication date
11/2024
Peer-reviewed
Yes
Volume
302
Pages
238-241
Language
English
Notes
Publication types: Journal Article ; Observational Study
Publication Status: ppublish
Abstract
In line with the digital transformation trend in medical training, students may resort to artificial intelligence (AI) for learning. This study assessed the interaction between obstetrics residents and ChatGPT during clinically oriented summative evaluations related to acute hepatic steatosis of pregnancy, and their self-reported competencies in information technology (IT) and AI. The participants in this semi-qualitative observational study were 14 obstetrics residents from two university hospitals. Students' queries were categorized into three distinct types: third-party enquiries; search-engine-style queries; and GPT-centric prompts. Responses were compared against a standardized answer produced by ChatGPT with a Delphi-developed expert prompt. Data analysis employed descriptive statistics and correlation analysis to explore the relationship between AI/IT skills and response accuracy. The study participants showed moderate IT proficiency but low AI proficiency. Interaction with ChatGPT regarding clinical signs of acute hepatic steatosis gravidarum revealed a preference for third-party questioning, resulting in only 21% accurate responses due to misinterpretation of medical acronyms. No correlation was found between AI response accuracy and the residents' self-assessed IT or AI skills, with most expressing dissatisfaction with their AI training. This study underlines the discrepancy between perceived and actual AI proficiency, highlighted by clinically inaccurate yet plausible AI responses - a manifestation of the 'stochastic parrot' phenomenon. These findings advocate for the inclusion of structured AI literacy programmes in medical education, focusing on prompt engineering. These academic skills are essential to exploit AI's potential in obstetrics and gynaecology. The ultimate aim is to optimize patient care in AI-augmented health care, and prevent misleading and unsafe knowledge acquisition.
Keywords
Humans, Obstetrics/education, Artificial Intelligence, Internship and Residency, Female, Pregnancy, Clinical Competence, Adult, Medical Education, Obstetrics, Prompt Engineering
PubMed
Web of Science
Open Access
Yes
Record created
30/09/2024 14:25
Record last modified
29/10/2024 7:35