The Clinicians' Guide to Large Language Models: A General Perspective With a Focus on Hallucinations.
Details
Status: Public
Version: Final published version
License: CC BY 4.0
ID Serval
serval:BIB_0BB62F7C554D
Type
Article: article in a journal or magazine.
Collection
Publications
Institution
Title
The Clinicians' Guide to Large Language Models: A General Perspective With a Focus on Hallucinations.
Journal
Interactive Journal of Medical Research
ISSN
1929-073X (Print)
ISSN-L
1929-073X
Editorial status
Published
Publication date
28/01/2025
Peer-reviewed
Yes
Volume
14
Pages
e59823
Language
English
Notes
Publication types: Journal Article
Publication Status: epublish
Abstract
Large language models (LLMs) are artificial intelligence tools with the prospect of profoundly changing how we practice all aspects of medicine. Given the considerable potential of LLMs in medicine and the interest of many health care stakeholders in implementing them into routine practice, it is essential that clinicians be aware of the basic risks associated with the use of these models. Chief among these risks is their potential to create hallucinations. Hallucinations (false information) generated by LLMs arise from multiple causes, including factors related to the training dataset as well as the models' autoregressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, as well as a lack of reliability if the models are not used properly. To reduce this risk, we developed a general technical framework for approaching LLMs in general clinical practice, as well as for implementation on a larger institutional scale.
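The abstract attributes hallucinations in part to the autoregressive nature of LLMs. The following is a minimal, self-contained sketch (not from the article; the vocabulary, scoring function, and all names are hypothetical) of that mechanism: at every step the decoder must commit to some next token, even when its probability distribution over the vocabulary is nearly flat, so fluent but unsupported text can be emitted with no warning.

```python
import math
import random

# Toy illustration of autoregressive decoding (hypothetical stand-in, not a
# real LLM): the sampler always emits a token, even under high uncertainty.

VOCAB = ["the", "patient", "has", "pneumonia", "appendicitis", "no", "fever"]

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fake_logits(context):
    # Stand-in for a real model's forward pass; here just seeded random scores.
    rng = random.Random(hash(tuple(context)) & 0xFFFF)
    return [rng.uniform(0.0, 2.0) for _ in VOCAB]

def generate(prompt, steps=5, temperature=1.0):
    tokens = list(prompt)
    for _ in range(steps):
        probs = softmax([l / temperature for l in fake_logits(tokens)])
        # Entropy as a rough per-step uncertainty signal: when it is high,
        # the "model" has little basis for its choice, yet it still commits
        # to a token. This is the failure mode behind many hallucinations.
        entropy = -sum(p * math.log(p) for p in probs)
        token = random.choices(VOCAB, weights=probs)[0]
        tokens.append(token)
        print(f"emitted {token!r:15} entropy={entropy:.2f}")
    return " ".join(tokens)

print(generate(["the", "patient"]))
```

Some production LLM APIs expose comparable per-token probabilities (eg, log probabilities), which an institutional framework could surface as a rough confidence signal rather than presenting all generated text as equally reliable.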
Keywords
AI, LLM, artificial intelligence, artificial intelligence tool, clinical informatics, computer-assisted decision-making, decision support, decision support techniques, electronic data system, false information, hallucinations, large language model, medical informatics, technical framework
Pubmed
Web of Science
Open Access
Yes
Record created
31/01/2025 16:30
Record last modified
27/02/2025 08:07