Beyond the myth of neutrality: Addressing linguistic biases in medical AI
Details
Serval ID
serval:BIB_E3477157F302
Type
Conference proceedings (part): original contribution to the scientific literature, published on the occasion of a scientific conference, in a proceedings volume or in a special issue of a recognised journal.
Subtype
Abstract (presentation summary): short article presenting the key elements of a poster or oral presentation given at a scientific conference.
Collection
Publications
Institution
Title
Beyond the myth of neutrality: Addressing linguistic biases in medical AI
Conference title
Nordic Biomedical Law Conference - Bergen 2023
Editorial status
Published
Publication date
2023
Language
English
Abstract
The digitalisation of medicine is both an opportunity and a risk for patients. The democratisation of LLMs such as ChatGPT makes it easier for individuals to find medical information online. However, these digital tools need to be monitored, in both their development and their deployment, against the demonstrated risks of providing false information and propagating bias. There is an abundant literature on algorithmic biases. Scholars have highlighted the spread of biases related to gender, origin or other legally protected characteristics. Less attention has been paid to linguistic bias. Yet the interactions between culture and medicine are numerous. Failure to consider cultural factors, including language or dialect, can lead to inequitable health predictions for the populations concerned. Beyond biases, the acceptability of digital medicine should imply that systems are developed with regard to the cultural and linguistic norms of the target population. The presentation will contribute to deconstructing the myth of neutral AI by establishing the link between linguistic bias and health inequity on the one hand, and the myth of AI "neutralization" on the other, and by advocating for the active integration of cultural, especially linguistic, factors into algorithmic systems in order to build adapted and thus sustainable medical AI.
Record created
19/08/2024 13:25
Record last modified
17/12/2024 7:09