Beyond the myth of neutrality: Addressing linguistic biases in medical AI
Details
Serval ID
serval:BIB_E3477157F302
Type
Inproceedings: an article in a conference proceedings.
Publication sub-type
Abstract (Abstract): short summary of the essential elements presented during a scientific conference, lecture or poster.
Collection
Publications
Institution
Title
Beyond the myth of neutrality: Addressing linguistic biases in medical AI
Title of the conference
Nordic Biomedical Law Conference - Bergen 2023
Publication state
Published
Issued date
2023
Language
English
Abstract
The digitalisation of medicine is both an opportunity and a risk for patients. The
democratisation of LLMs such as ChatGPT makes it easier for individuals to find
medical information online. However, these digital tools need to be monitored, both in their
development and their deployment, against the demonstrated risks of providing false
information and propagating bias. There is an abundant literature on algorithmic biases.
Scholars have highlighted the spread of biases related to gender, origin or other
legally protected characteristics. Less attention has been paid to linguistic bias. Yet the
interactions between culture and medicine are numerous. Failure to consider cultural factors,
including language or dialect, can lead to inequitable health predictions for the populations
concerned. Beyond biases, the acceptability of digital medicine should imply that systems are
developed with regard to the cultural and linguistic norms of the target population. The
presentation will contribute to deconstructing the myth of neutral AI by establishing the link
between linguistic bias and health inequity on the one hand, and the myth of AI "neutralization"
on the other hand, by advocating for the active integration of cultural, especially linguistic,
factors in algorithmic systems in order to build adapted and thus sustainable medical AI.
Create date
19/08/2024 13:25
Last modification date
17/12/2024 7:09