Tunable Privacy Risk Evaluation of Generative Adversarial Networks

Details

Resource 1: 39176604.pdf (201.73 KB)
Status: Public
Version: Final published version
Licence: CC BY-NC 4.0
Serval ID
serval:BIB_1E8620478EEF
Type
Book part
Subtype
Chapter: chapter or section
Collection
Publications
Institution
Title
Tunable Privacy Risk Evaluation of Generative Adversarial Networks
Book title
Digital Health and Informatics Innovations for Sustainable Health Care Systems
Authors
Kaabachi Bayrem, Briki Farah, Kulynych Bogdan, Despraz Jérémie, Raisaro Jean Louis
Publisher
IOS Press
ISBN
9781643685335
ISSN
0926-9630
1879-8365
ISSN-L
0926-9630
Editorial status
Published
Publication date
22/08/2024
Peer-reviewed
Yes
Volume
316
Series
Studies in Health Technology and Informatics
Pages
1233-1237
Language
English
Abstract
Generative machine learning models such as Generative Adversarial Networks (GANs) have been shown to be especially successful in generating realistic synthetic data in image and tabular domains. However, it has been shown that such generative models, as well as the generated synthetic data, can reveal information contained in their privacy-sensitive training data, and therefore must be carefully evaluated before being used. The gold-standard method for estimating such privacy leakage is simulating membership inference attacks (MIAs), in which an attacker attempts to learn whether a given sample was part of the training data of a generative model. The state-of-the-art MIAs against generative models, however, rely on strong assumptions (knowledge of the exact training dataset size) or require substantial computational power (to retrain many "surrogate" generative models), which makes them hard to use in practice. In this work, we propose a technique for evaluating privacy risks in GANs which exploits the outputs of the discriminator part of the standard GAN architecture. We evaluate the performance of our attacks in two synthetic image generation applications in radiology and ophthalmology, showing that our technique provides a more complete picture of the threats by performing worst-case privacy risk estimation and by identifying attacks with higher precision than the prior work.
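To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation; all function names, the score distributions, and the toy data are hypothetical) of a membership inference attack that ranks candidate records by the discriminator's output, exploiting the tendency of an overfit discriminator to assign higher "realness" scores to its own training samples:

```python
import numpy as np

def discriminator_score_mia(scores, n_guesses):
    """Rank candidate records by discriminator output and flag the
    top-n_guesses as predicted training-set members (a higher score
    means the discriminator finds the record more 'real')."""
    order = np.argsort(scores)[::-1]               # descending by score
    predicted = np.zeros(len(scores), dtype=bool)
    predicted[order[:n_guesses]] = True
    return predicted

def attack_precision(predicted, is_member):
    """Fraction of flagged records that are true training members."""
    return is_member[predicted].mean()

# Toy data: simulate an overfit discriminator that tends to score
# training members higher than non-members.
rng = np.random.default_rng(0)
is_member = np.concatenate([np.ones(50, bool), np.zeros(50, bool)])
scores = np.where(is_member,
                  rng.normal(0.8, 0.1, 100),   # members: higher scores
                  rng.normal(0.5, 0.1, 100))   # non-members: lower scores
pred = discriminator_score_mia(scores, n_guesses=10)
print(attack_precision(pred, is_member))
```

Restricting the attack to a small number of high-confidence guesses (`n_guesses`) mirrors the precision-oriented, worst-case evaluation the abstract describes: rather than classifying every record, the attacker flags only the records it is most certain about.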
Pubmed
Open Access
Yes
Record created
30/08/2024 13:42
Record last modified
05/09/2024 9:03