Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology.

Details

Serval ID
serval:BIB_1975474395B6
Type
Article: journal or magazine article.
Collection
Publications
Institution
Title
Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology.
Journal
IEEE Transactions on Medical Imaging
Authors
Vray G., Tomar D., Bozorgtabar B., Thiran J.P.
ISSN
1558-254X (Electronic)
ISSN-L
0278-0062
Editorial status
Published
Publication date
05/2024
Peer-reviewed
Yes
Volume
43
Issue
5
Pages
2021-2032
Language
English
Notes
Publication types: Journal Article ; Research Support, Non-U.S. Gov't
Publication Status: ppublish
Abstract
Developing computational pathology models is essential for reducing manual tissue typing from whole slide images, transferring knowledge from the source domain to an unlabeled, shifted target domain, and identifying unseen categories. We propose a practical setting by addressing the above-mentioned challenges in one fell swoop, i.e., source-free open-set domain adaptation. Our methodology focuses on adapting a pre-trained source model to an unlabeled target dataset that encompasses both closed-set and open-set classes. Beyond addressing the semantic shift of unknown classes, our framework also deals with a covariate shift, which manifests as variations in color appearance between source and target tissue samples. Our method hinges on distilling knowledge from a self-supervised vision transformer (ViT), drawing guidance from either robustly pre-trained transformer models or histopathology datasets, including those from the target domain. In pursuit of this, we introduce a novel style-based adversarial data augmentation, serving as hard positives for self-training a ViT, resulting in highly contextualized embeddings. Following this, we cluster semantically akin target images, with the source model offering weak pseudo-labels of uncertain confidence. To enhance this process, we present the closed-set affinity score (CSAS), which aims to correct the confidence levels of these pseudo-labels and to compute weighted class prototypes within the contextualized embedding space. Our approach establishes itself as state-of-the-art across three public histopathological datasets for colorectal cancer assessment. Notably, our self-training method seamlessly integrates with open-set detection methods, resulting in enhanced performance in both closed-set and open-set recognition tasks.
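The abstract describes confidence-corrected pseudo-labels used to build weighted class prototypes in an embedding space. As a rough illustration of that general mechanism only (not the paper's actual CSAS or training pipeline; the function names and data here are hypothetical), a minimal NumPy sketch of confidence-weighted prototypes and prototype-based pseudo-label refinement:

```python
import numpy as np

def weighted_prototypes(embeddings, pseudo_labels, confidences, num_classes):
    """Build one prototype per class as the confidence-weighted mean of its
    member embeddings, then L2-normalize for cosine similarity."""
    protos = np.zeros((num_classes, embeddings.shape[1]))
    for c in range(num_classes):
        mask = pseudo_labels == c
        if mask.any():
            w = confidences[mask][:, None]          # per-sample weights
            protos[c] = (w * embeddings[mask]).sum(0) / w.sum()
    norms = np.linalg.norm(protos, axis=1, keepdims=True)
    return protos / np.clip(norms, 1e-12, None)

def refine_pseudo_labels(embeddings, protos):
    """Reassign each sample to its nearest prototype by cosine similarity."""
    emb = embeddings / np.clip(
        np.linalg.norm(embeddings, axis=1, keepdims=True), 1e-12, None)
    sims = emb @ protos.T
    return sims.argmax(1), sims.max(1)

# Toy 2-D example: the last sample clearly belongs to class 0 but carries a
# low-confidence pseudo-label of class 1; refinement corrects it.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9], [0.95, 0.05]])
labels = np.array([0, 0, 1, 1, 1])
conf = np.array([1.0, 1.0, 1.0, 1.0, 0.1])
protos = weighted_prototypes(emb, labels, conf, num_classes=2)
new_labels, affinities = refine_pseudo_labels(emb, protos)
```

Because the mislabeled sample enters the class-1 prototype with weight 0.1, it barely pulls that prototype toward it, and the nearest-prototype step reassigns it to class 0.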
Keywords
Humans; Algorithms; Image Interpretation, Computer-Assisted/methods; Databases, Factual; Supervised Machine Learning
Pubmed
Web of science
Record created
24/05/2024 16:03
Record last modified
25/05/2024 7:12
Usage data