A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers.

Details

Resource 1: 40233316.pdf (26,427.02 KB)
State: Public
Version: Final published version
License: CC BY 4.0
Serval ID
serval:BIB_298B2FA1D8C4
Type
Article: article from journal or magazine.
Collection
Publications
Institution
Title
A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers.
Journal
PLOS digital health
Author(s)
Rahnfeld J., Naouar M., Kalweit G., Boedecker J., Dubruc E., Kalweit M.
ISSN
2767-3170 (Electronic)
ISSN-L
2767-3170
Publication state
Published
Issued date
04/2025
Peer-reviewed
Yes
Volume
4
Number
4
Pages
e0000792
Language
English
Notes
Publication types: Journal Article
Publication Status: epublish
Abstract
Recent advancements in deep learning have shown promise in enhancing the performance of medical image analysis. In pathology, automated whole slide imaging has transformed clinical workflows by streamlining routine tasks and supporting diagnosis and prognosis. However, the lack of transparency of deep learning models, often described as black boxes, poses a significant barrier to their clinical adoption. This study evaluates various explainability methods for Vision Transformers, assessing their effectiveness in explaining the rationale behind the model's classification predictions on histopathological images. Using a Vision Transformer trained on the publicly available CAMELYON16 dataset, comprising 399 whole slide images of lymph node metastases from patients with breast cancer, we conducted a comparative analysis of a diverse range of state-of-the-art techniques for generating explanations through heatmaps, including Attention Rollout, Integrated Gradients, RISE, and ViT-Shapley. Our findings reveal that Attention Rollout and Integrated Gradients are prone to artifacts, while RISE and particularly ViT-Shapley generate more reliable and interpretable heatmaps. ViT-Shapley also demonstrated faster runtime and superior performance in insertion and deletion metrics. These results suggest that integrating ViT-Shapley-based heatmaps into pathology reports could enhance trust and scalability in clinical workflows, facilitating the adoption of explainable artificial intelligence in pathology.
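For orientation, the sketch below illustrates Attention Rollout (Abnar & Zuidema, 2020), the simplest of the heatmap methods compared in the abstract. It is a minimal reconstruction of the general technique, not the authors' code: it assumes per-layer attention matrices from a ViT (e.g. as returned by a model with attention outputs enabled), averages them over heads, adds the identity to account for residual connections, and multiplies across layers to obtain a patch-level relevance map for the [CLS] token.

```python
# Minimal sketch of Attention Rollout; function and variable names are
# illustrative assumptions, not taken from the paper's implementation.
import torch

def attention_rollout(attentions):
    """Propagate attention through layers to estimate patch relevance.

    attentions: list of per-layer tensors of shape (num_heads, tokens, tokens),
    ordered from the first to the last transformer layer.
    """
    result = None
    for attn in attentions:
        # Average attention over heads: (tokens, tokens)
        attn = attn.mean(dim=0)
        # Add identity for the residual connection, then renormalize rows
        attn = attn + torch.eye(attn.size(-1))
        attn = attn / attn.sum(dim=-1, keepdim=True)
        # Compose with the rollout accumulated from earlier layers
        result = attn if result is None else attn @ result
    # Relevance of each patch token for the [CLS] token (index 0),
    # dropping the CLS-to-CLS entry; reshape to the patch grid for a heatmap.
    return result[0, 1:]
```

In a typical use, the returned vector would be reshaped to the ViT's patch grid (e.g. 14x14 for a 224-pixel input with 16-pixel patches) and upsampled onto the tile to form the heatmap; the artifacts reported in the study arise because raw attention composed this way does not necessarily track the classifier's decision evidence.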
Pubmed
Open Access
Yes
Create date
16/04/2025 13:25
Last modification date
17/04/2025 7:12