Fusing learned representations from Riesz Filters and Deep CNN for lung tissue classification.

Details

Serval ID
serval:BIB_F78834B85B25
Type
Article: an article in a journal or magazine.
Collection
Publications
Titre
Fusing learned representations from Riesz Filters and Deep CNN for lung tissue classification.
Journal
Medical image analysis
Authors
Joyseeree R., Otálora S., Müller H., Depeursinge A.
ISSN
1361-8423 (Electronic)
ISSN-L
1361-8415
Editorial status
Published
Publication date
08/2019
Peer-reviewed
Yes
Volume
56
Pages
172-183
Language
English
Notes
Publication types: Journal Article ; Research Support, Non-U.S. Gov't
Publication Status: ppublish
Abstract
A novel method to detect and classify several classes of diseased and healthy lung tissue in CT (Computed Tomography), based on the fusion of Riesz and deep learning features, is presented. First, discriminative parametric lung tissue texture signatures are learned from Riesz representations using a one-versus-one approach. The signatures are generated for four diseased tissue types and a healthy tissue class, all of which frequently appear in the publicly available Interstitial Lung Diseases (ILD) dataset used in this article. Because the Riesz wavelets are steerable, they can easily be made invariant to local image rotations, a property that is desirable when analyzing lung tissue micro-architectures in CT images. Second, features from deep Convolutional Neural Networks (CNN) are computed by fine-tuning the Inception V3 architecture using an augmented version of the same ILD dataset. Because CNN features are both deep and non-parametric, they can accurately model virtually any pattern that is useful for tissue discrimination, and they are the de facto standard for many medical imaging tasks. However, invariance to local image rotations is not explicitly implemented and can only be approximated with rotation-based data augmentation. This motivates the fusion of Riesz and deep CNN features, as the two techniques are very complementary. The two learned representations are combined in a joint softmax model for final classification, where early and late feature fusion schemes are compared. The experimental results show that a late fusion of the independent probabilities leads to significant improvements in classification performance when compared to each of the separate feature representations and also compared to an ensemble of deep learning approaches.
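The late fusion of independent probabilities summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: the product-of-probabilities combination rule, the function name, and the example values are assumptions for illustration only; the paper itself compares early and late fusion schemes within a joint softmax model.

    import numpy as np

    NUM_CLASSES = 5  # four diseased tissue classes plus one healthy class, per the abstract

    def late_fusion(riesz_probs, cnn_probs):
        # Combine per-class softmax outputs from the two independent
        # classifiers (Riesz-based and CNN-based), each of shape
        # (n_samples, NUM_CLASSES). The element-wise product, renormalized
        # so each row sums to 1, is one common rule for fusing independent
        # probability estimates; the exact rule in the paper may differ.
        fused = riesz_probs * cnn_probs
        return fused / fused.sum(axis=1, keepdims=True)

    # Hypothetical softmax outputs for two lung-tissue patches.
    riesz_probs = np.array([[0.10, 0.60, 0.10, 0.10, 0.10],
                            [0.20, 0.20, 0.40, 0.10, 0.10]])
    cnn_probs = np.array([[0.05, 0.70, 0.10, 0.10, 0.05],
                          [0.25, 0.25, 0.30, 0.10, 0.10]])

    fused = late_fusion(riesz_probs, cnn_probs)
    print(fused.argmax(axis=1))  # predicted tissue class per patch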
Keywords
Datasets as Topic, Deep Learning, Humans, Lung Diseases/classification, Lung Diseases/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Reproducibility of Results, Tomography, X-Ray Computed, Classification, Deep learning, ILD, Texture signatures
PubMed
Web of Science
Record created
29/08/2023 7:44
Record last modified
10/10/2023 13:41