MAFIA-CT: MAchine Learning Tool for Image Quality Assessment in Computed Tomography
Details
ID Serval
serval:BIB_6FC939BE1040
Type
Book part
Subtype
Chapter: chapter or section
Collection
Publications
Institution
Titre
MAFIA-CT: MAchine Learning Tool for Image Quality Assessment in Computed Tomography
Book title
Medical Image Understanding and Analysis
Publisher
Springer International Publishing
ISBN
9783030804312
9783030804329
ISSN
0302-9743
1611-3349
Publication status
Published
Publication date
2021
Pages
472-487
Language
English
Abstract
Different metrics are available for evaluating image quality (IQ) in computed tomography (CT). One of these is human observer studies; unfortunately, they are time consuming and susceptible to variability. With this in mind, we developed a deep-learning-based platform to optimise the workflow and score IQ based on human observations of low-contrast lesions.
A total of 1476 images (from 43 CT devices) were used. The platform was evaluated for accuracy, reliability and performance on held-out tests, synthetic data and designed measurements: synthetic data to evaluate the model's capabilities and performance with varying structures and backgrounds, and designed measurements to evaluate its performance in characterising CT protocols and devices with respect to protocol dose and reconstruction.
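The chapter itself includes no code; purely as a hedged illustration of what "synthetic data" with controllable lesion contrast, size and noise level might look like, here is a minimal numpy sketch (all function names and parameter values are hypothetical, not taken from the publication):

```python
import numpy as np

def synthetic_lesion_image(size=256, lesion_radius=10,
                           lesion_contrast=10.0, noise_sigma=20.0,
                           background_hu=90.0, seed=0):
    """Toy low-contrast-lesion image: uniform background, one circular
    lesion in the centre, Gaussian noise standing in for CT noise.
    All values are illustrative, not from the MAFIA-CT chapter."""
    rng = np.random.default_rng(seed)
    img = np.full((size, size), background_hu, dtype=np.float64)
    yy, xx = np.ogrid[:size, :size]
    mask = (yy - size / 2) ** 2 + (xx - size / 2) ** 2 <= lesion_radius ** 2
    img[mask] += lesion_contrast                     # lesion slightly above background
    img += rng.normal(0.0, noise_sigma, img.shape)   # noise degrades visibility
    return img

# Sweeping contrast, radius and noise reproduces the kind of experiment the
# abstract describes: visibility drops as contrast shrinks or noise grows.
low_noise = synthetic_lesion_image(noise_sigma=5.0)
high_noise = synthetic_lesion_image(noise_sigma=40.0)
```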
We obtained a 99.7% success rate in inlay detection and over 96% accuracy for a given observer. From the synthetic data experiments, we observed a correlation between the minimum visible contrast and the lesion size; degradation of lesion contrast and visibility with increasing noise levels; and no influence of peripheral lesions on the model's detectability of the central lesion. From the dose measurements, only the differences between the 20 and 25 mGy protocols were not statistically significant (p-values of 0.076 and 0.408 for 5 and 8 mm lesions, respectively). Additionally, our model showed the IQ improvement obtained with iterative reconstruction and the effect of the reconstruction kernel.
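The abstract reports p-values but does not name the statistical test; assuming a standard non-parametric two-sample comparison of per-image model scores between two dose protocols, the analysis might be sketched as follows (the Mann-Whitney U test is my assumption, and the score arrays are placeholder data, not study results):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder per-image model scores for one lesion size under two
# dose protocols; the real study compared e.g. 20 mGy vs 25 mGy.
scores_20mGy = np.array([0.61, 0.58, 0.65, 0.57, 0.63, 0.60])
scores_25mGy = np.array([0.64, 0.62, 0.66, 0.59, 0.67, 0.63])

# Two-sided test: are the two score distributions significantly different?
stat, p_value = mannwhitneyu(scores_20mGy, scores_25mGy,
                             alternative="two-sided")
print(f"p = {p_value:.3f}")  # p > 0.05 would match the 'not significant' finding
```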
Our platform enables the evaluation of large datasets without the variability and time cost associated with human scoring, thus providing a reliable and relatable metric for dose harmonisation and imaging optimisation in CT.
Keywords
Computed tomography, Deep learning, Image quality
Web of Science
Record created
08/04/2022 15:22
Record last modified
20/12/2023 7:14