MAFIA-CT: MAchine Learning Tool for Image Quality Assessment in Computed Tomography
Details
Serval ID
serval:BIB_6FC939BE1040
Type
A part of a book
Publication sub-type
Chapter: chapter or part
Collection
Publications
Institution
Title
MAFIA-CT: MAchine Learning Tool for Image Quality Assessment in Computed Tomography
Title of the book
Medical Image Understanding and Analysis
Publisher
Springer International Publishing
ISBN
9783030804312
9783030804329
ISSN
0302-9743
1611-3349
Publication state
Published
Issued date
2021
Pages
472-487
Language
English
Abstract
Different metrics are available for evaluating image quality (IQ) in computed tomography (CT). One of these is human observer studies; unfortunately, they are time consuming and susceptible to variability. With this in mind, we developed a deep-learning-based platform to optimise the workflow and to score IQ based on human observations of low-contrast lesions.
1476 images (from 43 CT devices) were used. The platform was evaluated for its accuracy, reliability and performance on held-out tests, synthetic data and designed measurements: synthetic data to evaluate the model's capabilities and performance with varying structures and backgrounds, and designed measurements to evaluate the model's performance in characterising CT protocols and devices with respect to protocol dose and reconstruction.
We obtained a 99.7% success rate in inlay detection and over 96% accuracy for a given observer. From the synthetic data experiments, we observed a correlation between the minimum visible contrast and the lesion size; a degradation of lesion contrast and visibility with increasing noise levels; and no influence of external lesions on the model's detectability of the central lesions. From the dose measurements, only the differences between the 20 and 25 mGy protocols were not statistically significant (p-values 0.076 and 0.408 for the 5 and 8 mm lesions, respectively). Additionally, our model showed the IQ improvements obtained with iterative reconstruction and the effect of the reconstruction kernel.
Our platform enables the evaluation of large datasets without the variability and time cost associated with human scoring, thereby providing a reliable and relatable metric for dose harmonisation and imaging optimisation in CT.
Keywords
Computed tomography, Deep learning, Image quality
Web of science
Create date
08/04/2022 15:22
Last modification date
20/12/2023 7:14