Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge.

Details

Resource 1: 37742374.pdf (2276.84 KB)
State: Public
Version: Final published version
License: CC BY 4.0
Serval ID
serval:BIB_1A6EAD47D2CA
Type
Article: article from journal or magazine.
Collection
Publications
Institution
Title
Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge.
Journal
Medical image analysis
Author(s)
Andrearczyk V., Oreiller V., Boughdad S., Le Rest C.C., Tankyevych O., Elhalawani H., Jreige M., Prior J.O., Vallières M., Visvikis D., Hatt M., Depeursinge A.
ISSN
1361-8423 (Electronic)
ISSN-L
1361-8415
Publication state
Published
Issued date
12/2023
Peer-reviewed
Yes
Volume
90
Pages
102972
Language
English
Notes
Publication types: Journal Article
Publication Status: ppublish
Abstract
Fluorodeoxyglucose (FDG) Positron Emission Tomography (PET) and Computed Tomography (CT) focus on metabolic and morphological tissue properties, respectively, and therefore provide complementary and synergistic information for cancerous lesion delineation and characterization (e.g. for outcome prediction), in addition to the usual clinical variables. This is especially true in Head and Neck Cancer (HNC). The goal of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge was to develop and compare modern image analysis methods to best extract and leverage this information automatically. We present here the post-analysis of the second edition of HECKTOR, held at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. The scope of the challenge was substantially expanded compared to the first edition, by providing a larger population (adding patients from a new clinical center) and by proposing an additional task to the challengers, namely the prediction of Progression-Free Survival (PFS). To this end, the participants were given access to a training set of 224 cases from 5 different centers, each with a pre-treatment FDG-PET/CT scan and clinical variables. Their methods were subsequently evaluated on a held-out test set of 101 cases from two centers. For the segmentation task (Task 1), the ranking was based on a Borda count of the teams' ranks according to two metrics: mean Dice Similarity Coefficient (DSC) and median Hausdorff Distance at the 95th percentile (HD95). For the PFS prediction task, challengers could use the tumor contours provided by experts (Task 3) or rely on their own (Task 2). The ranking was based on the Concordance index (C-index) calculated on the predicted risk scores. A total of 103 teams registered for the challenge, submitting 448 results and 29 papers. The best method in the segmentation task obtained an average DSC of 0.759, and the best PFS predictions obtained a C-index of 0.717 (without relying on the provided contours) and 0.698 (using the expert contours). An interesting finding was that the best PFS predictions were obtained with Deep Learning (DL) approaches (with or without explicit tumor segmentation; 4 of the 5 best-ranked methods) rather than with standard radiomics methods using handcrafted features extracted from delineated tumors, and by exploiting alternative tumor contours (automated contours and/or larger volumes encompassing surrounding tissues) rather than the expert contours. This second edition of the challenge confirmed the promising performance of fully automated primary tumor delineation in PET/CT images of HNC patients, although there is still room for improvement in some difficult cases. For the first time, outcome prediction was also addressed, and the best methods reached relatively good performance (C-index above 0.7). Both results constitute another step toward large-scale outcome prediction studies in HNC.
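The abstract describes the segmentation ranking as a Borda count of each team's ranks on two metrics (mean DSC, where higher is better, and median HD95, where lower is better). The Python sketch below is a minimal, hypothetical illustration of such a two-metric Borda aggregation; it is not the official HECKTOR evaluation code, and the team names and scores are invented.

# Minimal sketch of a two-metric Borda-count ranking, as described in the
# abstract (mean DSC: higher is better; median HD95: lower is better).
# Not the official HECKTOR evaluation code; teams and scores are hypothetical.

def borda_rank(teams, dsc_mean, hd95_median):
    # Rank 1 = best on each metric.
    dsc_rank = {t: r for r, t in enumerate(
        sorted(teams, key=lambda t: dsc_mean[t], reverse=True), start=1)}
    hd95_rank = {t: r for r, t in enumerate(
        sorted(teams, key=lambda t: hd95_median[t]), start=1)}
    # Borda score = sum of per-metric ranks; the lowest total wins.
    borda = {t: dsc_rank[t] + hd95_rank[t] for t in teams}
    return sorted(teams, key=lambda t: borda[t])

if __name__ == "__main__":
    teams = ["team_A", "team_B", "team_C"]
    dsc_mean = {"team_A": 0.759, "team_B": 0.744, "team_C": 0.736}   # mean DSC
    hd95_median = {"team_A": 3.3, "team_B": 3.4, "team_C": 3.6}      # median HD95 (mm)
    print(borda_rank(teams, dsc_mean, hd95_median))
    # -> ['team_A', 'team_B', 'team_C']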
Keywords
Automatic segmentation, Head and Neck Cancer, Medical imaging, Radiomics
Open Access
Yes
Create date
29/09/2023 15:02
Last modification date
14/11/2023 8:09