Biomedical image analysis competitions: The state of current participation practice

Details

Serval ID
serval:BIB_B56FC614D863
Type
Other: use this type when nothing else fits.
Collection
Publications
Title
Biomedical image analysis competitions: The state of current participation practice
Author(s)
Eisenmann Matthias, Reinke Annika, Weru Vivienn, Tizabi Minu Dietlinde, Isensee Fabian, Adler Tim J., Godau Patrick, Cheplygina Veronika, Kozubek Michal, Ali Sharib, Gupta Anubha, Kybic Jan, Noble Alison, Solórzano Carlos Ortiz de, Pachade Samiksha, Petitjean Caroline, Sage Daniel, Wei Donglai, Wilden Elizabeth, Alapatt Deepak, Andrearczyk Vincent, Baid Ujjwal, Bakas Spyridon, Balu Niranjan, Bano Sophia, Bawa Vivek Singh, Bernal Jorge, Bodenstedt Sebastian, Casella Alessandro, Choi Jinwook, Commowick Olivier, Daum Marie, Depeursinge Adrien, Dorent Reuben, Egger Jan, Eichhorn Hannah, Engelhardt Sandy, Ganz Melanie, Girard Gabriel, Hansen Lasse, Heinrich Mattias, Heller Nicholas, Hering Alessa, Huaulmé Arnaud, Kim Hyunjeong, Landman Bennett, Li Hongwei Bran, Li Jianning, Ma Jun, Martel Anne, Martín-Isla Carlos
Issued date
2022
Language
English
Abstract
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
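One of the validation strategies the abstract mentions, k-fold cross-validation on the training set, can be sketched as follows. This is a minimal plain-Python illustration with hypothetical names, not code from any surveyed solution.

```python
def k_fold_splits(n_samples, k):
    """Partition sample indices 0..n_samples-1 into k contiguous folds and
    return (train_indices, validation_indices) pairs, one pair per fold."""
    indices = list(range(n_samples))
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    splits = []
    start = 0
    for size in fold_sizes:
        validation = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        splits.append((train, validation))
        start += size
    return splits
```

Each sample lands in exactly one validation fold, so every sample is used for validation once and for training k-1 times; in practice, libraries such as scikit-learn provide an equivalent `KFold` utility with shuffling and stratification options.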
Create date
29/08/2023 8:44
Last modification date
09/10/2023 16:03