Assessing the Performance of Artificial Intelligence Models: Insights from the American Society of Functional Neuroradiology Artificial Intelligence Competition.


Serval ID
Article: article from journal or magazine.
AJNR. American journal of neuroradiology
Jiang B., Ozkara B.B., Zhu G., Boothroyd D., Allen J.W., Barboriak D.P., Chang P., Chan C., Chaudhari R., Chen H., Chukus A., Ding V., Douglas D., Filippi C.G., Flanders A.E., Godwin R., Hashmi S., Hess C., Hsu K., Lui Y.W., Maldjian J.A., Michel P., Nalawade S.S., Patel V., Raghavan P., Sair H.I., Tanabe J., Welker K., Whitlow C., Zaharchuk G., Wintermark M.
ISSN: 1936-959X (Electronic)
Publication state
In Press
Publication types: Journal Article
Publication Status: aheadofprint
Artificial intelligence (AI) models in radiology are frequently developed and validated on datasets from a single institution and are rarely tested on independent, external datasets, raising questions about their generalizability and applicability in clinical practice. The American Society of Functional Neuroradiology (ASFNR) organized a multicenter AI competition to evaluate how well submitted models identify various pathologies on noncontrast head CT (NCCT), assess age-based normality, and estimate medical urgency.
In total, 1201 anonymized, full-head NCCT clinical scans from five institutions were pooled to form the dataset. The dataset encompassed normal studies as well as pathologies including acute ischemic stroke, intracranial hemorrhage, traumatic brain injury, and mass effect (task 1: pathology detection). NCCTs were also assessed to determine whether findings were consistent with expected brain changes for the patient's age (task 2: age-based normality assessment) and to identify any abnormalities requiring immediate medical attention (task 3: evaluation of findings for urgent intervention). Five neuroradiologists labeled each NCCT, with consensus interpretations serving as the ground truth. The competition was announced online, inviting academic institutions and companies. An independent central analysis assessed each model's performance. Accuracy, sensitivity, specificity, positive and negative predictive values, and receiver operating characteristic (ROC) curves were generated for each AI model, along with the area under the ROC curve (AUROC).
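The abstract lists the standard binary-classification metrics used in the central analysis but does not describe the computation itself. As a minimal sketch (variable names and the rank-sum AUROC formulation are illustrative assumptions, not the competition's actual pipeline), these metrics can be derived from a confusion matrix and from model scores as follows:

```python
# Hedged sketch of the reported metrics; not the competition's actual code.

def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, PPV, NPV from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

def auroc(y_true, scores):
    """AUROC via its Mann-Whitney U equivalence: the probability that a
    randomly chosen positive case is scored above a negative one (ties = 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC near 0.5, as reported for the submitted models, corresponds to near-chance ranking of positive over negative cases under this formulation.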
Nineteen teams from various academic institutions registered for the competition. Of these, four teams submitted their final results; no commercial entities participated. The four teams processed 1177 studies in total. The median patient age was 62 years, with an interquartile range of 33. For task 1, AUROCs ranged from 0.49 to 0.59. For task 2, two teams completed the task, with AUROC values of 0.57 and 0.52. For task 3, teams showed little to no agreement with the ground truth.
To assess the performance of AI models in real-world clinical scenarios, we analyzed their performance in the ASFNR AI Competition. This first ASFNR competition underscored the gap between expectation and reality: the models largely fell short in their assessments. As the integration of AI tools into clinical workflows increases, neuroradiologists must carefully weigh the capabilities, constraints, and consistency of these technologies. Before institutions adopt these algorithms, thorough validation is essential to ensure acceptable levels of performance in clinical settings.
ABBREVIATIONS: AI = artificial intelligence; ASFNR = American Society of Functional Neuroradiology; AUROC = area under the receiver operating characteristic curve; DICOM = Digital Imaging and Communications in Medicine; GEE = generalized estimating equation; IQR = interquartile range; NPV = negative predictive value; PPV = positive predictive value; ROC = receiver operating characteristic; TBI = traumatic brain injury.
Create date
03/05/2024 14:45
Last modification date
04/05/2024 7:07