Deep Learning Based Cloud Cover Parameterization for ICON

Details

Resource 1 (download): J Adv Model Earth Syst - 2022 - Grundner - Deep Learning Based Cloud Cover Parameterization for ICON.pdf (2,168.12 KB)
Status: Public
Version: Final published version
License: CC BY-NC 4.0
Serval ID
serval:BIB_768D96868930
Type
Article: journal or magazine article.
Collection
Publications
Institution
Title
Deep Learning Based Cloud Cover Parameterization for ICON
Journal
Journal of Advances in Modeling Earth Systems
Authors
Grundner A., Beucler T., Gentine P., Iglesias-Suarez F., Giorgetta M.A., Eyring V.
ISSN
1942-2466 (Print)
ISSN-L
1942-2466
Editorial status
Published
Publication date
12/2022
Peer-reviewed
Yes
Volume
14
Issue
12
Pages
e2021MS002959
Language
English
Notes
Publication types: Journal Article
Publication Status: ppublish
Abstract
A promising approach to improve cloud parameterizations within climate models and thus climate projections is to use deep learning in combination with training data from storm-resolving model (SRM) simulations. The ICOsahedral Non-hydrostatic (ICON) modeling framework permits simulations ranging from numerical weather prediction to climate projections, making it an ideal target to develop neural network (NN) based parameterizations for sub-grid scale processes. Within the ICON framework, we train NN based cloud cover parameterizations with coarse-grained data based on realistic regional and global ICON SRM simulations. We set up three different types of NNs that differ in the degree of vertical locality they assume for diagnosing cloud cover from coarse-grained atmospheric state variables. The NNs accurately estimate sub-grid scale cloud cover from coarse-grained data that has similar geographical characteristics as their training data. Additionally, globally trained NNs can reproduce sub-grid scale cloud cover of the regional SRM simulation. Using the game-theory based interpretability library SHapley Additive exPlanations, we identify an overemphasis on specific humidity and cloud ice as the reason why our column-based NN cannot perfectly generalize from the global to the regional coarse-grained SRM data. The interpretability tool also helps visualize similarities and differences in feature importance between regionally and globally trained column-based NNs, and reveals a local relationship between their cloud cover predictions and the thermodynamic environment. Our results show the potential of deep learning to derive accurate yet interpretable cloud cover parameterizations from global SRMs, and suggest that neighborhood-based models may be a good compromise between accuracy and generalizability.
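The following is a minimal, self-contained Python sketch of the kind of workflow the abstract describes: fit a small neural network that diagnoses cloud cover from a handful of coarse-grained state variables, then query the SHAP (SHapley Additive exPlanations) library for feature importance. The synthetic data, feature names, and network size are illustrative assumptions, not the authors' actual ICON setup or code.

# Minimal sketch (hypothetical data and feature names, not the paper's configuration):
# train a small NN that maps coarse-grained state variables to cloud cover,
# then use SHAP to rank feature importance, as described in the abstract.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = ["specific_humidity", "cloud_ice", "cloud_water", "temperature", "pressure"]

# Synthetic stand-in for coarse-grained SRM output (n_samples x n_features).
X = rng.normal(size=(2000, len(features)))
# Synthetic "cloud cover" target in [0, 1]; in the paper the target is obtained
# by coarse-graining high-resolution ICON simulations.
y = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.3 * X[:, 3])))

# Small fully connected network standing in for a column-based cloud cover NN.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# Model-agnostic SHAP explainer with a small background sample.
background = X[:100]
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:20])

# Mean absolute SHAP value per feature serves as a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:18s} {score:.3f}")

A model-agnostic KernelExplainer is used here only for simplicity; the specific SHAP explainer applied to the ICON-trained networks is not stated in this record.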
Keywords
cloud cover, parameterization, machine learning, neural network, explainable AI, SHAP
Open Access
Yes
Record created
21/02/2023 15:36
Record last modified
26/10/2023 7:15