Interpreting and Stabilizing Machine-Learning Parametrizations of Convection
Details
ID Serval
serval:BIB_D99C79F5B319
Type
Article: article from a periodical or magazine.
Collection
Publications
Institution
Title
Interpreting and Stabilizing Machine-Learning Parametrizations of Convection
Journal
Journal of the Atmospheric Sciences
ISSN
0022-4928
1520-0469
Editorial status
Published
Publication date
12/2020
Peer-reviewed
Yes
Volume
77
Issue
12
Pages
4357-4375
Language
English
Abstract
Neural networks are a promising technique for parameterizing subgrid-scale physics (e.g., moist atmospheric convection) in coarse-resolution climate models, but their lack of interpretability and reliability prevents widespread adoption. For instance, it is not fully understood why neural network parameterizations often cause dramatic instability when coupled to atmospheric fluid dynamics. This paper introduces tools for interpreting their behavior that are customized to the parameterization task. First, we assess the nonlinear sensitivity of a neural network to lower-tropospheric stability and midtropospheric moisture, two widely studied controls of moist convection. Second, we couple the linearized response functions of these neural networks to simplified gravity wave dynamics and analytically diagnose the corresponding phase speeds, growth rates, wavelengths, and spatial structures. To demonstrate their versatility, these techniques are tested on two sets of neural networks, one trained with a superparameterized version of the Community Atmosphere Model (SPCAM) and the second with a near-global cloud-resolving model (GCRM). Even though the SPCAM simulation has a warmer climate than the cloud-resolving model, both neural networks predict stronger heating/drying in moist and unstable environments, which is consistent with observations. Moreover, the spectral analysis predicts that instability occurs when GCMs are coupled to networks supporting unstable gravity waves with phase speeds larger than 5 m s⁻¹. In contrast, standing unstable modes do not cause catastrophic instability. Using these tools, differences between the SPCAM-trained and GCRM-trained neural networks are analyzed, and strategies to incrementally improve the coupled online performance of both are unveiled.
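The stability diagnostic summarized in the abstract can be pictured, in hypothetical form, as follows: linearize the neural-network parameterization around a reference profile, couple the resulting Jacobian to a simplified wave operator, and read growth rates and phase speeds off the eigenvalues at each horizontal wavenumber. The sketch below is only an illustration of that idea under stated assumptions, not the authors' implementation; the function nn_forward, the toy coupling matrix, and the gravity-wave speed c_grav are all placeholders.

```python
# Hypothetical sketch of the linear-response / gravity-wave stability analysis
# described in the abstract. nn_forward, the toy coupling matrix, and c_grav
# are illustrative assumptions; this is not the paper's code.
import numpy as np

def linearize(nn_forward, x0, eps=1e-4):
    """Finite-difference Jacobian of the parameterization about base state x0.

    Assumes nn_forward maps the state vector to its own tendency (same length).
    """
    f0 = nn_forward(x0)
    J = np.empty((f0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (nn_forward(x0 + dx) - f0) / eps
    return J

def modes(J, k, c_grav=50.0):
    """Eigenmodes of d/dt q_hat = (i*k*A + J) q_hat at horizontal wavenumber k.

    A is a toy single-vertical-mode gravity-wave operator with speed c_grav
    (m/s); the first half of the state stands in for winds, the second for
    buoyancy/temperature.
    """
    n = J.shape[0]
    half = n // 2
    A = np.zeros((n, n))
    A[:half, half:] = -c_grav * np.eye(half)   # du/dt ~ -c * dT/dx
    A[half:, :half] = -c_grav * np.eye(half)   # dT/dt ~ -c * du/dx
    lam = np.linalg.eigvals(1j * k * A + J)
    growth_rate = lam.real                     # 1/s; > 0 means the mode amplifies
    phase_speed = -lam.imag / k                # m/s
    return growth_rate, phase_speed
```

Scanning k over a range of horizontal wavenumbers would yield growth-rate and phase-speed spectra of the kind the abstract uses to separate harmless standing instabilities from fast, propagating unstable waves (phase speeds above roughly 5 m s⁻¹) associated with catastrophic coupled behavior.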
Keywords
Conditional instability, Cloud resolving models, Parameterization, Machine learning
Open Access
Yes
Record created
21/02/2023 14:36
Record last modified
25/10/2023 13:26