Bayesian tomography with prior-knowledge-based parametrization and surrogate modelling

Details

Resource 1. Download: 2201.02444.pdf (9186.26 Ko)
State: Public
Version: author
License: Not specified
Serval ID
serval:BIB_F6461418CA5D
Type
Article: article from journal or magazine.
Collection
Publications
Institution
Title
Bayesian tomography with prior-knowledge-based parametrization and surrogate modelling
Journal
Geophysical Journal International
Author(s)
Meles Giovanni Angelo, Linde Niklas, Marelli Stefano
ISSN
0956-540X
Publication state
Published
Issued date
2022
Volume
231
Number
1
Pages
673-691
Language
English
Abstract
We present a Bayesian tomography framework operating with prior-knowledge-based parametrization that is accelerated by surrogate models. Standard high-fidelity forward solvers (e.g. finite-difference time-domain schemes) solve wave equations with natural spatial parametrizations based on fine discretization. Similar parametrizations, typically involving tens of thousands of variables, are usually employed to parametrize the subsurface in tomography applications. When the data do not allow details to be resolved at such finely parametrized scales, it is often beneficial to instead rely on a prior-knowledge-based parametrization defined on a lower-dimensional domain (or manifold). Owing to the increased identifiability in the reduced domain, the concomitant inversion is better constrained and generally faster. We illustrate the potential of a prior-knowledge-based approach by considering ground penetrating radar (GPR) traveltime tomography in a crosshole configuration with synthetic data. An effective parametrization of the input (i.e. the permittivity distributions determining the slowness field) and compression of the output (i.e. the traveltime gathers) spaces are achieved via data-driven principal component decomposition based on random realizations of the prior Gaussian-process model, with a truncation determined by the performance of the standard solver on the full and reduced model domains. To accelerate the inversion process, we employ a high-fidelity polynomial chaos expansion (PCE) surrogate model. We investigate the impact of the size of the training set on the performance of the PCE and show that a few hundred design data sets are sufficient to provide reliable Markov chain Monte Carlo inversion at a fraction of the cost associated with a standard approach involving a fine discretization and physics-based forward solvers. Appropriate uncertainty quantification is achieved by reintroducing the truncated higher-order principal components into the original model space after inversion on the manifold, and by adapting a likelihood function that accounts for the fact that the truncated higher-order components are not completely located in the null space.
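The data-driven principal component step described in the abstract can be illustrated with a minimal sketch (not the authors' code): draw random realizations from a prior Gaussian-process model, compute a principal component decomposition, and truncate. All sizes, the 1-D setting, the squared-exponential covariance, and the explained-variance truncation criterion are illustrative assumptions; the paper instead determines the truncation by comparing the standard solver's output on the full and reduced model domains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior realizations: samples of a 1-D Gaussian-process field
# (stand-in for the 2-D permittivity/slowness fields of the paper).
n_cells, n_samples = 200, 500
x = np.linspace(0.0, 1.0, n_cells)
# Squared-exponential covariance with an assumed correlation length of 0.1
cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))
fields = (L @ rng.standard_normal((n_cells, n_samples))).T  # (n_samples, n_cells)

# Data-driven principal component decomposition of the input space
mean = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)

# Truncate at 99% explained variance (illustrative criterion only)
var_ratio = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_ratio, 0.99)) + 1

# Reduced coordinates of each realization, and reconstruction on the grid;
# MCMC would then explore the k-dimensional score space instead of n_cells.
scores = (fields - mean) @ Vt[:k].T       # (n_samples, k)
reconstructed = mean + scores @ Vt[:k]    # (n_samples, n_cells)

rel_err = np.linalg.norm(reconstructed - fields) / np.linalg.norm(fields)
print(k, rel_err)
```

In this sketch a smooth prior yields k far smaller than n_cells, which is the dimensionality reduction that makes the subsequent PCE surrogate training and MCMC inversion tractable; the discarded components are what the paper reintroduces after inversion for uncertainty quantification.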
Create date
30/06/2023 11:33
Last modification date
24/07/2023 7:17
Usage data