Robust Approximate Bayesian Inference with Synthetic Likelihood


Abstract

Bayesian synthetic likelihood (BSL) is now an established method for conducting approximate Bayesian inference in models where, due to the intractability of the likelihood function, exact Bayesian approaches are either infeasible or computationally too demanding. As with other approximate Bayesian methods, such as approximate Bayesian computation, implicit in the application of BSL is the assumption that the data generating process (DGP) can produce simulated summary statistics that capture the behaviour of the observed summary statistics. We demonstrate that this notion of compatibility between the observed and simulated summaries is critical for the performance of BSL and its variants. Across several simulated and empirical examples, we show that if the assumed DGP used in BSL differs from the true DGP, BSL can yield unreliable parameter inference. To circumvent this issue, we propose two robust versions of BSL that deliver reliable inference regardless of whether the assumed DGP is correctly specified. Simulation results and two real data examples demonstrate the performance of this robust approach to BSL, and its superiority over standard BSL when the assumed model is misspecified.
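To make the idea concrete, below is a minimal sketch of standard BSL on a toy Gaussian location model. All specifics here (the model, summary statistics, simulation counts, and MCMC settings) are illustrative choices, not those used in the paper: the synthetic likelihood treats the simulated summaries at each parameter value as Gaussian, with mean and covariance estimated from repeated model simulations.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def simulate(theta, n_obs=50):
    # toy (assumed) DGP: i.i.d. Normal(theta, 1) observations
    return rng.normal(theta, 1.0, size=n_obs)

def summaries(y):
    # illustrative summary statistics: sample mean and log sample variance
    return np.array([y.mean(), np.log(y.var(ddof=1))])

def synthetic_loglik(theta, s_obs, m=100):
    # Gaussian synthetic likelihood: estimate the mean and covariance of the
    # summaries from m fresh simulations at theta, then evaluate a normal density
    S = np.array([summaries(simulate(theta)) for _ in range(m)])
    mu, Sigma = S.mean(axis=0), np.cov(S, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# "observed" summaries generated at theta_true = 2.0 (well-specified case)
s_obs = summaries(simulate(2.0))

# random-walk Metropolis targeting the BSL posterior (flat prior, for brevity)
theta, ll = 0.0, synthetic_loglik(0.0, s_obs)
draws = []
for _ in range(500):
    prop = theta + 0.3 * rng.standard_normal()
    ll_prop = synthetic_loglik(prop, s_obs)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    draws.append(theta)

post_mean = np.mean(draws[100:])  # discard burn-in
```

Here the assumed DGP matches the true DGP, so the BSL posterior concentrates near the true parameter. The paper's point is what happens when `simulate` differs from the process that produced `s_obs`: the observed summaries may then be incompatible with anything the model can generate, and inference can become unreliable.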

David T. Frazier
DECRA Fellow and Associate Professor in Econometrics and Statistics

My research interests include simulation-based statistical theory and inference, approximate Bayesian analysis, forecasting and scoring rules.