General Bayesian loss function selection and the use of improper models

  • Authors: Jack Jewson and David Rossell
  • Field: Statistics
  • Journal: Journal of the Royal Statistical Society, Series B: Statistical Methodology
  • Open Access

Statisticians often face the choice between using probability models and a paradigm defined by minimising a loss function. Both approaches are useful and, if the loss can be recast into a proper probability model, there are many tools to decide which model or loss is more appropriate for the observed data, in the sense of explaining the data's nature. However, when the loss leads to an improper model, there are no principled ways to guide this choice. We address this task by combining the Hyvärinen score, which naturally targets infinitesimal relative probabilities, and general Bayesian updating, which provides a unifying framework for inference on losses and models. Specifically, we propose the H-score, a general Bayesian selection criterion, and prove that it consistently selects the (possibly improper) model closest to the data-generating truth in Fisher's divergence. We also prove that an associated H-posterior consistently learns optimal hyper-parameters featuring in loss functions, including a challenging tempering parameter in generalised Bayesian inference. As salient examples, we consider robust regression and non-parametric density estimation, where popular loss functions define improper models for the data and hence cannot be handled with standard model selection tools. These examples illustrate advantages in robustness-efficiency trade-offs and enable Bayesian inference for kernel density estimation, opening a new avenue for Bayesian non-parametrics.
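For reference, the two ingredients named in the abstract have standard forms; the sketch below records them in LaTeX with illustrative notation (the paper's exact conventions, e.g. constant factors in the score, may differ). The Hyvärinen score of a density p at a point x in R^d depends on p only through the gradient of log p, so it remains well defined when p is improper (known only up to a normalising constant):

\[
  \mathcal{H}(x; p) \;=\; 2\,\Delta_x \log p(x) \;+\; \bigl\lVert \nabla_x \log p(x) \bigr\rVert^{2}.
\]

Under standard regularity and tail conditions, minimising its expectation under the data-generating density g is equivalent (up to a constant not depending on p) to minimising the Fisher divergence

\[
  D_F(g \,\Vert\, p) \;=\; \int g(x)\, \bigl\lVert \nabla_x \log g(x) - \nabla_x \log p(x) \bigr\rVert^{2}\, dx,
\]

which is the sense of "closest to the data-generating truth" used above. General Bayesian updating turns a loss \ell(\theta, x) into a posterior via

\[
  \pi(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\, \exp\Bigl\{ -w \sum_{i=1}^{n} \ell(\theta, x_i) \Bigr\},
\]

where w > 0 is the tempering (learning-rate) parameter referred to in the abstract.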
