Neural Nets for Indirect Inference

  • Author: Michael Creel
  • BSE Working Paper: 110416 | November 16
  • Keywords: indirect inference, neural networks, approximate Bayesian computing, machine learning, DSGE, jump-diffusion
  • JEL codes: C13, C45, C58

Abstract

For simulable models, neural networks are used to approximate the limited information posterior mean, which conditions on a vector of statistics rather than on the full sample. Because the model is simulable, training and testing samples may be generated with sizes large enough to train well a net that is large enough, in terms of number of hidden layers and neurons, to learn the limited information posterior mean with good accuracy. Targeting the limited information posterior mean using neural nets is simpler, faster, and more successful than targeting the full information posterior mean, which conditions on the observed sample. The output of the trained net can be used directly as an estimator of the model’s parameters, or as an input to subsequent classical or Bayesian indirect inference estimation. Examples of indirect inference based on the output of the net include a small dynamic stochastic general equilibrium model, estimated using both classical indirect inference methods and approximate Bayesian computing (ABC) methods, and a continuous time jump-diffusion model for stock index returns, estimated using ABC.
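The core idea of the abstract can be illustrated with a minimal sketch: draw parameters from the prior, simulate data from the model, reduce each simulated sample to a vector of statistics, and train a net to map statistics to parameters. Under quadratic loss, the fitted net approximates the limited information posterior mean E[θ | Z]. The toy model below (a normal sample with unknown mean, a uniform prior, and mean/standard deviation as the statistics) and all names in it are illustrative assumptions, not the paper's actual DSGE or jump-diffusion applications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (assumption): y_t ~ N(theta, 1), prior theta ~ U(0, 2).
def simulate_stats(theta, n=50):
    """Simulate a sample and reduce it to a vector of statistics Z."""
    y = rng.normal(theta, 1.0, size=n)
    return np.array([y.mean(), y.std()])

# Generate training pairs (Z_i, theta_i); the model is simulable, so S can be large.
S = 5000
thetas = rng.uniform(0.0, 2.0, size=S)
Z = np.array([simulate_stats(t) for t in thetas])

# Standardize the statistics before feeding them to the net.
mu, sd = Z.mean(0), Z.std(0)
X = (Z - mu) / sd
y = thetas.reshape(-1, 1)

# One-hidden-layer net trained by full-batch gradient descent on squared error,
# so the fitted function approximates E[theta | Z].
H = 32
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    A = np.tanh(X @ W1 + b1)          # hidden layer
    pred = A @ W2 + b2                # net output
    err = pred - y
    gW2 = A.T @ err / S; gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A**2)    # backprop through tanh
    gW1 = X.T @ dA / S; gb1 = dA.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def posterior_mean(stats):
    """Approximate limited information posterior mean at observed statistics."""
    x = (stats - mu) / sd
    return float(np.tanh(x @ W1 + b1) @ W2 + b2)

# "Observed" data generated at theta = 1.3 (assumption for this sketch).
obs = simulate_stats(1.3)
est = posterior_mean(obs)
print(est)
```

The value `est` can then be used directly as a point estimate, or, as the abstract notes, fed into a subsequent classical indirect inference or ABC step as a low-dimensional, informative statistic.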
