Bayes Factor
Based on Han and Carlin (2001, JASA)
Model Selection
Data: y
M ∈ {M_1, …, M_K}: a finite set of competing models
θ_j: a distinct unknown parameter vector of dimension n_j corresponding to the jth model
π(θ_j | M_j): prior over all possible values for θ_j
Θ = (θ_1, …, θ_K): collection of all model-specific parameter vectors
Model Selection
Posterior model probability p(M_j | y)
–Select a single “best” model
–Model averaging
Bayes factor: choice between two models
–BF_12 = p(y | M_1) / p(y | M_2)
Estimating the Marginal Likelihood
Marginal likelihood: p(y | M_j) = ∫ p(y | θ_j, M_j) π(θ_j | M_j) dθ_j
Estimation
–Ordinary Monte Carlo sampling
Difficult to implement for high-dimensional models
MCMC does not provide an estimate of the marginal likelihood directly
–Include the model indicator as a parameter in the sampling
Product space search by Gibbs sampling
Metropolis-Hastings
Reversible jump MCMC
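To make the ordinary Monte Carlo idea concrete, here is a sketch on a hypothetical conjugate toy model (our illustration, not the slides' example): y ~ N(θ, 1) with prior θ ~ N(0, 1), for which the exact marginal likelihood N(y; 0, 2) is known and can be used as a check.

```python
import numpy as np

# Hypothetical toy model (an illustration, not the slides' example):
# y ~ N(theta, 1) with prior theta ~ N(0, 1), so the exact marginal
# likelihood is p(y) = N(y; 0, 2).
rng = np.random.default_rng(0)
y = 1.3

# Ordinary Monte Carlo: average the likelihood over draws from the prior.
theta = rng.normal(0.0, 1.0, size=200_000)
lik = np.exp(-0.5 * (y - theta) ** 2) / np.sqrt(2 * np.pi)
m_hat = lik.mean()

# Exact marginal likelihood N(y; 0, 2) for comparison.
m_exact = np.exp(-0.25 * y ** 2) / np.sqrt(4 * np.pi)
print(m_hat, m_exact)   # agree to roughly three decimals
```

Averaging the likelihood over prior draws works here because the parameter is one-dimensional; in high dimensions the prior draws rarely land where the likelihood is large, which is the difficulty noted above.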
Product Space Search
Carlin and Chib (1995, JRSSB)
p(y | θ_j, M_j): data likelihood of model j
π(θ_j | M_j): prior of model j
Assumptions:
–M is merely an indicator of which θ_j is relevant to y
–y is independent of θ_k, k ≠ j, given the model indicator M = j
–Proper priors are required
–Prior independence among the θ_j given M: π(Θ | M_j) = ∏_k π(θ_k | M_j), where the densities π(θ_k | M_j) for k ≠ j are “pseudo-priors”
Product Space Search
The sampler operates over the product space of the model indicator and all model-specific parameters, with joint density
p(y, Θ, M_j) = p(y | θ_j, M_j) π(θ_j | M_j) {∏_{k ≠ j} π(θ_k | M_j)} π(M_j)
Marginal likelihood: integrating out Θ gives p(y, M_j) = p(y | M_j) π(M_j), since each pseudo-prior integrates to 1
Remark: the pseudo-priors therefore do not affect the posterior model probabilities, only the efficiency of the sampler
Search by Gibbs Sampler
Each iteration cycles through the full conditionals:
–θ_j | Θ_{−j}, M, y: if M = j, sample from the usual model-j full conditional, ∝ p(y | θ_j, M_j) π(θ_j | M_j); if M = k ≠ j, sample θ_j from its pseudo-prior π(θ_j | M_k)
–M | Θ, y: a discrete draw with p(M = j | Θ, y) ∝ p(y | θ_j, M_j) π(Θ | M_j) π(M_j)
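The Gibbs scheme can be sketched on a hypothetical two-model toy problem (one observation, unit-variance normal likelihoods, conjugate normal priors; pseudo-priors set equal to the model-specific posteriors, an assumed convenient choice). Both marginal likelihoods are known in closed form, so the visit frequencies can be checked against the exact Bayes factor.

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Assumed toy setup: y ~ N(theta_j, 1) under model j, with priors
# theta_1 ~ N(0, 1) and theta_2 ~ N(2, 1).  Each marginal likelihood is
# N(y; mu_j, 2), so the exact BF_21 equals exp(0.5) for y = 1.5.
y = 1.5
mu = {1: 0.0, 2: 2.0}
post_m = {j: (y + mu[j]) / 2 for j in (1, 2)}   # model-j posterior mean
post_v = 0.5                                    # model-j posterior variance

T = 40_000
theta = dict(post_m)
visits = {1: 0, 2: 0}
for _ in range(T):
    # Parameter step: the current model's theta has full conditional equal
    # to its posterior; the other theta comes from its pseudo-prior.  With
    # pseudo-priors equal to the posteriors, both draws use the same law.
    for j in (1, 2):
        theta[j] = rng.normal(post_m[j], np.sqrt(post_v))
    # Model step: discrete full conditional for the indicator M.
    w = {}
    for j in (1, 2):
        k = 3 - j
        w[j] = (norm_pdf(y, theta[j], 1.0)            # likelihood under M_j
                * norm_pdf(theta[j], mu[j], 1.0)      # prior of theta_j
                * norm_pdf(theta[k], post_m[k], post_v)  # pseudo-prior of theta_k
                * 0.5)                                # prior model probability
    M = 1 if rng.random() < w[1] / (w[1] + w[2]) else 2
    visits[M] += 1

bf21 = visits[2] / visits[1]   # equal prior model probabilities cancel
print(bf21, np.exp(0.5))       # exact BF_21 = exp(0.5) ≈ 1.65
```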
Bayes Factor
Provided the sampling chain for the model indicator mixes well, the posterior probability of model j can be estimated by the visit frequency
p̂(M_j | y) = (1/T) Σ_t 1{M^(t) = j}
The Bayes factor is then estimated by
B̂F_12 = [p̂(M_1 | y) / p̂(M_2 | y)] × [π(M_2) / π(M_1)]
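A minimal sketch of turning an indicator chain into a Bayes factor estimate; the chain below is simulated as a stand-in for real sampler output, and the lopsided prior model probabilities are assumed purely to show the correction factor.

```python
import numpy as np

# Stand-in for real sampler output: a simulated model-indicator chain.
# With unequal prior model probabilities, the ratio of visit frequencies
# must be corrected by the inverse ratio of the priors.
rng = np.random.default_rng(1)
prior = {1: 0.9, 2: 0.1}                   # assumed lopsided model priors
chain = rng.choice([1, 2], size=100_000, p=[0.6, 0.4])   # stand-in M^(t)

p_hat = {j: np.mean(chain == j) for j in (1, 2)}
bf21 = (p_hat[2] / p_hat[1]) * (prior[1] / prior[2])
print(bf21)   # near (0.4/0.6) * (0.9/0.1) = 6.0
```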
Choice of Prior Model Probabilities
In general, the π(M_j) can be chosen arbitrarily
–Their effect is divided out in the estimate of the Bayes factor
Often they are chosen so that the algorithm visits each model in roughly equal proportion
–This allows a more accurate estimate of the Bayes factor
–Preliminary runs are needed to select computationally efficient values
More Remarks
Performance of this method is optimized when the pseudo-priors match the corresponding model-specific posteriors as nearly as possible
Drawbacks of the method
–A draw must be made from each pseudo-prior at each iteration to produce acceptably accurate results
–If a large number of models is considered, the method becomes impractical
Metropolized Product Space Search
Dellaportas, P., Forster, J. J., and Ntzoufras, I. (2002). On Bayesian model and variable selection using MCMC. Statistics and Computing, 12, 27-36.
A hybrid Gibbs-Metropolis strategy
The model selection step is based on a proposal for moving between models
Metropolized Carlin and Chib (MCC)
Each iteration combines a within-model Gibbs update with a Metropolis-Hastings model jump:
–Given the current model M = j, propose a new model M = k with probability h(j, k)
–Draw θ_k from its pseudo-prior π(θ_k | M_j)
–Accept the move with probability
α = min{1, [p(y | θ_k, M_k) π(θ_k | M_k) π(θ_j | M_k) π(M_k) h(k, j)] / [p(y | θ_j, M_j) π(θ_j | M_j) π(θ_k | M_j) π(M_j) h(j, k)]}
Advantage
At each iteration, a draw is needed only from the pseudo-prior of the proposed model, not from every pseudo-prior
Reversible Jump MCMC
Green (1995, Biometrika)
This method operates on the union of the model-specific parameter spaces
It generates a Markov chain that can jump between models with parameter spaces of different dimensions
RJMCMC
–From the current state (M_j, θ_j), propose model M_k with probability h(j, k)
–Draw u ~ q_{jk}(u | θ_j) and set (θ_k, u′) = g_{jk}(θ_j, u) for an invertible mapping g_{jk}, with dimensions matched so that n_j + dim(u) = n_k + dim(u′)
–Accept with probability min{1, [p(y | θ_k, M_k) π(θ_k | M_k) π(M_k) h(k, j) q_{kj}(u′ | θ_k)] / [p(y | θ_j, M_j) π(θ_j | M_j) π(M_j) h(j, k) q_{jk}(u | θ_j)] × |∂g_{jk}(θ_j, u)/∂(θ_j, u)|}
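A minimal RJMCMC sketch on an assumed nested pair of models, M_1: y ~ N(0, 1) with no free parameters and M_2: y ~ N(θ, 1) with θ ~ N(0, 1). The birth move proposes θ from a density q through the identity mapping, so the Jacobian is 1, and both marginal likelihoods are known for checking.

```python
import numpy as np

rng = np.random.default_rng(3)

def norm_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Assumed nested models: M1: y ~ N(0,1), no parameters;
#                        M2: y ~ N(theta,1), theta ~ N(0,1).
# Exact marginals: m1 = N(y; 0, 1), m2 = N(y; 0, 2).
y = 1.5
qm, qv = y / 2.0, 0.5    # jump proposal q for theta: the M2 posterior

T = 40_000
M, th = 1, 0.0
visits = {1: 0, 2: 0}
for _ in range(T):
    if M == 1:
        # Birth move: draw u ~ q, set theta = u (identity map, Jacobian 1).
        u = rng.normal(qm, np.sqrt(qv))
        a = (norm_pdf(y, u, 1.0) * norm_pdf(u, 0.0, 1.0)) / \
            (norm_pdf(y, 0.0, 1.0) * norm_pdf(u, qm, qv))
        if rng.random() < min(1.0, a):
            M, th = 2, u
    else:
        # Within-model move: Gibbs draw from the M2 posterior N(y/2, 1/2).
        th = rng.normal(qm, np.sqrt(qv))
        # Death move: drop theta; the exact reverse of the birth move.
        a = (norm_pdf(y, 0.0, 1.0) * norm_pdf(th, qm, qv)) / \
            (norm_pdf(y, th, 1.0) * norm_pdf(th, 0.0, 1.0))
        if rng.random() < min(1.0, a):
            M = 1
    visits[M] += 1

bf21 = visits[2] / visits[1]   # equal model priors assumed
bf_exact = norm_pdf(y, 0.0, 2.0) / norm_pdf(y, 0.0, 1.0)
print(bf21, bf_exact)
```

Choosing q equal to the M_2 posterior makes the jump acceptance depend only on the ratio of marginal likelihoods, which is why this toy chain mixes so well; a poor q would still be valid but would reject most jumps.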
Using Partial Analytic Structure (PAS)
Godsill (2001, JCGS)
Similar setup to the CC method, but parameters are allowed to be shared between different models
Avoids dimension matching
PAS
–Parameters whose full conditionals are available in closed form are drawn directly within the model move, so no auxiliary dimension-matching variables are needed for them
Marginal likelihood estimation (Chib 1995)
Let θ* = (θ_1*, θ_2*) be a point of high posterior density (e.g., the posterior mean). From Bayes' theorem,
log m(y) = log p(y | θ*) + log π(θ*) − log π(θ_2* | y, θ_1*) − log π(θ_1* | y)
When all full conditional distributions for the parameters are in closed form, every ordinate on the right can be computed or estimated from Gibbs output
Chib (1995)
The first three terms on the right side are available in closed form
The last term, π(θ_1* | y), can be estimated from the Gibbs output by averaging the full conditional over the draws:
π̂(θ_1* | y) = (1/T) Σ_t π(θ_1* | y, θ_2^(t))
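The identity can be checked on a hypothetical single-block conjugate model (our illustration), where the posterior ordinate is available exactly, so no Gibbs estimation is needed and the identity holds to machine precision.

```python
import numpy as np

# Assumed toy conjugate model: y ~ N(theta, 1), theta ~ N(0, 1), so the
# posterior is N(y/2, 1/2) and the posterior ordinate is in closed form.

def log_norm_pdf(x, m, v):
    return -0.5 * np.log(2 * np.pi * v) - 0.5 * (x - m) ** 2 / v

y = 1.5
theta_star = y / 2.0                    # high-density point (posterior mean)

# Chib's basic marginal likelihood identity, exact for any theta*:
log_m = (log_norm_pdf(y, theta_star, 1.0)        # log p(y | theta*)
         + log_norm_pdf(theta_star, 0.0, 1.0)    # log pi(theta*)
         - log_norm_pdf(theta_star, y / 2.0, 0.5))  # log pi(theta* | y)

# Exact log marginal likelihood: y ~ N(0, 2).
log_m_exact = log_norm_pdf(y, 0.0, 2.0)
print(log_m, log_m_exact)   # identical up to floating point
```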
Chib and Jeliazkov (2001)
Estimating the posterior ordinate this way requires the normalizing constant of the full conditional
–Hence the approach is not directly applicable to Metropolis-Hastings output
Let the Metropolis-Hastings acceptance probability be
α(θ, θ′ | y) = min{1, [p(y | θ′) π(θ′) q(θ′, θ | y)] / [p(y | θ) π(θ) q(θ, θ′ | y)]}
Chib and Jeliazkov (2001)
The posterior ordinate satisfies
π(θ* | y) = E_1[α(θ, θ* | y) q(θ, θ* | y)] / E_2[α(θ*, θ | y)]
–E_1: expectation with respect to the posterior π(θ | y)
–E_2: expectation with respect to the proposal q(θ*, θ | y)
Both expectations can be estimated by Monte Carlo averages over the corresponding draws
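A sketch of this estimator on a hypothetical conjugate toy model, using a random-walk Metropolis chain; the proposal standard deviation s is an arbitrary assumed value, and the exact posterior ordinate is available for checking.

```python
import numpy as np

rng = np.random.default_rng(5)

def norm_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Assumed toy model: y ~ N(theta, 1), theta ~ N(0, 1); the posterior is
# N(y/2, 1/2), so the estimated ordinate can be checked exactly.
y, s = 1.5, 1.0            # data; random-walk proposal sd (assumed value)
theta_star = y / 2.0       # high-density point

def log_post(th):          # log posterior up to a constant
    return -0.5 * (y - th) ** 2 - 0.5 * th ** 2

def alpha(a, b):           # MH acceptance prob (symmetric proposal)
    return min(1.0, np.exp(log_post(b) - log_post(a)))

# Random-walk Metropolis chain targeting the posterior.
T, th = 20_000, 0.0
draws = np.empty(T)
for t in range(T):
    prop = th + s * rng.normal()
    if rng.random() < alpha(th, prop):
        th = prop
    draws[t] = th

# Numerator: posterior expectation of alpha(theta, theta*) q(theta, theta*).
num = np.mean([alpha(d, theta_star) * norm_pdf(theta_star, d, s ** 2)
               for d in draws])
# Denominator: expectation of alpha(theta*, theta) under q(theta*, .).
den = np.mean([alpha(theta_star, theta_star + s * rng.normal())
               for _ in range(T)])
ord_hat = num / den

ord_exact = norm_pdf(theta_star, y / 2.0, 0.5)   # posterior is N(y/2, 1/2)
print(ord_hat, ord_exact)
```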
Example: linear regression model
Two models (nonnested linear regressions)
–Model 1: y_i = α + β(x_i − x̄) + ε_i, ε_i ~ N(0, σ²)
–Model 2: y_i = γ + δ(z_i − z̄) + ε_i, ε_i ~ N(0, τ²)
Priors
For MCC
Model proposal probabilities: h(1,1) = h(1,2) = h(2,1) = h(2,2) = 0.5
For RJMCMC
Due to the similarities between the two models, dimension matching is automatically satisfied
Chib's method
Two-block Gibbs sampler
–Blocks: (regression coefficients, variance)
Results
By numerical integration, the true Bayes factor is approximately 4862 in favor of Model 2