1 How many iterations in the Gibbs sampler? Adrian E. Raftery and Steven Lewis (September, 1991) Duke University Machine Learning Group Presented by Iulian Pruteanu 11/17/2006

2 Outline Introduction How many iterations to estimate a posterior quantile? Extensions Examples

3 Introduction There is no guarantee, no matter how long you run the MCMC algorithm, that the chain has converged to the posterior distribution. Diagnostic statistics can identify problems with convergence but cannot “prove” that convergence has occurred. One long run or many short runs? The Gibbs sampler can be extremely computationally demanding, even for relatively small-scale problems.

4 One long run or many short runs? Short runs: (a) choose a starting point; (b) run the Gibbs sampler for $n$ iterations and store only the last iterate; (c) return to (a). One long run may well be more efficient: the starting point of each successive subsequence of length $n$ is closer to a draw from the stationary distribution than in the short-runs case. It is still important to use several different starting points, since we cannot tell from a single run whether it has converged.

5 Introduction The Raftery-Lewis test: 1. Specify a particular quantile $q$ of the distribution of interest, an accuracy $\pm r$ for the quantile, and a probability $s$ of achieving that accuracy. 2. The test breaks the chain (coming from the Gibbs sampler) into a (1,0) sequence, from which it builds a two-state Markov chain. 3. The test uses this sequence to estimate the transition probabilities, and then the number of additional burn-in iterations and the total chain length required to achieve the prescribed level of accuracy.

6 The Raftery-Lewis test We want to estimate $P(U \le u \mid \text{data})$ to within $\pm r$ with probability $s$, where $P(U \le u \mid \text{data})$ is the posterior quantity we are looking to estimate. We calculate $Z_t = \delta(U_t \le u)$ for each iteration $t$ and then form the thinned sequence $Z_t^{(k)} = Z_{1+(t-1)k}$. The problem is to determine $M$ (initial burn-in iterations), $N$ (further iterations) and $k$ (step size). $\{Z_t\}$ is a binary 0-1 process derived from a Markov chain but is not itself a Markov chain, so we form the new process $\{Z_t^{(k)}\}$, where $k$ is large enough that $\{Z_t^{(k)}\}$ is approximately Markov. Assuming that $\{Z_t^{(k)}\}$ is indeed a Markov chain, we determine $M$, $N$ and $k$ needed to approach stationarity.
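To make the construction concrete, here is a minimal sketch in Python/numpy, assuming the Gibbs output for $U$ is available as a one-dimensional array of draws; the helper names (binarize, transition_probs) are mine, not the paper's:

import numpy as np

def binarize(chain, q):
    # Form the 0-1 process Z_t = 1{U_t <= u}, taking u as the
    # empirical q-quantile of the sampled chain (an assumption;
    # the paper works with a user-specified posterior quantile).
    u = np.quantile(chain, q)
    return (chain <= u).astype(int)

def transition_probs(z, k=1):
    # Estimate alpha = P(0 -> 1) and beta = P(1 -> 0) for the
    # thinned sequence Z^{(k)}_t = Z_{1+(t-1)k}, i.e. z[::k].
    zk = z[::k]
    frm, to = zk[:-1], zk[1:]
    alpha = np.sum((frm == 0) & (to == 1)) / max(np.sum(frm == 0), 1)
    beta = np.sum((frm == 1) & (to == 0)) / max(np.sum(frm == 1), 1)
    return alpha, beta

(In the full method, $k$ is chosen as the smallest value for which a first-order Markov model fits the thinned sequence better than a second-order one, e.g. by BIC; that model-comparison step is omitted from this sketch.)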

7 The Raftery-Lewis test Let $P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}$ be the transition matrix for $\{Z_t^{(k)}\}$. The equilibrium distribution is then $\pi = (\pi_0, \pi_1) = (\beta, \alpha)/(\alpha+\beta)$, and the $l$-step transition matrix is $P^l = \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} + \frac{\lambda^l}{\alpha+\beta}\begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix}$, where $\lambda = 1-\alpha-\beta$.
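This closed form is easy to sanity-check numerically; a small sketch with illustrative values of $\alpha$ and $\beta$ (assumptions, not from the paper):

import numpy as np

alpha, beta, l = 0.1, 0.3, 5  # illustrative values
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])
lam = 1 - alpha - beta
equilibrium = np.array([[beta, alpha],
                        [beta, alpha]]) / (alpha + beta)
transient = lam**l * np.array([[alpha, -alpha],
                               [-beta, beta]]) / (alpha + beta)
# The l-step matrix equals the equilibrium part plus a term
# that decays geometrically at rate lambda.
assert np.allclose(np.linalg.matrix_power(P, l), equilibrium + transient)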

8 The Raftery-Lewis test We require that $P(Z_m^{(k)} = i \mid Z_0^{(k)} = j)$ be within $\varepsilon$ of $\pi_i$ for $i, j \in \{0, 1\}$. From the form of $P^l$ above, this holds when $\frac{\lambda^{m^*} \max(\alpha, \beta)}{\alpha+\beta} \le \varepsilon$, i.e. when $m^* = \frac{\log\left(\frac{\varepsilon(\alpha+\beta)}{\max(\alpha,\beta)}\right)}{\log \lambda}$; thus the required burn-in is $M = m^* k$ iterations.
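A hedged translation of this burn-in formula into Python (the function name and the default $\varepsilon$ are mine; abs(lam) guards the case $\lambda < 0$, where $|\lambda|^m$ still bounds the decaying term):

import numpy as np

def burn_in(alpha, beta, k=1, eps=0.001):
    # Smallest m with |lambda|^m * max(alpha, beta)/(alpha + beta) <= eps,
    # scaled by the thinning interval k to give M in original iterations.
    lam = 1.0 - alpha - beta
    m_star = np.log(eps * (alpha + beta) / max(alpha, beta)) / np.log(abs(lam))
    return int(np.ceil(m_star)) * k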

9 The Raftery-Lewis test The sample mean of the process $\{Z_t^{(k)}\}$ is $\bar{Z}_n^{(k)} = \frac{1}{n}\sum_{t=1}^{n} Z_t^{(k)}$, which approximately follows a normal distribution (by the central limit theorem) with mean $q$ and variance $\frac{1}{n}\,\frac{(2-\alpha-\beta)\alpha\beta}{(\alpha+\beta)^3}$. So the requirement $P(q - r \le \bar{Z}_n^{(k)} \le q + r) = s$ will be satisfied if $n^* = \frac{(2-\alpha-\beta)\alpha\beta}{(\alpha+\beta)^3}\left(\frac{\Phi^{-1}\left(\frac{s+1}{2}\right)}{r}\right)^2$, thus $N = n^* k$. The initial number of iterations needed to estimate $\alpha$ and $\beta$ (treating the draws as independent) is $N_{\min} = q(1-q)\left(\frac{\Phi^{-1}\left(\frac{s+1}{2}\right)}{r}\right)^2$.
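These two formulas translate directly into code; a sketch using scipy's normal quantile function (function names are mine):

import numpy as np
from scipy.stats import norm

def run_length(alpha, beta, k=1, r=0.005, s=0.95):
    # n* from the asymptotic variance of the binary chain; N = n* k.
    phi = norm.ppf(0.5 * (s + 1.0))
    n_star = ((2.0 - alpha - beta) * alpha * beta
              / (alpha + beta) ** 3 * (phi / r) ** 2)
    return int(np.ceil(n_star)) * k

def n_min(q=0.025, r=0.005, s=0.95):
    # Pilot-run length assuming independent draws: q(1-q)(phi/r)^2.
    # The defaults shown give about 3,746 iterations.
    phi = norm.ppf(0.5 * (s + 1.0))
    return int(np.ceil(q * (1.0 - q) * (phi / r) ** 2))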

10 Extensions Several quantiles: run the Gibbs sampler for $N_{\min}$ iterations, apply the method to each quantile of interest, and then use the maximum values of $M$, $N$ and $k$. Independent iterates: when it is much more expensive to analyze a Gibbs iterate than to simulate it, it is desirable to have approximately independent Gibbs iterates (obtained by making $k$ big enough).

11 Examples The method was applied to both simulated and real examples. The results are given for $q = 0.025$, $r = 0.005$ and $s = 0.95$.
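Purely as an illustration (not one of the paper's examples), the sketches above can be chained together on a simulated AR(1) series standing in for Gibbs output; this assumes the helper functions defined earlier are in scope, and the printed numbers are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
n, rho = 20000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):  # autocorrelated stand-in for a Gibbs chain
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

z = binarize(x, q=0.025)
alpha, beta = transition_probs(z, k=1)
M = burn_in(alpha, beta, k=1)
N = run_length(alpha, beta, k=1)
print(f"alpha={alpha:.4f}, beta={beta:.4f}, M={M}, N={N}")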

12 Discussion For ‘nice’ posterior distributions, the required accuracy can be achieved by running the sampler for about 5,000 iterations and using all the iterates. When the posterior is not ‘nice’, the required number can be much greater. The required number of iterations can differ dramatically between problems, and even between different quantities of interest within the same problem.

13 References Billingsley, P. (1968). Convergence of Probability Measures. New York: Wiley. Bishop, Y. M. M., Fienberg, S. E. and Holland, P. W. (1975). Discrete Multivariate Analysis: Theory and Practice. Cambridge, MA: MIT Press. Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398-409.

