Course on Bayesian Methods in Environmental Valuation


1 Course on Bayesian Methods in Environmental Valuation
Basics (continued): Models for proportions and means
Francisco José Vázquez Polo
Miguel Ángel Negrín Hernández

2 Markov Chain Monte Carlo Methods
Goals:
- To make inference about model parameters
- To make predictions
Why simulation? The posterior distribution is often not analytically tractable.

3 Monte Carlo integration and MCMC
Monte Carlo integration:
- Draw independent samples from the required distribution
- Use sample averages to approximate expectations
Markov Chain Monte Carlo (MCMC):
- Draws samples by running a Markov chain constructed so that its limiting distribution is the joint distribution of interest
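As a concrete illustration (not from the slides), a minimal Python sketch of Monte Carlo integration: an expectation E[g(X)] is approximated by the average of g over independent draws. The function names here are illustrative choices.

```python
import random

def mc_expectation(sampler, g, n):
    """Approximate E[g(X)] by the average of g over n independent draws from X."""
    return sum(g(sampler()) for _ in range(n)) / n

random.seed(42)
# Example: E[X^2] = 1 for X ~ N(0, 1), estimated by a sample average
est = mc_expectation(lambda: random.gauss(0.0, 1.0), lambda x: x * x, 100_000)
print(round(est, 2))  # close to 1, with Monte Carlo error shrinking as n grows
```

MCMC replaces the independent draws with correlated draws from a Markov chain, but the same averaging idea applies.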

4 Markov chains A Markov chain is a sequence of random variables X0, X1, X2,… At each time t ≥ 0 the next state Xt+1 is sampled from a distribution P(Xt+1 | Xt) that depends only on the state at time t - Called “transition kernel” Under certain regularity conditions, the iterates from a Markov chain will gradually converge to draws from a unique stationary or invariant distribution i.e. chain will forget its initial state As t increases, samples points Xt will look increasingly like (correlated) samples from the stationary distribution

5 Markov chains Suppose the chain is run for N iterations (N large)
We throw away the output from the first m iterations (the “burn-in”). If the regularity conditions are met, then by the ergodic theorem we can use averages of the remaining samples to estimate posterior means.

6 Gibbs sampling: one way to construct the transition kernel
Seminal references: Geman and Geman (IEEE Trans. Pattn. Anal. Mach. Intel., 1984); Gelfand and Smith (JASA, 1990); Hastings (Biometrika, 1970); Metropolis, Rosenbluth, et al. (J. Chem. Phys., 1953)

7 Gibbs sampling: one way to construct the transition kernel
Subject to regularity conditions, the joint distribution is uniquely determined by the “full conditional distributions”
- The full conditional distribution for a model quantity is the distribution of that quantity conditional on assumed known values of all the other quantities in the model
This breaks a complicated, high-dimensional problem into a large number of simpler, low-dimensional problems

8 Gibbs sampling: one way to construct the transition kernel
Example: inference about a normal mean and variance, both unknown.
Model: x1, …, xn iid ~ N(µ, 1/h), with precision h = 1/σ²
Priors: µ ~ N(a0, 1/b0), h ~ G(n0/2, s0/2)
We want posterior means and posterior credible sets for µ and σ²

9 Gibbs sampling: one way to construct the transition kernel
Posterior distribution (Bayes’ theorem):
π(µ, h | x) ∝ h^((n0+n)/2 − 1) exp{ −(1/2)[ b0(µ − a0)² + s0 h + h Σi (xi − µ)² ] }
This is not a known distribution

10 Gibbs sampling: one way to construct the transition kernel
We can obtain the full conditional distributions:
π(µ | h, x) = π(µ, h | x) / π(h | x) = π(µ, h | x) / ∫ π(µ, h | x) dµ
π(h | µ, x) = π(µ, h | x) / π(µ | x) = π(µ, h | x) / ∫ π(µ, h | x) dh

11 Gibbs sampling: one way to construct the transition kernel
π(µ | h, x) ∝ exp{ −(1/2)(b0 + nh)(µ − (a0 b0 + h n x̄)/(b0 + nh))² }
i.e. µ | h, x ~ N( (a0 b0 + h n x̄)/(b0 + nh), 1/(b0 + nh) )
π(h | µ, x) ∝ h^((n0+n)/2 − 1) exp{ −(1/2)(s0 + Σi (xi − µ)²) h }
i.e. h | µ, x ~ G( (n0 + n)/2, (s0 + Σi (xi − µ)²)/2 )

12 Gibbs sampling: one way to construct the transition kernel
Gibbs sampler algorithm for the Normal model:
- Choose initial values µ(0), σ²(0)
- At each iteration t, generate a new value for each parameter, conditional on the most recent values of all the other parameters

13 Gibbs sampling: one way to construct the transition kernel
Step 0. Initial values: θ(0) = (µ0, h0)
Step 1. To generate the next simulation θ(1) = (µ1, h1):
- Simulate µ1 from π(µ | h = h0, x) (a Normal distribution)
- Simulate h1 from π(h | µ = µ1, x) (a Gamma distribution)
- This updates (µ0, h0) to (µ1, h1)
Step k. Update θ(k) = (µk, hk) from θ(k−1).
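The steps above can be sketched in Python (a hedged illustration, not WinBUGS code): the simulated data set, the chain length, and the weak hyperparameter values a0, b0, n0, s0 are assumptions chosen for the example; the two full conditionals are the Normal and Gamma distributions derived on the previous slide.

```python
import math
import random

random.seed(1)

# Illustrative data, assumed here: 50 draws from N(5, sd = 2)
data = [random.gauss(5.0, 2.0) for _ in range(50)]
n, xbar = len(data), sum(data) / len(data)

# Assumed weak prior hyperparameters: mu ~ N(a0, 1/b0), h ~ G(n0/2, s0/2)
a0, b0, n0, s0 = 0.0, 0.01, 0.02, 0.02

mu, h = 0.0, 1.0               # Step 0: initial values (mu0, h0)
mus, hs = [], []
for t in range(5000):
    # Draw mu from its full conditional, a Normal
    prec = b0 + n * h
    mu = random.gauss((a0 * b0 + h * n * xbar) / prec, 1.0 / math.sqrt(prec))
    # Draw h from its full conditional, a Gamma (gammavariate takes shape and SCALE)
    shape = (n0 + n) / 2.0
    rate = (s0 + sum((xi - mu) ** 2 for xi in data)) / 2.0
    h = random.gammavariate(shape, 1.0 / rate)
    mus.append(mu)
    hs.append(h)

burn = 1000                    # discard the burn-in, average the rest
post_mu = sum(mus[burn:]) / len(mus[burn:])
post_sigma2 = sum(1.0 / hh for hh in hs[burn:]) / len(hs[burn:])
print(round(post_mu, 1), round(post_sigma2, 1))
```

With a weak prior, the posterior mean of µ should land very close to the sample mean x̄, which gives a quick sanity check on the sampler.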

14 Other MCMC techniques
Metropolis-Hastings: Metropolis et al. (1953) and Hastings (1970); Tierney (1994); Chib and Greenberg (1995); Robert and Casella (1999)
Data augmentation: Tanner and Wong (1987)

15 Software: WinBUGS www.mrc-bsu.cam.ac.uk/bugs/ What is WinBUGS?
“Bayesian inference Using Gibbs Sampling”
- A general-purpose program for fitting Bayesian models
- Developed by David Spiegelhalter, Andrew Thomas, Nicky Best, and Wally Gilks at the Medical Research Council Biostatistics Unit, Institute of Public Health, Cambridge, UK
- Excellent documentation, including two volumes of examples
- Web page: www.mrc-bsu.cam.ac.uk/bugs/

16 What does WinBUGS do? It enables the user to specify a model in a simple language
It constructs the transition kernel of a Markov chain with the joint posterior as its stationary distribution, and simulates a sample path of the resulting chain
It generates random variables from standard densities using standard algorithms; WinBUGS uses the Metropolis algorithm to generate from nonstandard full conditionals

17 Topics. WinBUGS Deciding how many chains to run
Choosing initial values: do not confuse initial values with priors! Priors are part of the model; initial values are part of the computing strategy used to fit the model. Priors must not be based on the current data. The best choices of initial values are values in a high-posterior-density region of the parameter space. If the prior is not very strong, then maximum likelihood estimates (from the current data) are excellent choices of initial values, if they can be calculated.

18 Initial values Initial values are not like priors!
Choosing initial values:
- Run more than one chain, with initial values selected to give you information about sampler performance
- Initial values may be generated from the priors
- Initial values may be based on frequentist estimates

19 Initial values WinBUGS can usually generate initial values automatically, but it is often advantageous to specify even those WinBUGS can generate. Initial values must be specified for variance components.

20 Topics. WinBUGS In the simple models we have encountered so far, the MCMC sampler will converge quickly even with a poor choice of initial values. In more complicated models, choosing initial values in low-posterior-density regions may make the sampler take a huge number of iterations to finally start drawing from a good approximation to the true posterior. Assessing whether the sampler has converged: how many initial iterations need to be discarded so that the remaining samples are drawn from a distribution close enough to the true stationary distribution to be usable for estimation and inference?

21 Convergence assessment
How many initial iterations need to be discarded so that the remaining samples are drawn from a distribution close enough to the true stationary distribution to be usable for estimation and inference?
θ(0), θ(1), …, θ(m), θ(m+1), …, θ(N): the first m draws are the “burn-in” sample

22 Topics. WinBUGS Once we are drawing from the right distribution, how many samples are needed to provide the desired precision in estimation and inference? Choosing model parameterizations and MCMC algorithms that will lead to convergence in a reasonable amount of time.
Monte Carlo error: for (approximately) independent draws, the variance of the sample mean is about s²/(N − m), where s² is the sample variance of the retained draws.

23 Monte Carlo errors On “stats” output
Similar to the standard error of the mean, but adjusted for the autocorrelated sample
It gets smaller as more iterations are run
Use it to guide your decision about how many iterations to run after burn-in is done
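One common way to obtain an autocorrelation-adjusted Monte Carlo error, similar in spirit to the one WinBUGS reports on its “stats” output, is the batch-means method. This Python sketch is illustrative (the AR(1) chain standing in for sampler output and the batch count of 30 are assumptions): on a positively correlated chain the batch-means error exceeds the naive s/√N, which understates the true uncertainty.

```python
import math
import random

def mc_error_batch_means(chain, n_batches=30):
    """Batch-means estimate of the Monte Carlo standard error of the chain mean.
    Unlike the naive s/sqrt(N), it allows for autocorrelation in the output."""
    size = len(chain) // n_batches
    means = [sum(chain[i * size:(i + 1) * size]) / size for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((b - grand) ** 2 for b in means) / (n_batches - 1)
    return math.sqrt(var / n_batches)

random.seed(0)
# An AR(1) chain: positively autocorrelated, like typical MCMC output
x, chain = 0.0, []
for _ in range(30000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    chain.append(x)

m = sum(chain) / len(chain)
s2 = sum((c - m) ** 2 for c in chain) / (len(chain) - 1)
naive = math.sqrt(s2 / len(chain))
adjusted = mc_error_batch_means(chain)
# The adjusted error is larger: the naive formula understates it for correlated draws
print(adjusted > naive)
```

Batch sizes must be much longer than the autocorrelation time for the batch means to be roughly independent.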

24 Monte Carlo errors Decide whether to:
- Discard some initial burn-in iterations and use the remaining sampler output for inference, or
- Take some corrective action: run more iterations, or change the parameterization of the model

25 Using MCMC for Bayesian inference
1. Specify a Bayesian model
2. Construct a Markov chain whose target distribution is the joint posterior distribution of interest
3. Run one or more chains for as long as you can stand it
4. Assess convergence (burn-in, Monte Carlo error)
5. Inference

26 Using MCMC for Bayesian inference
History plots: early graphical methods of convergence assessment
- Trajectories of sampler output for each model unknown
- Can quickly reveal failure to reach stationarity
- Can give qualitative information about sampler behaviour
- Cannot confirm that convergence has occurred

27 Monitor
- All types of model parameters, not only parameters of substantive interest
- Sample paths, graphically
- Autocorrelations
- Cross-correlations between parameters
Apply more than one diagnostic, including one or more that use information about the specific model.

28 Conclusions MCMC methods have enabled the fitting of complex, realistic models. Use of MCMC methods requires careful attention to:
- Model parameterization
- MCMC sampler algorithms
- Choice of initial values
- Convergence assessment
- Output analysis
Ongoing research in theoretical verification of convergence, MCMC acceleration, and exact sampling holds great promise.

29 WinBUGS

30 WinBUGS Where to get the WinBUGS software
From the web site of the MRC Biostatistics Unit in Cambridge. The BUGS home page is www.mrc-bsu.cam.ac.uk/bugs/. Once you have downloaded the files, you need to register with the BUGS project for a key that will let you use the full version.
Manuals: the WinBUGS manual is available online. WinBUGS examples, volumes 1 and 2, with explanations, are also available online.

31 Specifying the model The first stage in model fitting is to specify your model in the BUGS language. This can be done either graphically (with code then written from your graph) or directly in the BUGS language. Graphically – DOODLE

32 Specifying the model Starting WinBUGS
Click on the WinBUGS icon or program file. You will get a message about the license conditions that you can read and then close. Now explore the menus:
HELP – manuals and examples
FILE – open existing files, or start a new file to program your own example in the BUGS language
DOODLE – NEW is what you need to start a file for a new model specified in graphical format

33 Specifying the model Example – a simple proportion. Inference is required for the proportion π of people who are willing to pay a certain amount to visit a park.
x = (0,0,0,0,1,0,0,0,1,1,1,0,1,1,1) (1: yes, 0: no), sample size n = 15
x1, x2, …, xn iid ~ Bin(1, π), i.e. Bernoulli(π)
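Because the Beta prior is conjugate for a Bernoulli proportion, the exact posterior can be computed by hand and used as a check on the WinBUGS output. A small Python sketch, using the alpha = 1, beta = 1 values that appear in the data list on a later slide:

```python
# Conjugacy check (a hand computation, not WinBUGS): with a Beta(alpha, beta) prior,
# the posterior for pi given Bernoulli data is Beta(alpha + sum(x), beta + n - sum(x)).
x = [0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1]
alpha, beta = 1, 1                        # uniform Beta(1, 1) prior
n, s = len(x), sum(x)                     # n = 15 responses, s = 7 "yes"
a_post, b_post = alpha + s, beta + n - s  # posterior is Beta(8, 9)
post_mean = a_post / (a_post + b_post)    # exact posterior mean
print(round(post_mean, 4))                # 0.4706
```

The MCMC estimates from WinBUGS should agree with these exact values up to Monte Carlo error.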

34 Example – a simple proportion
Starting WinBUGS: DOODLE – NEW, press ‘ok’

35 Example – a simple proportion Basic instructions
- Click the left mouse button to create a “doodle”
- Press CTRL + Del to delete a “doodle”
- To create a “plate”, press CTRL + click
- Press CTRL + Del to delete a “plate”

36 Example – a simple proportion
Nodes: constants, stochastic nodes, deterministic nodes
Relationships between nodes (arrows): stochastic dependence, logical function
To create an arrow, highlight the “destination” node and press CTRL + click on the “origin” node. To delete the arrow, repeat the process.

37 Example – a simple proportion
We have to create a “node” for each variable or constant included in the model:
- An oval for stochastic nodes (we choose the density and its parameters)
- A rectangle for constants
- A plate for the node xi

38 Example – a simple proportion
Now we add the arrows that represent the relations between nodes. Copy and paste the doodle into a new file.

39 Example – a simple proportion
Data: now you will have to enter the data. The following list format will do (notice that WinBUGS is case sensitive):
list(n = 15, alpha = 1, beta = 1, x = c(0, 0, 0, 0, 1, ...))
Initial values (optional; WinBUGS can generate them):
list(phi = 0.1)

40 Example – a simple proportion
Specifying the model: the first stage in model fitting is to specify your model in the BUGS language (doodle or code). Then go to the Model menu and you will get a specification tool with buttons. Click on the check model button. Any error messages will appear in the grey bar at the bottom of the screen. If all is well, you will get the message “model is syntactically correct”. You will also see that the compile and load data buttons have their text darkened. You are ready for the next step.

41 Example – a simple proportion
Load data: data can be entered either in list format or in rectangular format (columns). Once you have typed in your data, highlight the word “list” and use the load data button on the specification tool to read the data.
Compile: you are now ready to compile your model using the compile button on the specification tool. Again, check the error messages. If all is well you will get a “model compiled” message.

42 Example – a simple proportion
Initial values: all nodes that are not given as data, or derived from other nodes, need to be given initial values. This is done with the specification tool, either by setting them explicitly from values you type in (set inits button) or by generating a random value from the prior (gen inits button). WinBUGS 1.4 allows you to run more than one chain at the same time; see the specification tool. If you want more chains you will need to set different initial values for each one.

43 Example – a simple proportion
Generating the samples – updating: you are now ready to generate samples and examine the simulated output. To start the sampler, go to Model, then Update, and you will get an updating tool. You can select how many updates you get for each press of the update button, and how often the screen is refreshed to show how sampling is proceeding. Updating does the sampling but does not store any values. In MCMC methods you usually want to run the sampler for some time, to be sure it is stable, before you start storing values.

44 Example – a simple proportion
Storing values and summarising results: after an initial run of updates, go to the Inference menu, then Samples, and you will get a sample monitoring tool. Start by entering the parameters you want to monitor in the node box, and for each one press set. If you also press trace you will see a plot of the samples as they are generated. Now go back to the updating tool and generate some samples.

45 Example – a simple proportion
Now go back to the sampling tool to look at the various ways of displaying your results or summary statistics. The most useful buttons are:
history – a plot of all the samples you have generated
density – a kernel density estimate of the posterior
stats – summary statistics including the mean, s.d., median and percentiles, which can be set with the panel on the right; these can be used for credible intervals. You will also get a Monte Carlo error for the mean, indicating how well the posterior mean has been estimated from your number of samples
autoC – a plot of the autocorrelation in the chains
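The quantities behind the stats button can be reproduced outside WinBUGS. A small Python sketch (the Beta(8, 9) draws stand in for stored sampler output, an assumption for illustration; the function name is our own):

```python
import random

def summarize(samples):
    """Posterior summaries analogous to the WinBUGS 'stats' button:
    mean, s.d., median and a 95% equal-tail credible interval from percentiles."""
    s = sorted(samples)
    n = len(s)
    mean = sum(s) / n
    sd = (sum((v - mean) ** 2 for v in s) / (n - 1)) ** 0.5
    q = lambda p: s[min(n - 1, int(p * n))]   # simple empirical percentile
    return {"mean": mean, "sd": sd, "median": q(0.5),
            "2.5%": q(0.025), "97.5%": q(0.975)}

random.seed(3)
draws = [random.betavariate(8, 9) for _ in range(20000)]  # stand-in sampler output
stats = summarize(draws)
print(round(stats["mean"], 2))  # close to the exact Beta(8, 9) mean 8/17 ≈ 0.47
```

The (2.5%, 97.5%) percentile pair gives the 95% credible interval reported by WinBUGS.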

46 Example – a simple proportion
Results, clicking on “stats”: mean = (exact mean = ), median = , 95% Bayesian credible interval = (0.246, ), MC error =
Clicking on “history”: (all the chain)
Clicking on “density”:

47 Example – a simple proportion
Exercise: repeat the analysis with new simulations

48 Example – a simple proportion
Code: Doodle – Write Code

