Slide 1: MARKOV CHAIN MONTE CARLO: A Workhorse for Modern Scientific Computation
Xiao-Li Meng, Department of Statistics, Harvard University
June 2, 2015
Slide 2: Introduction
Slide 3: Applications of Monte Carlo
Physics, Chemistry, Astronomy, Biology, Environment, Engineering, Traffic, Sociology, Education, Psychology, Arts, Linguistics, History, Medical Science, Economics, Finance, Management, Policy, Military, Government, Business, …
Slide 4: Applications of Monte Carlo (a Chinese version of the previous slide, listing the same fields in Chinese)
Slide 5: Monte Carlo Integration
Suppose we want to compute I = ∫ g(x) f(x) dx, where f(x) is a probability density. If we have samples x_1, …, x_n ~ f(x), we can estimate I by Î = (1/n) Σ_{i=1}^n g(x_i).
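As a quick illustration of the estimator above, here is a minimal Python sketch; the specific choices f = N(0, 1) and g(x) = x² are assumptions made only so there is something concrete to average (the true value is then I = 1).

```python
import numpy as np

# Monte Carlo integration sketch.  Assumed for illustration:
# f = N(0, 1) and g(x) = x**2, so the true value is I = E[X^2] = 1.
rng = np.random.default_rng(0)

def g(x):
    return x ** 2

n = 100_000
x = rng.normal(0.0, 1.0, size=n)   # x_1, ..., x_n ~ f
I_hat = g(x).mean()                # (1/n) * sum_i g(x_i)
print(I_hat)                       # should be close to 1
```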
Slide 6: Monte Carlo Optimization
We want to maximize p(x). Simulate from f_τ(x) ∝ p^τ(x). As τ → ∞, the simulated draws become more and more concentrated around the maximizer of p(x). [Figure: simulated densities for τ = 1, τ = 2, and τ = 20.]
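A small sketch of the concentration effect. The bimodal p(x) and the grid-based sampler below are illustrative assumptions, standing in for whatever sampler one would actually use to draw from the tempered density f_τ ∝ p^τ.

```python
import numpy as np

# Tempering illustration: draws from a density proportional to p(x)**tau
# concentrate around the maximizer of p as tau grows.  The bimodal p and
# the grid-based sampling are assumptions made for this illustration.
rng = np.random.default_rng(1)
grid = np.linspace(-5.0, 5.0, 2001)
p = 0.7 * np.exp(-0.5 * (grid - 2.0) ** 2) + 0.3 * np.exp(-0.5 * (grid + 2.0) ** 2)

for tau in (1, 2, 20):
    w = p ** tau
    w = w / w.sum()                          # discretized density proportional to p**tau
    draws = rng.choice(grid, size=5000, p=w)
    # the spread shrinks toward the maximizer near x = 2 as tau increases
    print(tau, draws.mean(), draws.std())
```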
Slide 7: Simulating from a Distribution
What does it mean? Suppose a random variable (随机变量) X can take only two values, with P(X = 0) = 0.25 and P(X = 1) = 0.75. Simulating from the distribution of X means that we want a collection of 0's and 1's, x_1, …, x_n, such that about 25% of them are 0's and about 75% of them are 1's when n, the simulation size, is large. The {x_i, i = 1, …, n} don't have to be independent.
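In code, this tiny example produces such a collection of 0's and 1's and checks the proportions; independent Bernoulli draws are simply the easiest way to do it.

```python
import numpy as np

# Simulate n draws of X with P(X = 0) = 0.25 and P(X = 1) = 0.75,
# then check the empirical proportions of 0's and 1's.
rng = np.random.default_rng(2)
n = 10_000
x = rng.binomial(1, 0.75, size=n)        # independent 0/1 draws
print((x == 0).mean(), (x == 1).mean())  # roughly 0.25 and 0.75
```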
Slide 8: Simulating from a Complex Distribution
A continuous variable X is described by a density function f(x). The distribution can be complex because of the form of f(x) and/or the dimension of x.
Slide 9: Markov Chain Monte Carlo
Generate a chain via x^(t+1) = g(x^(t), U^(t+1)), where {U^(t), t = 1, 2, …} are independently and identically distributed. Under regularity conditions, the distribution of x^(t) converges to f(x) as t → ∞. So we can treat {x^(t), t = N_0, …, N} as an approximate sample from f(x), the stationary/limiting distribution.
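A minimal sketch of such a recursion. The AR(1) update below is an illustrative assumption, not the slide's example; its stationary distribution is N(0, 1), so the long-run draws forget the starting point.

```python
import numpy as np

# Markov chain via x^(t+1) = g(x^(t), U^(t+1)) with i.i.d. U^(t).
# Assumed g: an AR(1) update whose stationary distribution is N(0, 1).
rng = np.random.default_rng(3)
rho, N, N0 = 0.9, 20_000, 1_000

x = np.empty(N + 1)
x[0] = 50.0                                            # start far from stationarity
for t in range(N):
    u = rng.normal()                                   # U^(t+1)
    x[t + 1] = rho * x[t] + np.sqrt(1 - rho ** 2) * u  # g(x^(t), U^(t+1))

kept = x[N0:]                                          # discard the first N0 draws
print(kept.mean(), kept.var())                         # close to 0 and 1
```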
Slide 10: Gibbs Sampler
Target density: f(x, y). We know how to simulate from the conditional distributions f(x | y) and f(y | x). For the previous example, each conditional is a normal distribution N(μ, σ²), the "bell curve."
Slide 11: (figure only)
Slide 12: Statistical Inference
Point estimator: Î = (1/n) Σ_{i=1}^n g(x^(i))
Variance estimator: V̂ = (1/(n(n−1))) Σ_{i=1}^n (g(x^(i)) − Î)²
Interval estimator: Î ± 2 √V̂
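A hedged sketch of these summaries computed from simulated draws. The i.i.d.-style variance formula matches the reconstruction above; with correlated MCMC output it tends to understate the true uncertainty.

```python
import numpy as np

# Standard Monte Carlo summaries from draws g(x^(1)), ..., g(x^(n)).
# The i.i.d.-style variance below understates uncertainty for correlated
# MCMC output; it is shown only to make the three estimators concrete.
rng = np.random.default_rng(4)
g_draws = rng.normal(size=5_000) ** 2      # stand-in for the g(x^(t)) values

n = g_draws.size
I_hat = g_draws.mean()                     # point estimator
V_hat = g_draws.var(ddof=1) / n            # variance estimator of I_hat
half = 2.0 * np.sqrt(V_hat)
print(I_hat, V_hat, (I_hat - half, I_hat + half))   # interval estimator
```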
Slide 13: Gibbs Sampler (k steps)
Select an initial value (x_1^(0), x_2^(0), …, x_k^(0)). For t = 0, 1, 2, …, N−1:
Step 1: Draw x_1^(t+1) from f(x_1 | x_2^(t), x_3^(t), …, x_k^(t))
Step 2: Draw x_2^(t+1) from f(x_2 | x_1^(t+1), x_3^(t), …, x_k^(t))
…
Step k: Draw x_k^(t+1) from f(x_k | x_1^(t+1), x_2^(t+1), …, x_{k−1}^(t+1))
Output {(x_1^(t), x_2^(t), …, x_k^(t)): t = 1, 2, …, N}. Discard the first N_0 draws and use {(x_1^(t), x_2^(t), …, x_k^(t)): t = N_0 + 1, …, N} as (approximate) samples from f(x_1, x_2, …, x_k).
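A minimal sketch of the k = 2 case, assuming (purely for illustration) a bivariate normal target with correlation ρ, so that each full conditional is N(ρ · other, 1 − ρ²).

```python
import numpy as np

# Two-step Gibbs sampler, assuming a bivariate normal target with
# correlation rho: each full conditional is N(rho * other, 1 - rho**2).
rng = np.random.default_rng(5)
rho, N, N0 = 0.8, 20_000, 1_000
s = np.sqrt(1 - rho ** 2)

x1, x2 = 10.0, -10.0                      # initial value (x1^(0), x2^(0))
draws = np.empty((N, 2))
for t in range(N):
    x1 = rng.normal(rho * x2, s)          # Step 1: draw x1 from f(x1 | x2)
    x2 = rng.normal(rho * x1, s)          # Step 2: draw x2 from f(x2 | x1)
    draws[t] = (x1, x2)

kept = draws[N0:]                          # discard the first N0 draws
print(kept.mean(axis=0), np.corrcoef(kept.T)[0, 1])   # near (0, 0) and rho
```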
Slide 14: Data Augmentation
We want to simulate from f(x), but this is just the marginal distribution of f(x, y). So once we have simulations {(x^(t), y^(t)): t = 1, 2, …, N} from the joint, we also obtain draws {x^(t): t = 1, 2, …, N} from f(x).
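A short sketch of the idea. The particular joint below, y ~ Gamma(ν/2, rate ν/2) and x | y ~ N(0, 1/y), is my illustrative choice (not from the slides); its x-marginal is a Student-t distribution.

```python
import numpy as np

# Data augmentation sketch: draw from a convenient joint f(x, y) and keep
# the x-components as draws from the marginal f(x).  Assumed joint:
# y ~ Gamma(nu/2, rate = nu/2) and x | y ~ N(0, 1/y), whose x-marginal is
# Student-t with nu degrees of freedom.
rng = np.random.default_rng(6)
nu, N = 5, 50_000

y = rng.gamma(shape=nu / 2, scale=2 / nu, size=N)   # numpy scale = 1 / rate
x = rng.normal(0.0, 1.0 / np.sqrt(y))               # x^(t) given y^(t)

# {x^(t)} are draws from the t_nu marginal; its variance is nu / (nu - 2).
print(x.var(), nu / (nu - 2))
```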
Slide 15: A More Complicated Example
Slide 16: Metropolis-Hastings Algorithm
Suppose we can simulate from an approximate distribution q(z_1 | z_2). Step 0: Select z^(0). For t = 0, 1, 2, …, N−1, repeat:
Step 1: Draw z_1 from q(z_1 | z_2 = z^(t))
Step 2: Calculate the acceptance ratio r = [f(z_1) q(z^(t) | z_1)] / [f(z^(t)) q(z_1 | z^(t))]
Step 3: Set z^(t+1) = z_1 with probability min(1, r) (accept); otherwise set z^(t+1) = z^(t) (reject)
Discard the first N_0 draws.
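A minimal generic sketch of the algorithm. The bimodal two-dimensional target log_f and the helper names (mh_sample, propose, log_q) are assumptions standing in for the "more complicated example" target, not the talk's actual setup. The two implementations on the next slides just plug different proposals into this function.

```python
import numpy as np

# Generic Metropolis-Hastings sketch.  The target below (a mixture of two
# unit-variance normal bumps at (0, 0) and (4, 4)) and the function names
# are illustrative assumptions, not the example used in the talk.
rng = np.random.default_rng(7)

def log_f(z):
    a = -0.5 * np.sum(z ** 2)              # bump at (0, 0)
    b = -0.5 * np.sum((z - 4.0) ** 2)      # bump at (4, 4)
    return np.logaddexp(a, b)              # log of the (unnormalized) mixture

def mh_sample(log_target, propose, log_q, z0, N):
    """Run N M-H steps; log_q(z_to, z_from) is the log proposal density."""
    z = np.asarray(z0, dtype=float)
    out = np.empty((N, z.size))
    for t in range(N):
        z_new = propose(z)
        log_r = (log_target(z_new) + log_q(z, z_new)
                 - log_target(z) - log_q(z_new, z))
        if np.log(rng.uniform()) < log_r:  # accept with probability min(1, r)
            z = z_new
        out[t] = z                         # on rejection keep the old state
    return out
```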
Slide 17: M-H Algorithm: An Intuitive Explanation (argument given as equations on the slide)
Slide 18: M-H: A Terrible Implementation
We choose an independence proposal q(z | z_2) = q(z) = φ(x − 4) φ(y − 4), which ignores the current state.
Step 1: Draw x ~ N(4, 1), y ~ N(4, 1); denote z_1 = (x, y)
Step 2: Calculate r = [f(z_1) q(z^(t))] / [f(z^(t)) q(z_1)]
Step 3: Draw u ~ U[0, 1]; let z^(t+1) = z_1 if u ≤ r, and z^(t+1) = z^(t) otherwise.
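Plugging this independence proposal into the mh_sample sketch after slide 16 (reusing log_f and rng defined there, with the assumed bimodal target) shows why ignoring the current state is a poor choice here.

```python
import numpy as np

# The "terrible" choice: an independence proposal q(z) = N((4, 4), I) that
# ignores the current state.  Reuses mh_sample, log_f, and rng from the
# sketch after slide 16 and the assumed bimodal target defined there.
def propose_indep(z):
    return rng.normal(4.0, 1.0, size=2)          # does not look at z at all

def log_q_indep(z_to, z_from):
    return -0.5 * np.sum((z_to - 4.0) ** 2)      # log q up to a constant

draws_bad = mh_sample(log_f, propose_indep, log_q_indep, z0=(4.0, 4.0), N=20_000)
# The proposal essentially never lands near the (0, 0) bump, so the chain
# sits around (4, 4) even though the target puts half its mass elsewhere.
print(draws_bad.mean(axis=0))
```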
Slide 19: Why is it so bad?
Slide 20: M-H: A Better Implementation
Start from some arbitrary (x^(0), y^(0)).
Step 1: Draw x ~ N(x^(t), 1), y ~ N(y^(t), 1) (a "random walk" proposal)
Step 2: Denote z_1 = (x, y); calculate r = f(z_1) / f(z^(t)) (the symmetric proposal cancels in the ratio)
Step 3: Draw u ~ U[0, 1]; let z^(t+1) = z_1 if u ≤ r, and z^(t+1) = z^(t) otherwise.
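The random-walk version, again reusing mh_sample, log_f, and rng from the sketch after slide 16; because the proposal is symmetric, its q terms contribute nothing to the acceptance ratio.

```python
import numpy as np

# Random-walk proposal centered at the current state, reusing mh_sample,
# log_f, and rng from the sketch after slide 16.  The proposal is symmetric,
# so its q terms cancel in the acceptance ratio.
def propose_rw(z):
    return z + rng.normal(0.0, 1.0, size=2)      # N(z^(t), I)

def log_q_rw(z_to, z_from):
    return 0.0                                   # symmetric: cancels anyway

draws_rw = mh_sample(log_f, propose_rw, log_q_rw, z0=(4.0, 4.0), N=50_000)
# The chain now wanders between both bumps; the long-run mean is near (2, 2).
print(draws_rw.mean(axis=0))
```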
Slide 21: Much Improved!
Slide 22: Further Discussion
How large should N_0 and N be? Not an easy problem!
Key difficulty: multiple modes in unknown areas. We would like to know all (major) modes, as well as their surrounding mass, not just the global mode.
We need "automatic, hill-climbing" algorithms ⇒ the Expectation/Maximization (EM) algorithm, which can be viewed as a deterministic version of the Gibbs sampler.
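As a small, self-contained illustration of the EM idea (not taken from the talk), here is EM for a two-component normal mixture with known unit variances: the E-step computes expected memberships where a Gibbs sampler would draw them, and the M-step maximizes where Gibbs would draw parameters. The data and starting values are assumptions.

```python
import numpy as np

# EM for a two-component normal mixture with known unit variances.
# The data and starting values are assumptions; this illustrates the
# "deterministic Gibbs" analogy, not anything computed in the talk.
rng = np.random.default_rng(8)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 700)])

w, mu1, mu2 = 0.5, -1.0, 1.0                    # weight of component 1, two means
for _ in range(200):
    # E-step: expected membership of each point in component 2
    d1 = w * np.exp(-0.5 * (data - mu1) ** 2)
    d2 = (1 - w) * np.exp(-0.5 * (data - mu2) ** 2)
    r = d2 / (d1 + d2)
    # M-step: maximize the expected complete-data log-likelihood
    w = 1.0 - r.mean()
    mu1 = np.sum((1 - r) * data) / np.sum(1 - r)
    mu2 = np.sum(r * data) / np.sum(r)

print(w, mu1, mu2)    # close to 0.3, 0, and 4
```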
Slide 23: Drive/Drink Safely, Don't Become a Statistic; Go to Graduate School, Become a Statistician!