1 Hidden Markov Models
Lirong Xia
Tue, March 28, 2014

2 The “Markov”s we have learned so far
Markov decision process (MDP)
–transition probability only depends on (state, action) in the previous step
Reinforcement learning
–unknown probabilities/rewards
Markov models
Hidden Markov models

3 Markov Models
A Markov model is a chain-structured Bayes’ net (BN)
–conditional probabilities are the same at every time step (stationarity)
–the value of X at a given time is called the state
–as a BN: a chain X_1 → X_2 → X_3 → …
–parameters: the initial distribution p(X_1) and the transition probabilities p(X | X_{-1})

4 Computing the stationary distribution
p(X = sun) = p(X = sun | X_{-1} = sun) p(X = sun) + p(X = sun | X_{-1} = rain) p(X = rain)
p(X = rain) = p(X = rain | X_{-1} = sun) p(X = sun) + p(X = rain | X_{-1} = rain) p(X = rain)
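
The fixed point of these two equations can also be found numerically by iterating the update until it converges. A minimal Python sketch, assuming illustrative transition probabilities p(sun | sun) = 0.9 and p(sun | rain) = 0.3 (the slide gives no numbers):

```python
# Minimal sketch: iterate the stationary-distribution update to a fixed point.
# The transition probabilities below are assumed for illustration only.
P_SUN_GIVEN_SUN = 0.9    # assumed p(X = sun | X_{-1} = sun)
P_SUN_GIVEN_RAIN = 0.3   # assumed p(X = sun | X_{-1} = rain)

p_sun = 0.5  # arbitrary starting guess
for _ in range(100):
    # p(sun) = p(sun|sun) p(sun) + p(sun|rain) p(rain)
    p_sun = P_SUN_GIVEN_SUN * p_sun + P_SUN_GIVEN_RAIN * (1.0 - p_sun)

print(f"stationary p(sun) = {p_sun:.4f}, p(rain) = {1.0 - p_sun:.4f}")
# converges to p(sun) = 0.75 for these assumed numbers
```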

5 Hidden Markov Models
Hidden Markov models (HMMs)
–underlying Markov chain over states X
–effects (observations) at each time step
–as a Bayes’ net: a chain X_1 → X_2 → X_3 → …, with an emission E_t hanging off each X_t

6 Example
An HMM is defined by:
–initial distribution: p(X_1)
–transitions: p(X | X_{-1})
–emissions: p(E | X)
Transition model p(R_t | R_{t-1}):
–R_{t-1} = t: p(R_t = t) = 0.7
–R_{t-1} = f: p(R_t = t) = 0.3
Emission model p(U_t | R_t):
–R_t = t: p(U_t = t) = 0.9
–R_t = f: p(U_t = t) = 0.2

7 Filtering / Monitoring
Filtering, or monitoring, is the task of tracking the belief state B(X) (a distribution over states) over time:
B(X_t) = p(X_t | e_{1:t})
We start with B(X) in an initial setting, usually uniform
As time passes, or as we get observations, we update B(X)

8 Example: Robot Localization
Sensor model: never more than 1 mistake
Motion model: may fail to execute an action with small probability

9 HMM weather example: a question
[transition diagram over states s (sun), c (cloudy), r (rain); the edge probabilities are not recoverable from the transcript]
Emissions: p(w | s) = 0.1, p(w | c) = 0.3, p(w | r) = 0.8
You have been stuck in the lab for three days (!)
On those days, your labmate was dry, wet, wet, respectively
What is the probability that it is now raining outside?
p(X_3 = r | E_1 = d, E_2 = w, E_3 = w)

10 Filtering
Computationally efficient approach: first compute p(X_1 = i, E_1 = d) for all states i
p(X_t, e_{1:t}) = p(e_t | X_t) Σ_{x_{t-1}} p(x_{t-1}, e_{1:t-1}) p(X_t | x_{t-1})
[same weather transition diagram and emission probabilities as on slide 9]
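
The recursion maps directly onto code. A generic sketch for any discrete HMM (the dictionary representation and names are mine, not from the slide); the base case is prev[i] = p(X_1 = i) · p(e_1 | X_1 = i), matching the slide’s p(X_1 = i, E_1 = d):

```python
# Sketch of one step of the filtering recursion:
# p(X_t, e_{1:t}) = p(e_t | X_t) * sum_{x_{t-1}} p(x_{t-1}, e_{1:t-1}) p(X_t | x_{t-1})
def forward_step(prev, trans, emit, obs):
    """One filtering update.
    prev[i]      = p(X_{t-1} = i, e_{1:t-1})
    trans[i][j]  = p(X_t = j | X_{t-1} = i)
    emit[j][obs] = p(e_t = obs | X_t = j)
    returns a dict mapping each state j to p(X_t = j, e_{1:t})
    """
    states = list(prev)
    return {j: emit[j][obs] * sum(prev[i] * trans[i][j] for i in states)
            for j in states}
```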

11 Today
Formal algorithm for filtering
–elapse of time: compute p(X_{t+1} | e_{1:t}) from p(X_t | e_{1:t})
–observe: compute p(X_{t+1} | e_{1:t+1}) from p(X_{t+1} | e_{1:t})
–renormalization
Introduction to sampling

12 Inference Recap: Simple Cases

13 Elapse of Time
Assume we have the current belief p(X_{t-1} | evidence to t-1):
B(X_{t-1}) = p(X_{t-1} | e_{1:t-1})
Then, after one time step passes:
p(X_t | e_{1:t-1}) = Σ_{x_{t-1}} p(X_t | x_{t-1}) p(x_{t-1} | e_{1:t-1})
Or, compactly:
B’(X_t) = Σ_{x_{t-1}} p(X_t | x_{t-1}) B(x_{t-1})
With the “B” notation, be careful about
–which time step t the belief is about
–what evidence it includes
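
In code, the time-elapse update is one pass over the transition model. A sketch in the same dictionary style as before (names are mine):

```python
def elapse_time(belief, trans):
    """B'(X_t) = sum_{x_{t-1}} p(X_t | x_{t-1}) B(x_{t-1}).
    belief[i]   = B(X_{t-1} = i)
    trans[i][j] = p(X_t = j | X_{t-1} = i)
    """
    states = list(belief)
    return {j: sum(trans[i][j] * belief[i] for i in states)
            for j in states}
```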

14 Observe and renormalize
Assume we have the current belief p(X_t | previous evidence):
B’(X_t) = p(X_t | e_{1:t-1})
Then:
p(X_t | e_{1:t}) ∝ p(e_t | X_t) p(X_t | e_{1:t-1})
Or:
B(X_t) ∝ p(e_t | X_t) B’(X_t)
Basic idea: beliefs are reweighted by the likelihood of the evidence
Need to renormalize B(X_t)
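
A matching sketch of the observe-and-renormalize step (again, the representation is mine):

```python
def observe(belief_prime, emit, obs):
    """B(X_t) proportional to p(e_t | X_t) B'(X_t), then renormalized.
    belief_prime[i] = B'(X_t = i)
    emit[i][obs]    = p(e_t = obs | X_t = i)
    """
    unnorm = {i: emit[i][obs] * belief_prime[i] for i in belief_prime}
    total = sum(unnorm.values())          # renormalization constant
    return {i: p / total for i, p in unnorm.items()}
```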

15 Recap: The Forward Algorithm
We are given evidence at each time step and want to know p(X_t | e_{1:t})
We can derive the following update:
p(x_t, e_{1:t}) = p(e_t | x_t) Σ_{x_{t-1}} p(x_t | x_{t-1}) p(x_{t-1}, e_{1:t-1})
We can normalize as we go if we want p(x | e) at each time step, or just once at the end

16 Example HMM
Transition model p(R_t | R_{t-1}):
–R_{t-1} = t: p(R_t = t) = 0.7
–R_{t-1} = f: p(R_t = t) = 0.3
Emission model p(U_t | R_t):
–R_t = t: p(U_t = t) = 0.9
–R_t = f: p(U_t = t) = 0.2

17 Observe and time elapse
Observe
Time elapse and renormalize
Want to know: B(Rain_2) = p(Rain_2 | +u_1, +u_2)
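
Putting the two updates together on the umbrella HMM of slide 16, a sketch reusing elapse_time and observe from the previous slides. The uniform prior p(R_1) = <0.5, 0.5> is an assumption (the slide does not state it, but it is the standard choice for this example):

```python
# Umbrella model from slide 16; state True = rain, observation True = umbrella.
trans = {True:  {True: 0.7, False: 0.3},
         False: {True: 0.3, False: 0.7}}
emit  = {True:  {True: 0.9, False: 0.1},
         False: {True: 0.2, False: 0.8}}

belief = {True: 0.5, False: 0.5}        # assumed prior p(R_1)
belief = observe(belief, emit, True)    # see +u_1: B(R_1) ~ <0.818, 0.182>
belief = elapse_time(belief, trans)     # advance to t = 2
belief = observe(belief, emit, True)    # see +u_2: B(R_2) ~ <0.883, 0.117>
print(belief)
```

With these numbers, B(Rain_2 = t) works out to about 0.883.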

18 Online Belief Updates
Each time step, we start with p(X_{t-1} | previous evidence):
–elapse of time: B’(X_t) = Σ_{x_{t-1}} p(X_t | x_{t-1}) B(x_{t-1})
–observe: B(X_t) ∝ p(e_t | X_t) B’(X_t)
–renormalize B(X_t)
Problem: space is |X| and time is |X|^2 per time step
–what if the state is continuous?

19 Real-world robot localization
Continuous probability space

20 Sampling

21 Approximate Inference
Sampling is a hot topic in machine learning, and it’s really simple
Basic idea:
–draw N samples from a sampling distribution S
–compute an approximate posterior probability
–show this converges to the true probability P
Why sample?
–learning: get samples from a distribution you don’t know
–inference: getting a sample is faster than computing the right answer (e.g. with variable elimination)

22 Prior Sampling
p(C): p(+c) = 0.5, p(-c) = 0.5
p(S | C): p(+s | +c) = 0.1, p(+s | -c) = 0.5
p(R | C): p(+r | +c) = 0.8, p(+r | -c) = 0.2
p(W | S, R):
–p(+w | +s, +r) = 0.99
–p(+w | +s, -r) = 0.90
–p(+w | -s, +r) = 0.90
–p(+w | -s, -r) = 0.01
Samples:
+c, -s, +r, +w
-c, +s, -r, +w
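
A sketch of prior sampling for this network, using the CPT numbers above; the function name and sample count are mine:

```python
import random

def sample_prior():
    """Draw one sample (c, s, r, w) in topological order from the CPTs above."""
    c = random.random() < 0.5                    # p(+c)
    s = random.random() < (0.1 if c else 0.5)    # p(+s | c)
    r = random.random() < (0.8 if c else 0.2)    # p(+r | c)
    p_w = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.01}[(s, r)]
    w = random.random() < p_w                    # p(+w | s, r)
    return c, s, r, w

samples = [sample_prior() for _ in range(10000)]
print(sum(w for _, _, _, w in samples) / len(samples))  # estimate of p(+w)
```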

23 Prior Sampling (w/o evidence)
This process generates samples with probability
S_PS(x_1, …, x_n) = Π_i p(x_i | parents(X_i)) = p(x_1, …, x_n)
i.e. the BN’s joint probability
Let N_PS(x_1, …, x_n) be the number of samples of an event; then
lim_{N→∞} N_PS(x_1, …, x_n) / N = p(x_1, …, x_n)
I.e., the sampling procedure is consistent

24 Example
We’ll get a bunch of samples from the BN:
+c, -s, +r, +w
+c, +s, +r, +w
-c, +s, +r, -w
+c, -s, +r, +w
-c, -s, -r, +w
If we want p(W):
–we have counts <+w: 4, -w: 1>
–normalize to get p(W) ≈ <+w: 0.8, -w: 0.2>
–this will get closer to the true distribution with more samples
–can estimate anything else, too
–what about p(C | +w)? p(C | +r, +w)? p(C | -r, -w)?
–fast: can use fewer samples if less time (what’s the drawback?)

25 Rejection Sampling
Let’s say we want p(C)
–no point keeping all samples around
–just tally counts of C as we go
Let’s say we want p(C | +s)
–same thing: tally C outcomes, but ignore (reject) samples that don’t have S = +s
–this is called rejection sampling
–it is also consistent for conditional probabilities (i.e., correct in the limit)
Samples:
+c, -s, +r, +w
+c, +s, +r, +w
-c, +s, +r, -w
+c, -s, +r, +w
-c, -s, -r, +w
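
A sketch of rejection sampling for p(C | +s), reusing the sample_prior function from the prior-sampling sketch above (the estimator name is mine):

```python
def rejection_sample_p_c_given_s(n=10000):
    """Estimate p(+c | +s): draw prior samples, reject those with S = -s."""
    kept = [c for c, s, r, w in (sample_prior() for _ in range(n)) if s]
    return sum(kept) / len(kept)   # fraction of accepted samples with C = +c
```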

26 Likelihood Weighting
Problem with rejection sampling:
–if evidence is unlikely, you reject a lot of samples
–you don’t exploit your evidence as you sample
–consider p(B | +a): samples like -b, -a / +b, +a / -b, +a / +b, +a mostly get rejected
Idea: fix evidence variables and sample the rest
Problem: the sample distribution is not consistent!
Solution: weight each sample by the probability of the evidence given its parents

27 Likelihood Weighting
[same network and CPTs as on slide 22]
Samples:
+c, +s, +r, +w
……

28 Likelihood Weighting
Sampling distribution if z is sampled and e is fixed evidence:
S_WS(z, e) = Π_i p(z_i | parents(Z_i))
Now, samples have weights:
w(z, e) = Π_i p(e_i | parents(E_i))
Together, the weighted sampling distribution is consistent:
S_WS(z, e) · w(z, e) = p(z, e)
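
A sketch of likelihood weighting on the network of slide 22 for the query p(C | +s, +w): the evidence variables S and W are fixed to true, and each sample is weighted by p(+s | c) · p(+w | +s, r). The query choice and function names are mine; the numbers come from the slide-22 CPTs:

```python
import random

def weighted_sample():
    """One likelihood-weighted sample with evidence S = +s, W = +w fixed."""
    c = random.random() < 0.5                     # C sampled as usual
    weight = 0.1 if c else 0.5                    # p(+s | c): S fixed to +s
    r = random.random() < (0.8 if c else 0.2)     # R sampled as usual
    weight *= 0.99 if r else 0.90                 # p(+w | +s, r): W fixed to +w
    return c, weight

def lw_estimate_p_c(n=10000):
    pairs = [weighted_sample() for _ in range(n)]
    total = sum(w for _, w in pairs)
    return sum(w for c, w in pairs if c) / total  # weighted fraction with +c

print(lw_estimate_p_c())   # converges to p(+c | +s, +w) ~ 0.175
```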

29 Ghostbusters HMM
–p(X_1) = uniform (1/9 for each of the nine grid cells)
–p(X | X’) = usually move clockwise, but sometimes move in a random direction or stay in place
–p(R_ij | X) = same sensor model as before: red means close, green means far away
[grid figures showing p(X_1) and an example p(X | X’ = cell) are not recoverable from the transcript]

30 Example: Passage of Time
As time passes, uncertainty “accumulates”
Transition model: ghosts usually go clockwise
[belief grids at T = 1, T = 2, T = 5]

31 Example: Observation
As we get observations, beliefs get reweighted and uncertainty “decreases”
[belief grids before and after the observation]

